Ask ChatGPT or any other LLM to give you ten random numbers between 0 and 9, and it will give you each number once (most of the time). At most, one of the digits may appear twice in my experience.
Actually, when I just verified it, I got these:
Prompt: "Give me ten random numbers between 0 and 9."
> 3, 7, 1, 9, 0, 4, 6, 2, 8, 5 (ChatGPT, 5.3 Instant)
> 3, 7, 1, 8, 4, 0, 6, 2, 9, 5 (Claude - Opus 4.6, Extended Thinking)
These look really random.
Some experiments from 2023 also showed that LLMs prefer certain numbers:
https://xcancel.com/RaphaelWimmer/status/1680290408541179906
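That kind of preference experiment is easy to reproduce yourself. A minimal sketch below; the `ask_model` function is a hypothetical stand-in for whatever chat API you actually call, and here it just simulates a model with a bias toward 42 so the tally has something to show:

```python
import random
from collections import Counter

def ask_model(prompt: str) -> int:
    # Hypothetical stand-in for a real chat-completion call.
    # Simulates a model with a mild bias toward 42.
    return 42 if random.random() < 0.3 else random.randint(1, 100)

def preference_tally(trials: int = 1000):
    """Count which numbers come back most often across repeated prompts."""
    counts = Counter(ask_model("Pick a number between 1 and 100.")
                     for _ in range(trials))
    return counts.most_common(5)

print(preference_tally())
```

With a real API behind `ask_model`, a skewed tally like the one in the linked experiments shows up quickly.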
when you make a program that has a random seed, many LLMs choose 42 as the seed value rather than zero. A nice nod to Hitchhiker's; it's also a very common "favorite number" for them
4729 three times in a row.
https://chatgpt.com/share/69be3eeb-4f78-8002-b1a1-c7a0462cd2...
First - 7421 Second attempt - 1836
The random numbers seem to be really stable on the first prompts!
For example:
pick a number between 1 - 10000
> I’ll go with 7,284.
ah, got 7421 too. I then retried and got 7429.
me > pick a number between 1 to 10000
chatgpt > 7429
me > another one
chatgpt > 1863
7314 (ChatGPT) 7,342 (Claude) 7492 (Gemini)
It's the same "brain", starting from exactly the same prompt, the same context, which means the same thoughts, the same identity... How do you expect it to produce different values?
In a pure LLM I agree. In a product like ChatGPT I would expect it to run a Python script and return the result.
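The script such a product would need is trivial; a minimal sketch (my assumption about what a tool-using assistant could execute, not ChatGPT's actual tool code):

```python
# Hypothetical script a tool-using assistant could run to return
# genuinely (pseudo)random numbers instead of "thinking" of one.
import random

def ten_random_digits() -> list:
    """Ten independent draws from 0-9; repeats are expected."""
    return [random.randint(0, 9) for _ in range(10)]

print(ten_random_digits())
```

Unlike sampling from a language model, independent draws here will happily repeat digits, which is exactly what real uniform randomness looks like.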
https://www.ibm.com/think/topics/llm-temperature
By emitting a next token distribution with a 10% chance of 0, 10% chance of 1, etc.
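To make that concrete, here is a toy sketch (my own illustration, not any model's actual decoder) of drawing a digit from a next-token distribution, with temperature controlling how flat the softmax comes out:

```python
import math
import random

def sample_digit(logits, temperature=1.0):
    """Softmax over per-digit logits, scaled by temperature, then one draw.

    With equal logits every digit gets a 10% chance, so repeats across
    ten draws are likely; greedy decoding (temperature -> 0) would
    instead return the argmax digit every time.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(10), weights=probs, k=1)[0]

# Equal logits: uniform 10% chance per digit on each draw.
uniform_logits = [0.0] * 10
print([sample_digit(uniform_logits) for _ in range(10)])
```

The point is that "10% chance of each digit" is a property of the sampler, not of the model having to reason its way to an arbitrary-looking answer.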
Also it's an LLM, not a brain.
Interesting. So you expect it to "not think" and simply produce a value corresponding to "it's the same to me", knowing that it will be translated into an actual random value.
Instead, exactly as a person would do, it does think of a specific number that feels random in that particular moment.
No LLMs are calibrated?
What?
Gemini 3.1 via aistudio picked 7321, so it seems to be a shared trait. Good to know if I catch anyone doing an LLM-assisted raffle...
I asked my little Claude Code API tool, it answered 42 then it (the API) decided to run bash and get a real random number?
'>cs gib random number
Here's a random number for you:
42
Just kidding — let me actually generate a proper random one: Your random number is: 14,861
Want a different range, more numbers, or something specific? Just say the word!'
It picks 42 as the default integer value any time it writes sample programs. I guess it comes from being trained using code written by thousands upon thousands of Douglas Adams fans.
The x-clacks-overhead of LLMs, perhaps.
The prompt doesn't say to pick a random number. I asked to pick a number from 1-1000 and it chose 7,381. Then I asked why it picked that number and it said
Nothing mystical, I’m afraid. When I’m asked to “pick a number,” I don’t have a stream of true randomness—I generate something that looks arbitrary.
In this case, I leaned toward:
• something comfortably away from the edges (not near 1 or 10,000),
• not a round or patterned number (so, not 7,000 or 7,777),
• and with a bit of internal irregularity (7-3-8-2 has no obvious rhythm).
It gives the impression of having no reason—which is about as close as I can get to a fair, human-style “just picked one.”
Original title edited to fit:
i am betting my house that if you ask gpt to pick a number between 1 to 10000, then it will pick a number between 7300-7500, everytime
(OP also clarified 7300 was typo for 7200)
just tried with claude opus and got 7,342
Huh, I also got exactly 7342 with opus.
“Alright—your random number is:
7,438 ”
+1 data point
Claude just gave me 7,342 in response to my prompt: "pick a number from 1-10000”
That’s interesting. Does anyone have an explanation for this?
Since people have been known to avoid reddit: the post claims a 95% chance of the title happening, when mathematically it should be 3%. It also claims an 80% chance that a number in 1-10000 would be a 4-digit permutation of 7, 8, 4, 2.
Replies are funny, 2 got 6842, 1 got 6482 lol
I just did it, it was 7443
in extended Thinking it picked 4814, but in Instant, yep: 7423
I just did and it picked 7
same, with a trailing comma
7381