The Monty Hall Problem
You are on a game show. Three doors. One car, two goats. You pick door one. The host opens door three — a goat. He asks: do you want to switch to door two?
Your instinct says it does not matter. Two doors left, so it is 50-50. Most people stay.
Most people are wrong. You should always switch. Switching wins two times out of three; staying wins only one time in three.
Why Your Instinct Is Wrong
When you first picked door one, you had a one in three chance of being right. That means there was a two in three chance the car was behind one of the other two doors. When the host opens door three to show a goat, that two in three probability does not vanish — it collapses entirely onto door two.
Your door still has its original one in three probability. Door two now carries two in three.
The reason this feels wrong is that your brain treats the host’s action as neutral, just a random reveal. It is not. The host knows where the car is. He will never open your door, and he will never open the door with the car. His choice carries information, and that information shifts the probabilities in a way your intuition does not track.
Scale it up: imagine 100 doors. You pick one (1% chance). The host opens 98, all goats. Do you switch to the one remaining door? Obviously yes — it carries 99% of the probability. The Monty Hall problem is the same situation, just small enough that the intuitive mistake feels reasonable.
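If you would rather check this than trust it, a short simulation settles the argument. Here is a minimal sketch in Python; the function name and trial count are mine, not part of the problem. It plays both the three-door and the hundred-door versions:

```python
import random

def play(n_doors: int, switch: bool) -> bool:
    """One round of Monty Hall. The host opens n_doors - 2 goat doors,
    leaving the player's pick and exactly one other door closed."""
    car = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    # Because the host never reveals the car, the one other closed door
    # holds the car whenever the first pick was wrong. So switching wins
    # exactly when the initial pick missed.
    return pick != car if switch else pick == car

trials = 100_000
for n in (3, 100):
    stay = sum(play(n, switch=False) for _ in range(trials)) / trials
    swap = sum(play(n, switch=True) for _ in range(trials)) / trials
    print(f"{n:>3} doors  stay ~ {stay:.3f}  switch ~ {swap:.3f}")
# Expected output: 3 doors gives roughly 0.333 vs 0.667;
# 100 doors gives roughly 0.010 vs 0.990.
```

Staying wins at the rate of your original guess, one in n. Switching wins whenever that guess was wrong.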
Where This Shows Up Beyond Game Shows
Medical testing works this way. If a disease affects one in a thousand people, and a test is 99% accurate (it gives the right answer 99% of the time, for the sick and the healthy alike), a positive test result still leaves you more likely to be healthy than sick. The prior probability — how rare the condition is — dominates the calculation. Ignoring it leads to the wrong conclusion even when you are reasoning correctly from the result you can see.
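To see the arithmetic, here is the Bayes' rule calculation for that example, assuming "99% accurate" means the test is right 99% of the time for both the sick and the healthy (an interpretation, since accuracy figures are often quoted loosely):

```python
# Bayes' rule for the disease example above.
prevalence = 0.001   # 1 in 1,000 people have the disease
sensitivity = 0.99   # P(positive | sick)
specificity = 0.99   # P(negative | healthy)

# Total probability of testing positive: true positives + false positives.
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Probability you are actually sick, given a positive result.
p_sick_given_positive = sensitivity * prevalence / p_positive

print(f"P(sick | positive) = {p_sick_given_positive:.3f}")  # about 0.090
```

A positive result means roughly a 9% chance of being sick. The 999 healthy people generate about ten false positives for every true positive the one sick person generates.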
Social media algorithms create Monty Hall situations constantly. When you see a colleague’s excellent results or a competitor’s success story, you are seeing the doors the platform chose to open. The algorithm surfaces what gets clicks — dramatic outcomes, confident claims, surprising numbers. The unremarkable majority stays closed. Your brain reads the visible doors as a representative sample. They are not.
Search results do the same thing. When you look up symptoms and find alarming diagnoses, you are reading the doors a ranking algorithm decided to show you. The mundane explanations are behind the closed ones.
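A toy sketch makes the distortion concrete. Every number here is illustrative, not drawn from any real platform: a "feed" that surfaces only the most dramatic outcomes reports a picture far above the true average.

```python
import random

random.seed(1)

# A toy population of 10,000 outcomes (projects, posts, results).
# Most are unremarkable; a few are dramatic. Purely illustrative numbers.
outcomes = [random.gauss(0, 1) for _ in range(10_000)]

# The "platform" opens only the 20 most dramatic doors.
visible = sorted(outcomes, reverse=True)[:20]

pop_mean = sum(outcomes) / len(outcomes)
vis_mean = sum(visible) / len(visible)
print(f"population mean ~ {pop_mean:.2f}, visible mean ~ {vis_mean:.2f}")
# The visible sample sits several standard deviations above the population.
```

The doors you see were chosen precisely because they are unrepresentative. That is the selection step your intuition skips.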
The Skill: Ask What You Cannot See
The Monty Hall problem trains a specific question: what information shaped how I got to these options?
When an AI gives you a recommendation, you are looking at the doors it opened. The model was trained on a specific dataset. That dataset had gaps, biases, and over-represented categories. The recommendation reflects what the training data surfaced — not what the full space of possibilities contains.
This is not a reason to reject AI recommendations. It is a reason to ask what the host knew before deciding which doors to open. What data was the model trained on? What categories are over-represented? What would need to be true about the training data for this recommendation to be right?
An AI recommendation is a Monty Hall problem. The algorithm has information you do not. The door it is pointing at carries real probability weight — but that weight comes from choices made before you sat down at the table. Understanding those choices is how you decide whether to switch.
The bridge: The Monty Hall problem shows that the history of eliminated options carries probability information we naturally ignore. Smullyan’s puzzles, in the next section, take this further: they show how to work backwards from a contradiction to find which assumption produced it — a skill that maps directly onto diagnosing broken AI reasoning.