Breaking Enigma
Module 4 · Section 5 of 6
In 1939, Germany’s military communications were encrypted by the Enigma machine, a device that scrambled each message through a series of rotating wheels before transmission. With nearly 159 quintillion possible daily settings, brute force was not an option. Testing every combination would take longer than the war.
Alan Turing and the team at Bletchley Park did not try every combination. They worked from what they already knew.
Cribs: Known Facts as Constraints
German radio operators had habits. Weather reports followed predictable formats. Routine transmissions often began with standard phrases. “Nothing to report” appeared repeatedly. These known pieces of plaintext — called cribs — gave the codebreakers a foothold.
A crib is a piece of text you expect to find somewhere in the encrypted message. If you know a message contains the phrase “weather report,” you can try to match that phrase against different positions in the ciphertext. Each position the phrase does not fit eliminates a set of possible machine settings. Each constraint narrows the space.
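This matching step, known as crib dragging, can be sketched in code. The sketch below is a toy illustration, not a historical reconstruction, and it leans on one structural fact about Enigma: a letter could never encrypt to itself, so any alignment where a crib letter lines up with an identical ciphertext letter is impossible. The intercept string is invented for illustration; "WETTERBERICHT" is German for "weather report."

```python
# Crib dragging: slide a known plaintext fragment along the ciphertext.
# Enigma's design meant a letter never encrypted to itself, so any
# alignment where a crib letter coincides with the same ciphertext
# letter can be eliminated outright.

def possible_positions(ciphertext: str, crib: str) -> list[int]:
    """Offsets at which the crib could align with the ciphertext."""
    positions = []
    for offset in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[offset:offset + len(crib)]
        # The alignment survives only if no letter lines up with itself.
        if all(c != p for c, p in zip(window, crib)):
            positions.append(offset)
    return positions

# Invented intercept, for illustration only.
print(possible_positions("QFZWRWIVTYRESXBFOGKUHQBAISE", "WETTERBERICHT"))
```

Of the fifteen possible alignments here, nine are ruled out before any machine settings are tested. Each surviving offset becomes a hypothesis to attack further.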
Turing’s Bombe — the electromechanical machine built to automate this process — did not search randomly. It used logical deduction. If this letter could be that letter, then certain other letters must be those letters, and if that is true, then this entire class of settings is impossible. Contradiction after contradiction, each one eliminating possibilities, until only a small number of settings remained to check by hand.
The Bombe was fast because it exploited what was already known. The crib was the lever. Logic did the rest.
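The same elimination logic can be shown at toy scale. A Caesar shift stands in for Enigma here purely to make the mechanics visible; real Enigma settings numbered in the quintillions, and the Bombe's deductions were far subtler than a single comparison. Assume the crib sits at a known offset, then discard every key the crib contradicts.

```python
# Elimination at toy scale: a Caesar shift instead of Enigma's rotors.
# Assume the crib's position is known, then keep only the keys that
# are not contradicted by it.

def caesar_encrypt(plaintext: str, shift: int) -> str:
    """Shift each letter of an uppercase A-Z string."""
    return "".join(chr((ord(c) - 65 + shift) % 26 + 65) for c in plaintext)

def surviving_keys(ciphertext: str, crib: str, offset: int) -> list[int]:
    """Keys not ruled out by the crib at the given offset."""
    window = ciphertext[offset:offset + len(crib)]
    # A key survives only if encrypting the crib with it reproduces the
    # ciphertext window; every mismatch is a contradiction.
    return [k for k in range(26) if caesar_encrypt(crib, k) == window]

intercept = caesar_encrypt("NOTHINGTOREPORT", 7)  # invented message
print(surviving_keys(intercept, "REPORT", 9))
```

Twenty-five of the twenty-six keys are contradicted; only the true key survives. The Bombe did the same kind of work against vastly more settings, chaining deductions together rather than making one comparison.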
The Human Network
Turing was not working alone. Bletchley Park employed thousands of people — mathematicians, linguists, crossword champions, and recent graduates working in shifts around the clock. Fragments decoded by one team became cribs for another. A weather report decoded at 6am might constrain the settings for a supply manifest intercepted at noon.
The collaborative structure was not incidental. No single analyst could hold all the fragments in mind at once. The organisation was designed so that partial knowledge could be shared and combined systematically. Individual blind spots were covered by different perspectives.
That is not just a wartime lesson about teamwork. It is a lesson about how to handle any system where the information you need is distributed across multiple sources.
Applying This to AI Output Review
When an AI gives you an output you cannot fully verify, the Enigma method gives you a concrete procedure.
Start with your cribs — the things you already know to be true. These might be facts from primary sources, constraints from domain knowledge, or specific details you have verified independently. Apply them to the AI output: does this output contradict anything I know? Does this claim require something I know to be false?
Each contradiction eliminates a class of explanations for what the AI was doing. Each consistent crib narrows the space of possible errors. You are not checking everything — you are using known facts as constraints to make the checking tractable.
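The procedure can be made concrete as a checklist in code. Everything below is illustrative: the cribs are facts you have verified yourself, paired with checks, and the crude substring tests stand in for whatever verification you can actually automate in your own setting.

```python
# Crib-style review: each crib pairs a known fact with a check against
# the output. A failed check does not locate the error precisely, but
# it proves the output contradicts something you know. All names and
# checks here are illustrative.

def review(output: str, cribs) -> list[str]:
    """Return the known facts the output contradicts."""
    return [fact for fact, consistent_with in cribs
            if not consistent_with(output)]

# Facts verified independently, with substring checks standing in for
# real verification.
cribs = [
    ("Bletchley Park is in England",
     lambda text: "Bletchley Park in France" not in text),
    ("The Bombe eliminated settings; it did not confirm them",
     lambda text: "the Bombe proved each setting correct" not in text),
]

summary = ("Turing's team at Bletchley Park in France built the Bombe, "
           "and the Bombe proved each setting correct.")
print(review(summary, cribs))
```

Two contradictions surface immediately, without reading the summary line by line. That is the tractability the crib buys you: a short list of checks instead of an open-ended verification task.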
The Bombe did not verify that a setting was correct. It eliminated settings that were certainly wrong. You can do the same with AI outputs: you may not be able to confirm a claim is right, but you can often confirm specific things are wrong, and eliminating wrong explanations is progress.
Where the Enigma parallel is most useful is in scope. The codebreakers were not trying to read every German message — they were trying to find one piece of known text in a stream of noise. AI output review works the same way. You do not need to verify every claim in a 2,000-word summary. You need to find two or three known facts and check whether the output is consistent with them. If it is not, you know something is broken even if you cannot say exactly what.
The Mindset
The Enigma story carries three lessons that transfer directly.
Complex problems often have elegant solutions — but only if you look from the right angle. Turing did not beat Enigma through more effort. He beat it by finding the structural weakness: the fact that a letter could never encrypt to itself. One constraint, rigorously applied, changed everything.
Collaboration eliminates blind spots. No individual reviewer catches everything. A second perspective applied to the same AI output often finds the thing the first missed — not because the second reviewer is smarter, but because they are checking different cribs.
Persistence is not stubbornness. The codebreakers failed constantly. Messages they could not crack, machines that broke, settings that changed overnight. They adapted their methods rather than repeating failed approaches. When an AI output resists quick verification, changing the angle is more productive than applying more force to the same approach.
The bridge: Cribs, constraints, and elimination — these are the tools of structured logical reasoning under uncertainty. The next section gives you a hands-on exercise to build the Boolean logic skills that underpin all of it.