The Five Whys Technique
Module 6 · Section 3 of 6
In the 1950s, Toyota engineers developed a diagnostic method for production line failures. When something went wrong, they did not just fix the symptom and move on. They asked why the failure occurred. Then they asked why that was the case. Then why again — until they had asked five times and reached the actual root cause rather than the surface-level problem.
A machine stops on the assembly line. Why? The bearing failed. Why? It was not lubricated. Why? There was no lubrication schedule. Why? Nobody assigned responsibility for it. Why? The onboarding process does not cover equipment maintenance tasks.
The fix is not to oil the bearing. The fix is to add equipment maintenance to the onboarding process. Everything else is a band-aid.
The technique in practice
Five Whys works because it forces you past the obvious answer. Most problems feel self-explanatory at first — the fix seems clear, the cause seems obvious. The technique slows you down and requires you to justify each step. That discipline is what surfaces the non-obvious causes that keep repeating.
The method is flexible. You do not always need exactly five iterations. Sometimes three questions get you to the root; sometimes you need seven. Five is a rough guide to persistence, not a hard rule.
It helps to write each question and answer down as you go. Seeing the chain on paper makes it easier to spot where you have made an assumption rather than identified a real cause.
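Writing the chain down can be as simple as a list of question-and-answer pairs. As a minimal sketch (the function name and structure here are illustrative, not part of the technique itself), the bearing example above might be recorded like this:

```python
# Record a Five Whys chain as (question, answer) pairs so the whole
# chain can be reviewed for hidden assumptions. Content is taken from
# the bearing example; the helper is a hypothetical convenience.

def root_cause(chain):
    """Return the answer to the final 'why' — the candidate root cause."""
    if not chain:
        raise ValueError("empty chain")
    return chain[-1][1]

bearing_chain = [
    ("Why did the machine stop?", "The bearing failed."),
    ("Why did the bearing fail?", "It was not lubricated."),
    ("Why was it not lubricated?", "There was no lubrication schedule."),
    ("Why was there no schedule?", "Nobody was assigned responsibility."),
    ("Why was nobody assigned?", "Onboarding does not cover maintenance tasks."),
]

print(root_cause(bearing_chain))
```

Seeing the pairs side by side makes it easier to ask, for each answer: is this something I verified, or something I assumed?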
Applying this to AI
This is the technique that maps most directly to AI failure diagnosis. The next time an AI gives you wrong, incomplete, or unhelpful output, try walking the chain.
Here is an example. A professional asks an AI assistant to summarise a meeting transcript and flag the key action items. The output misses three of the five action items.
- Problem: AI missed action items in the summary.
- Why? It summarised the discussion but did not specifically look for action items.
- Why? The prompt asked for a summary, not a structured extraction of commitments.
- Why? I assumed “summarise and flag action items” was clear enough to trigger that behaviour.
- Why? I did not give the AI a definition of what counts as an action item.
- Why? I treated the AI like a colleague who knows what I mean, rather than a system that needs explicit instruction.
The fix is not to try the summary again. It is to provide a definition: “An action item is a specific commitment made by a named person to complete a named task by a specific date.” That single addition changes the output.
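The improved prompt can be assembled programmatically. This is a hedged sketch: the function name, wording, and layout are assumptions for illustration, not the API of any particular AI product — the point is that the definition travels with every request rather than living in your head.

```python
# Build a meeting-summary prompt that carries an explicit definition of
# "action item", so extraction does not depend on the model guessing.
# All names and wording here are illustrative.

ACTION_ITEM_DEFINITION = (
    "An action item is a specific commitment made by a named person "
    "to complete a named task by a specific date."
)

def build_extraction_prompt(transcript: str) -> str:
    """Combine the summary request with the definition and the transcript."""
    return (
        "Summarise the meeting transcript below, then list every action item.\n"
        f"Definition: {ACTION_ITEM_DEFINITION}\n\n"
        f"Transcript:\n{transcript}"
    )

prompt = build_extraction_prompt("Alice: I'll send the draft to Bob by Friday.")
```

Whatever assistant you use, the pattern is the same: the definition becomes part of the instruction, not an unstated expectation.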
The Five Whys prevents the most common AI debugging mistake: blaming the model for a failure that originated in the prompt. It is not always the model. Often the root cause is an assumption you made about what the model would understand without being told.