Chapter 1: The AI Implementation Crisis
The Capability Trap: When Better Tools Create Worse Results
AI tools have never been more powerful or more accessible — yet most people are getting worse results with them, not better.
S&P Global reports that 42% of companies abandoned most AI initiatives in 2025, up from 17% the year before. The tools keep getting better. The results keep getting worse.
But here’s what’s interesting: a small group is achieving $10.30 in return for every dollar invested — with the same technology that’s failing everyone else. IDC research shows top performers hit that $10.30 mark compared to the average of $3.70. And they’re not doing it with fancier tools — they’re doing it with better methodology.
The difference isn’t the AI. It’s how you use it.
The Scattered Experiment Problem
Monday you’re using ChatGPT for blog content. Tuesday you try Canva AI for social graphics. Wednesday someone recommends Jasper, so you sign up for a free trial. Thursday you watch a YouTube video about an AI scheduling assistant and add that too. Friday you’re back in ChatGPT, starting a new chat because you can’t find Monday’s.
You’re one person wearing every hat, and each hat now has its own AI tool — none of them talking to each other, none of them building on what the others produce.
The result? Money leaking out across subscriptions you barely use, context scattered across tools that can’t share information, and no clear sense of whether any of it is actually saving you time or making you money.
The Undefined Success Problem
Most AI projects fail before implementation begins — not because of technology, but because nobody defined what success looks like.
A course creator buys an AI writing tool because everyone says it’ll speed up content production. Six months and several hundred dollars later, someone asks: “Did it work?”
The honest answer: “I don’t know, because I never defined what ‘working’ meant.”
Projects without clear metrics can’t fail. There’s no definition of failure. So they drift indefinitely, consuming budget while delivering “lessons learned” instead of results.
Veljko Krunic puts it directly in Succeeding with AI: “If you can’t quantify the business result you’re hoping to achieve, you have to ask yourself and your stakeholders whether the project is worth doing.”
The Metric Mandate: Before any AI project moves past the idea stage, it must answer five questions:
- What number will change?
- What is that number today?
- What improvement justifies the cost?
- When will you measure?
- What result means you stop?
Chapter 2 covers these in detail. For now: if you can’t answer all five, the project is still a wish, not a plan.
Without clear metrics, you end up keeping tools out of inertia rather than conviction — unable to tell what’s helping from what’s just sitting there.
Why Smart People Make Bad AI Decisions
Most people treat AI like any other software purchase: sign up, watch a tutorial, hope it sticks. But AI is different. The tools evolve monthly, and the value comes from how you change your work, not from the tool itself.
You read about a tool that “uses RAG pipelines to ground LLM outputs in your proprietary data.” What does that actually mean for your Tuesday afternoon? If you can’t translate a tool’s description into a specific outcome for your business, you’ll buy it for the wrong reasons and drop it when it doesn’t magically solve an undefined problem.
The fix: Before you sign up for anything, finish this sentence: “This will help me [specific outcome] by [specific mechanism].” If you can’t finish it without using the vendor’s marketing language, you don’t understand what you’re buying yet.
What the Data Actually Tells Us
Three patterns show up again and again:
- Simple beats custom. 67% of off-the-shelf AI setups succeed vs. 33% of custom builds. People overestimate how unique their needs are.
- Fewer beats more. AI leaders pursue half as many opportunities but scale twice as many successfully. Going deep on one thing beats going shallow on ten.
- Unsanctioned beats official. 50% of employees use AI tools their employers didn’t approve, and they report higher satisfaction than people using the official enterprise tools. Organisations often make tools worse by adding complexity.
The Hidden Costs of Uncoordinated AI
Subscription bleed: Three tools for content, two for research, one for scheduling — none of them earning their keep individually.
Context loss: Each tool starts from scratch. Work done in one doesn’t inform another, so you repeat yourself constantly.
No compounding: Scattered experiments don’t build on each other. Each month you’re starting over instead of going deeper.
Security exposure: Client data going into free-tier tools you haven’t properly evaluated. You ARE the IT department, which means this falls on you.
Tool fatigue: Constantly switching between tools and re-learning interfaces exhausts the attention you need for actual work.
You’re Still Responsible
In 2024, Air Canada’s chatbot gave a customer wrong information about bereavement fares. The company argued they weren’t liable for what their AI said. The court disagreed.
The principle is simple: AI tools are yours to check, yours to verify, and yours to take responsibility for. “The AI wrote it” is not a defence your clients will accept.
What Works Instead
BCG found the same pattern: the businesses getting real value from AI pursue fewer opportunities and scale more of them successfully. They’re not using better tools. They’re using a better approach:
- Go deep on fewer things. One use case, done properly, before the next.
- Change how you work, not just what tools you use. Redesign the workflow, don’t just bolt AI onto your existing process.
- Use off-the-shelf tools first. Vendor solutions succeed at twice the rate of custom builds. Every layer of customisation reduces your odds.
- Put most of your effort into learning, not buying. Technology is the smallest part of making AI work.
The frameworks in this playbook help you avoid scattered experiments and build AI use that compounds over time.