Chapter 5: Keeping AI Honest
Why Checking Your Work Matters More Than the Technology
Every business using AI eventually hits the same wall: the tools are working, the results look promising, and then something goes wrong that nobody planned for. A client gets bad information. An email goes out with fabricated details — or worse, a proposal quotes numbers the AI invented.
Checking your work is how you catch these problems before they reach your clients.
The traditional approach: Write a 40-page AI policy before touching any tools. The practical approach: Build simple guardrails that grow as you learn what works.
Most review efforts fail because they try to solve tomorrow’s problems with today’s limited understanding. Start simple and add rules as you go — every mistake teaches you something worth writing down.
If you’re using AI without thinking about what could go wrong, you’re gambling with your reputation.
The Air Canada case made this clear: businesses are fully responsible for AI-generated decisions. You can’t outsource liability to an algorithm — and “the AI wrote it” is not a defence your clients will accept.
Managing Risk Without Paralysis
Checking your work isn’t about preventing all risks — it’s about knowing which risks matter and handling those well.
Level 1: The Essentials
These are non-negotiable. Get these right from day one.
Data Privacy: Don’t paste client data, contracts, or confidential information into public AI tools like ChatGPT unless you’re on a plan that guarantees your data isn’t used for training. If in doubt, anonymise first.
Check Before Sending: AI will confidently present wrong information as fact. Every piece of AI-generated content that reaches a client, customer, or the public needs your eyes on it first. You’re the expert — the AI is the assistant.
Take Responsibility: If something AI-generated goes wrong, it’s your problem. Not the AI’s, not the vendor’s. Own everything that goes out under your name or your business’s name.
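The "anonymise first" advice above can be partly scripted. Here is a minimal sketch that scrubs a few obvious identifier types before text goes anywhere near a public AI tool; the regex patterns and placeholder labels are illustrative, not exhaustive, and no script catches everything (names and project details still need a human pass):

```python
import re

# Illustrative patterns only -- real client data needs a fuller list
# (names, addresses, account or reference numbers, and so on).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[PHONE]": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "[IBAN]": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def anonymise(text: str) -> str:
    """Swap obvious identifiers for placeholders before sharing text."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

draft = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
print(anonymise(draft))  # -> Contact Jane at [EMAIL] or [PHONE].
```

Treat this as a first filter, not a guarantee: if in doubt, rewrite the passage by hand before pasting.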
Level 2: The Smart Habits
These protect you longer-term.
Dependency Awareness: If a tool disappeared tomorrow, could you still do the work? Don’t build your entire operation around a single AI service.
Keep Your Skills Sharp: AI handles the routine work so you can focus on judgment, creativity, and relationships — the things clients actually pay for. Don’t let those muscles atrophy.
Protect Your Voice: Generic AI output sounds like everyone else’s generic AI output. Your clients chose you for a reason. Make sure AI-assisted work still sounds like you.
For regulated industries: If you work in finance, healthcare, legal, or another regulated sector, you’ll need more than these two levels. The EU AI Act (effective 2025-2026) requires transparency, human oversight, and audit trails for high-risk AI uses. Penalties run up to EUR 35 million or 7% of global annual revenue. If this applies to you, read the full requirements and get proper legal advice. The frameworks in this chapter are your foundation, but regulated industries need specialist help on top.
Being Straight About AI Use
Ethics guidelines only matter when they change what you actually do. For a solo operator or small business, the essentials are straightforward:
Be Transparent: If a client asks whether you used AI, tell the truth. For client-facing content, consider adding “AI-assisted” where appropriate. The stigma around AI use is fading, but dishonesty about it is not.
Check for Accuracy: AI hallucinates — it confidently states things that are not true. Every factual claim, statistic, and specific detail in AI-generated content needs verification. This is especially critical for anything that could affect a client’s decisions or reputation.
Take Responsibility: You are the last line of defence. If AI-generated content goes out with your name on it, you own the result — good or bad. Build checking into your workflow, not as an afterthought.
Watch for Bias: AI can produce unfair results in ways you wouldn’t expect. If you’re using AI for anything that affects people — hiring help, sorting customers, targeting content — check the outputs across different groups. Patterns you didn’t intend can emerge.
Content Classification: Two Levels That Actually Work
Not all AI output needs the same level of review. Here’s a simple approach:
Level 1: For Your Eyes Only
- AI draft is fine with minimal review
- Speed matters more than polish
- Examples: meeting notes, personal research summaries, brainstorming, internal to-do lists, first drafts you’ll rewrite anyway
Level 2: Client-Facing
- AI draft plus your expertise and review before sending
- Must demonstrate your knowledge, not generic templates
- Examples: client proposals, published articles, customer emails, social media posts, anything with your name or business name attached
The Quick Check for Level 2 Content:
- Could any business in my industry send this exact text? (If yes, rewrite with your specifics)
- Does this demonstrate my actual expertise or just template knowledge?
- Would a knowledgeable reader suspect this is unedited AI output?
- Have I verified every factual claim?
AI Slop Red Flags:
- Generic superlatives (“impressive,” “industry-leading,” “innovative solutions”)
- Flattery sandwich pattern (compliment-pitch-compliment)
- Perfect grammar with zero original insights
- Could apply to any business with find-and-replace on the company name
- Lacks specific data, examples, or contextual knowledge
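The red-flag list above can double as an automated pre-send check. A minimal sketch follows; the banned-phrase list is illustrative, and in practice you would maintain your own (Chapter 3's banned list is the natural source):

```python
# Illustrative banned-phrase list -- extend it with your own AI-slop markers.
RED_FLAGS = [
    "industry-leading",
    "innovative solutions",
    "in today's fast-paced digital landscape",
    "i understand your frustration",
]

def slop_check(draft: str) -> list[str]:
    """Return any red-flag phrases found in a draft (case-insensitive)."""
    lowered = draft.lower()
    return [phrase for phrase in RED_FLAGS if phrase in lowered]

hits = slop_check("We deliver innovative solutions for clients.")
print(hits)  # -> ['innovative solutions']
```

A clean result doesn't mean the draft is good; it only means it cleared the phrases you already know to avoid. The judgment questions in the quick check still apply.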
Choosing Your AI Tools
For most solopreneurs and small businesses, the tool choice is simpler than enterprise guides make it sound. Here are the questions that actually matter:
Does it do what I need? Test it on your actual work, not demo scenarios. A tool that’s brilliant at generating marketing copy is useless if your bottleneck is invoice processing.
Can I afford it? Factor in your time learning and configuring it. A EUR 20/month tool that saves you 5 hours is a bargain. A EUR 200/month tool that saves you 6 hours might not be.
Is my data safe? Read the privacy policy (or at least the summary). Does the vendor use your data to train their models? Where is your data stored? Can you delete it if you leave?
Can I leave easily? Check the exit terms before you start. Can you export your data? What happens to your workflows if you cancel? Avoid tools that lock you in with proprietary formats.
Is this the simplest option that works? Remember the Takers principle from Chapter 1: off-the-shelf tools succeed twice as often as custom builds. Don’t over-engineer your setup. A simple prompt template might do the same job as a complex Zapier/Make.com automation — with none of the maintenance burden.
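The affordability question above reduces to simple arithmetic: divide the monthly fee by the hours the tool gives back, then compare the result to your billable rate. A sketch using the two figures from the text:

```python
def cost_per_hour_saved(monthly_fee: float, hours_saved: float) -> float:
    """Effective price of each hour an AI tool gives back to you per month."""
    return monthly_fee / hours_saved

# The two tools from the text:
print(cost_per_hour_saved(20, 5))   # -> 4.0 (EUR per hour saved)
print(cost_per_hour_saved(200, 6))  # -> 33.33... (EUR per hour saved)
```

If your billable rate comfortably exceeds that effective price, the tool pays for itself; remember to amortise the one-off learning and configuration time across the first few months as well.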
Keeping AI Honest: A Worked Example
Meet Priya, a freelance marketing consultant who uses AI for client proposal drafts, social media scheduling, and market research summaries.
The challenge: She needs her AI outputs to be accurate, sound like her (not like every other marketing consultant), and never misrepresent her capabilities to clients.
Her Approach:
The Essentials (Level 1):
- Client briefs only go into ChatGPT with model training switched off — a Team or Enterprise plan, or a Plus account with the “improve the model” data setting disabled — never the free tier on default settings
- Every proposal gets a fact-check pass before sending, especially any statistics or market claims
- She runs a monthly “mystery shopper” test — sending herself a sample AI-generated email to check if it still sounds like her
The Smart Habits (Level 2):
- She keeps her own templates and frameworks as the backbone, using AI to speed up drafting rather than replacing her thinking
- Her Constraints Document includes: “Never claim ROI figures without citing the source study” and “Never use the phrase ‘in today’s fast-paced digital landscape’”
- Every quarter, she reviews her AI configurations against her current services and client types
Her Content Classification:
- For her eyes only: research summaries, competitor analysis drafts, brainstorming sessions
- Client-facing: proposals, strategy decks, campaign reports — all get her expert review and specific client context added
The result: Her clients consistently comment that her proposals feel personal and well-researched. She’s faster than she was without AI, but the quality and voice remain distinctly hers.
Key Principle: Your review process should grow alongside your AI use. Build guardrails that are clear enough to use now but flexible enough to improve as you learn — don’t try to anticipate every scenario upfront.
The Constraints Document: Defining What AI Shouldn’t Do
Most AI setup focuses on instructions — what you want the tool to do. But the real quality gap lives in what you don’t want it to do.
A Constraints Document sets boundaries that stay the same across every interaction with an AI tool. Instructions change per task. Constraints don’t. Over time, the constraints become more valuable than the instructions themselves.
Building Your Constraints Document:
Start with three questions about each AI tool you use:
- What do you want it to do? (This is standard prompting — most people stop here.)
- What don’t you want it to do? (This is the constraints document.)
- What can it actually do? (This is testing its limits — ask the tool itself.)
Example Constraints for a Freelancer’s Writing Assistant:
- Never invent statistics, case studies, or client names
- Never claim expertise I don’t have or certifications I haven’t earned
- Never use words from my banned list (see Chapter 3’s AI slop discussion)
- Always flag when a claim needs verification rather than presenting it as fact
- Never draft client communications without including specific details from our actual project
Example Constraints for a Customer Service AI:
- Don’t make promises about timelines without checking the project schedule
- Don’t offer discounts or credits without human approval
- Don’t reference competitor products, even favourably
- Don’t use generic acknowledgment language (“I understand your frustration”)
- Don’t escalate to a manager unless the customer explicitly requests it
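Because constraints stay the same across every interaction, they fit naturally as a fixed prefix prepended to every prompt you send. A minimal sketch of that pattern; the file name and wording here are hypothetical placeholders for your own setup:

```python
from pathlib import Path

# Hypothetical file location -- point this at your own constraints document.
CONSTRAINTS_FILE = Path("constraints.md")

def build_prompt(task_instructions: str) -> str:
    """Prepend the fixed constraints document to a per-task instruction.

    Instructions change per task; constraints don't -- so they live in
    one file and travel with every request.
    """
    constraints = CONSTRAINTS_FILE.read_text(encoding="utf-8")
    return (
        "Follow these standing constraints in every response:\n"
        f"{constraints}\n\n"
        f"Task:\n{task_instructions}"
    )
```

The same idea applies whether you paste prompts by hand, use a tool's custom-instructions or system-prompt field, or call an API: keep the constraints in one place so updating a rule updates every future interaction.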
Testing the Limits — Questions to Ask Your AI Tools:
- “What are you most likely to get wrong in this context?”
- “What should I always double-check when working with you?”
- “Given what you know about my setup, what am I trusting you to do that I probably shouldn’t be?”
- “What would you need from me to do this task better?”
The answers won’t always be accurate — AI tools can be overconfident about their own capabilities — but they surface useful starting points for thinking about boundaries.
Key Principle: Add to the constraints document every time something goes wrong. Each failure becomes a rule that stops it happening again. Over time, the constraints document becomes a living record of what you’ve learned about working with AI in your specific context.
Silent Drift: The Quality Problem Nobody Talks About
AI setups don’t fail loudly — they fail silently.
Silent drift happens when your AI setup quietly keeps referencing things that have moved, changed, or disappeared — with no error messages. The output still looks plausible. The system still runs. But the results slowly become less relevant, less accurate, and less useful.
Why it happens: AI works from context given at setup time. That context doesn’t auto-update. When your services change, your pricing shifts, your client base evolves, or your tools get updated, the AI keeps working from the old reality.
That ChatGPT system prompt you wrote six months ago? Your business has changed since then. Has your AI setup?
The Diagnostic Questions (Run These Monthly):
- When did you last review your AI tool configurations and system prompts?
- What’s changed in your business since those configurations were set?
- If something was quietly producing outdated information, would you notice?
- What are you trusting the AI to do that you haven’t verified recently?
Building a Drift Audit Into Your 90-Day Cycles:
At the end of each 90-day cycle (Chapter 4), add a drift check:
- Review all AI tool configurations against current processes and services
- Verify that referenced documents, templates, and data sources still exist and are current
- Test a sample of AI outputs against manual verification
- Check whether the metrics you’re tracking still measure what matters
- Document any changes and update configurations accordingly
Remember: This maintenance takes an hour or two per quarter. Undetected drift costs far more — in client trust, in wasted effort, in decisions made on stale information.