The Constraints Document: Defining What AI Shouldn't Do
Module 5 · Section 8 of 10
Most AI setup focuses on instructions — what you want the tool to do. But the real quality gap lives in what you don’t want it to do.
A Constraints Document sets boundaries that stay the same across every interaction with an AI tool. Instructions change per task. Constraints don’t. Over time, the constraints become more valuable than the instructions themselves.
Building Your Constraints Document:
Start with three questions about each AI tool you use:
- What do you want it to do? (This is standard prompting — most people stop here.)
- What don’t you want it to do? (This is the constraints document.)
- What can it actually do? (This is testing its limits — ask the tool itself.)
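The split between per-task instructions and standing constraints can be made concrete in code. A minimal sketch in Python — the function name, file layout, and prompt wording are illustrative assumptions, not anything the chapter prescribes:

```python
def build_system_prompt(task_instruction: str, constraints: list[str]) -> str:
    """Prepend standing constraints to a per-task instruction.

    Instructions change per task; constraints are sent unchanged
    with every request, which is what makes them a document rather
    than a prompt.
    """
    rules = "\n".join(f"- {c}" for c in constraints)
    return (
        "Follow these standing constraints in every response:\n"
        f"{rules}\n\n"
        f"Task: {task_instruction}"
    )

# Hypothetical constraints drawn from the examples in this chapter.
constraints = [
    "Never invent statistics, case studies, or client names",
    "Always flag claims that need verification",
]
prompt = build_system_prompt("Draft a project update email.", constraints)
```

The point of the structure is that `constraints` lives in one place and gets reused, while only `task_instruction` varies.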
Example Constraints for a Freelancer’s Writing Assistant:
- Never invent statistics, case studies, or client names
- Never claim expertise I don’t have or certifications I haven’t earned
- Never use words from my banned list (see Chapter 3’s AI slop discussion)
- Always flag when a claim needs verification rather than presenting it as fact
- Never draft client communications without including specific details from our actual project
Example Constraints for a Customer Service AI:
- Don’t make promises about timelines without checking the project schedule
- Don’t offer discounts or credits without human approval
- Don’t reference competitor products, even favourably
- Don’t use generic acknowledgment language (“I understand your frustration”)
- Don’t escalate to a manager unless the customer explicitly requests it
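Some of these rules — like the ban on generic acknowledgment language — can be checked mechanically before a reply goes out, rather than trusted to the model. A hedged sketch; the phrase list is illustrative:

```python
# Phrases drawn from a hypothetical banned list; extend per your own document.
BANNED_PHRASES = [
    "i understand your frustration",      # generic acknowledgment language
    "we apologize for any inconvenience",
]

def constraint_violations(draft: str) -> list[str]:
    """Return any banned phrases found in a draft reply (case-insensitive)."""
    lower = draft.lower()
    return [p for p in BANNED_PHRASES if p in lower]

violations = constraint_violations(
    "I understand your frustration, and I'll look into this today."
)
# violations == ["i understand your frustration"]
```

A check like this only covers rules that are string-matchable; constraints about promises, discounts, or escalation still need human review.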
Testing the Limits — Questions to Ask Your AI Tools:
- “What are you most likely to get wrong in this context?”
- “What should I always double-check when working with you?”
- “Given what you know about my setup, what am I trusting you to do that I probably shouldn’t be?”
- “What would you need from me to do this task better?”
The answers won’t always be accurate — AI tools can be overconfident about their own capabilities — but they surface useful starting points for thinking about boundaries.
Key Principle: Add to the constraints document every time something goes wrong. Each failure becomes a rule that stops it happening again. Over time, the constraints document becomes a living record of what you’ve learned about working with AI in your specific context.
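The failure-becomes-a-rule loop can be as lightweight as appending a dated entry each time something goes wrong. A minimal sketch, assuming plain-text rules kept in a list — the entry format is an assumption, not the chapter's:

```python
from datetime import date

def add_rule(constraints: list[str], failure: str, rule: str) -> list[str]:
    """Record a new constraint, noting the failure that prompted it.

    Keeping the triggering failure alongside the rule turns the
    document into the living record the chapter describes.
    """
    entry = f"{rule}  # added {date.today().isoformat()} after: {failure}"
    return constraints + [entry]

rules = ["Never invent statistics, case studies, or client names"]
rules = add_rule(
    rules,
    "AI fabricated a client testimonial in a draft",
    "Never draft testimonials or quotes attributed to real people",
)
```

Whether you keep this in code, a spreadsheet, or a plain text file matters less than the habit: every failure produces exactly one new line.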