A Minimal Workable Policy
Module 5 · Section 5 of 7
For a small team without a security function, this is the minimum viable AI use policy:
AI Tool Use Policy — [Team/Organisation Name]
Approved tools for work use: [List specific tools and tiers — e.g., “Claude Pro, Microsoft Copilot for M365”]
Conditionally approved: [Consumer-tier tools for specific use cases — e.g., “ChatGPT free tier for drafting only, using no client or project-specific information”]
What not to enter into any AI tool (regardless of tier):
- Client names, contact information, or project details
- Unpublished financial data
- Source code from internal systems
- Employee information
- Anything marked confidential or restricted in our document classification system
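A list like this can also be enforced mechanically. As a minimal sketch, here is a hypothetical pre-submission check that flags prompt text containing classification markers or known client names before it is pasted into an AI tool. The marker words and the `flag_prompt` function name are illustrative assumptions, not part of any official tooling; a real deployment would pull both lists from your document classification system.

```python
# Hypothetical pre-submission screen for AI prompts.
# Markers below are illustrative examples, not an official taxonomy.
CLASSIFICATION_MARKERS = ("confidential", "restricted")

def flag_prompt(text: str, client_names: frozenset = frozenset()) -> list:
    """Return reasons the prompt should be held for review (empty list = OK)."""
    reasons = []
    lowered = text.lower()
    # Check for document-classification markers anywhere in the text.
    for marker in CLASSIFICATION_MARKERS:
        if marker in lowered:
            reasons.append(f"contains classification marker: {marker!r}")
    # Check for client names supplied by the caller (case-insensitive).
    for name in client_names:
        if name.lower() in lowered:
            reasons.append(f"mentions client name: {name!r}")
    return reasons
```

Even a check this crude catches the most common accident: pasting a document whose header already says "Confidential" into a consumer-tier tool.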
Verification rule: Any unusual financial request, credential reset, or access grant received via email, message, or voice call — regardless of how credible it appears — requires verification by calling back on a number from our own contact records, never a number provided in the request itself.
If you think something went wrong: Contact [name/email] without delay. No-blame reporting. Early disclosure limits damage.
This takes under an hour to write and covers the most common failure modes.