Writing a Policy That Works
Four principles make an AI use policy effective:
Be specific about the risk, not just the rule. “Don’t enter confidential information into AI tools” is harder to follow than “Don’t enter client names, specific financial figures, or unpublished strategy documents into consumer-tier AI tools.” The first requires judgement about what counts as confidential. The second gives clear examples people can apply without asking for clarification every time.
Create a tiered framework rather than a binary. Distinguish between approved tools (enterprise tier, data handling agreement in place), conditional tools (consumer tier, acceptable only for specific, listed use cases), and prohibited tools (no acceptable use case for work purposes). In practice, most teams collapse this to two categories: approved for specific tasks, and not approved. Either version is more actionable than a blanket ban and more protective than blanket permission.
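A tiered framework is also easy to encode once the tiers are named, which helps if the policy ever feeds a tool-request form or an internal reference page. The sketch below is a minimal illustration in Python under that assumption; the registry, tool names, and use cases are hypothetical placeholders, not recommendations.

```python
from enum import Enum


class Tier(Enum):
    APPROVED = "approved"        # enterprise tier, data handling agreement in place
    CONDITIONAL = "conditional"  # consumer tier, listed use cases only
    PROHIBITED = "prohibited"    # no acceptable use case for work purposes


# Hypothetical registry mapping each tool to (tier, permitted use cases).
# Tool names and use cases are illustrative placeholders.
TOOL_POLICY = {
    "enterprise-assistant": (Tier.APPROVED, ["any work task"]),
    "consumer-chatbot": (Tier.CONDITIONAL, ["drafting public copy",
                                            "summarising published articles"]),
    "unvetted-plugin": (Tier.PROHIBITED, []),
}


def is_permitted(tool: str, use_case: str) -> bool:
    """Approved tools pass for any task; conditional tools only for listed use cases."""
    tier, allowed = TOOL_POLICY.get(tool, (Tier.PROHIBITED, []))
    if tier is Tier.APPROVED:
        return True
    return tier is Tier.CONDITIONAL and use_case in allowed


print(is_permitted("consumer-chatbot", "drafting public copy"))        # True
print(is_permitted("consumer-chatbot", "summarising client contracts"))  # False
```

Keeping the tool list and the tiers in one place means the policy and the inventory of tools get updated together rather than drifting apart.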
Specify what “sensitive” means in your context. Every business has specific categories of information that are especially sensitive: the client list, the pricing model, the technical architecture, the acquisition target. Make these explicit. “Sensitive information” is ambiguous. “Do not enter client names, signed contract terms, or the contents of the [specific folder]” is not.
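Explicit categories have a second benefit: they lend themselves to a lightweight pre-submission check. The sketch below assumes a hypothetical screen_prompt helper with placeholder patterns; real lists of client names or contract identifiers would be maintained by whoever owns the policy, and pattern matching will never catch everything, so a check like this supplements the written policy rather than replacing it.

```python
import re

# Hypothetical patterns for the categories the policy names explicitly.
# The client names and formats here are placeholders for illustration only.
SENSITIVE_PATTERNS = {
    "client name": re.compile(r"\b(Acme Corp|Globex)\b", re.IGNORECASE),
    "contract reference": re.compile(r"\bCONTRACT-\d{4,}\b"),
    "unpublished figure": re.compile(r"\bQ[1-4]\s+forecast\b", re.IGNORECASE),
}


def screen_prompt(text: str) -> list[str]:
    """Return the sensitive categories the text appears to contain."""
    return [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]


hits = screen_prompt("Summarise the Q2 forecast for Acme Corp.")
if hits:
    print("Blocked: contains", ", ".join(hits))
    # Blocked: contains client name, unpublished figure
```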
Include a reporting path, not just prohibitions. People need to know what to do if they think they may have made a mistake. If the only thing the policy says is “don’t do this,” then someone who has already done it has no good option — they can self-report and face consequences, or they can say nothing and leave a potential data issue unaddressed. A no-blame reporting path for incidents encourages early disclosure, which limits damage.