Why Bans Fail
Module 5 · Section 2 of 7
The reflex response to AI security risk, especially after a high-profile incident, is to ban the tools. Samsung did it after the semiconductor leak. JPMorgan Chase, Goldman Sachs, and Bank of America banned ChatGPT in February 2023. Many of those bans were eventually softened or replaced with more nuanced policies as the businesses recognised that a blanket ban simply moved AI use underground.
Banning a tool that people find genuinely useful does not make the underlying need go away. If ChatGPT helps someone draft a report in 20 minutes that would otherwise take two hours, banning ChatGPT means that person will do one of three things:

- use it anyway from their phone or personal account, creating a shadow IT problem with no visibility;
- stop getting the benefit and become less productive, creating a retention and morale problem; or
- start using a different tool that may have worse security properties, solving nothing.
The goal is not zero AI use. The goal is AI use that does not create avoidable risk.