The Shadow IT Problem
Cisco’s 2024 Data Privacy Benchmark Study, which surveyed 2,600 privacy and security professionals, found that 48% of respondents admitted to entering non-public company information into generative AI tools. Separately, 43% use AI tools at work without telling their employer.
This is shadow IT: technology used for work outside the business’s visibility or control. The behaviour is not new; employees have been routing work files through personal email and personal cloud storage for years. What is new is the scale of data an AI tool can absorb in a single session.
Pasting a client proposal into ChatGPT to improve the writing. Summarising a confidential contract. Asking an AI to analyse a spreadsheet of customer data. Each feels like a productivity shortcut. Each may also constitute a data breach, depending on the platform used, the classification of the data involved, and the regulatory obligations the business is subject to.
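The "depending on the data" point is where shadow IT policies tend to break down, because the person pasting the text rarely knows its classification. As a purely illustrative sketch, here is what a minimal pre-submission check might look like; the names, patterns, and categories below are hypothetical, not taken from any real data loss prevention product.

```python
# A minimal sketch of a client-side sensitive-data check, assuming an
# organisation wants to flag obviously regulated content before staff paste
# it into an external AI tool. SENSITIVE_PATTERNS, classify, and
# safe_to_submit are illustrative names, and the regexes catch only the
# most obvious identifiers.

import re

# Hypothetical patterns for a few common classes of sensitive data.
SENSITIVE_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(text: str) -> list[str]:
    """Return the names of the sensitive-data categories detected in text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def safe_to_submit(text: str) -> bool:
    """True only when no known sensitive pattern is found."""
    return not classify(text)

if __name__ == "__main__":
    draft = "Proposal contact: jane.doe@example.com, card 4111 1111 1111 1111"
    hits = classify(draft)
    if hits:
        print("Blocked before submission:", ", ".join(hits))
    else:
        print("No known sensitive patterns detected")
```

Pattern matching of this kind catches only obvious identifiers. It would do nothing for a privileged legal memo or an unmarked client proposal, which is one reason organisations typically rely on dedicated data loss prevention tooling at the browser or network layer rather than on checks like this.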
Healthcare is the clearest example. Five hospitals in Western Australia banned ChatGPT in May 2023 after discovering staff had used it to write private medical notes. Netskope research from 2024 found that 71% of healthcare workers still use personal AI accounts for work, and that 81% of data policy violations in healthcare involve regulated health data. Most consumer AI tools are not HIPAA-compliant and will not sign Business Associate Agreements, which means any patient data entered into them becomes data the vendor controls.
The legal profession learned a related lesson in June 2023. Attorneys Steven Schwartz and Peter LoDuca submitted a brief to the US District Court for the Southern District of New York containing fabricated case citations: plausible-looking cases, generated by ChatGPT, that did not exist. Judge P. Kevin Castel fined them $5,000 collectively. More significant than the fine was the severe reputational damage to their firm. A survey of legal professionals found that 11% of the data entered into ChatGPT by law firm staff involved information protected by attorney-client privilege.