Three Incidents That Shaped How I Think About This
Samsung, April 2023. Three employees in the semiconductor division independently uploaded sensitive materials to ChatGPT within weeks of each other. One pasted faulty source code to ask for debugging help. Another submitted code for optimising equipment processes. A third transcribed an internal meeting and asked ChatGPT to write the minutes. Proprietary source code, equipment testing sequences, and strategic discussions from internal meetings all went into OpenAI’s systems. Samsung banned all generative AI tools company-wide the same week. The intellectual property damage is impossible to quantify.
MGM Resorts, September 2023. Attackers called the MGM IT help desk, used social engineering techniques — and possibly AI-enhanced voice synthesis — to convince staff to grant administrator access to their systems. The attack triggered a 10-day operational shutdown. Slot machines went dark. ATMs stopped working. Digital room keys failed across properties including the Bellagio and Mandalay Bay. MGM refused to pay the ransom. The total loss: $100 million. The attack group, Scattered Spider, consisted of young people aged 19 to 22. They did not need nation-state resources. They needed AI tools and a phone.
Arup, January 2024. Described above. The finance worker authorised $25.6 million based on a fabricated video call. The most striking detail: he was initially suspicious. The deepfake video call resolved his doubts. The very thing that security training tells you to trust — seeing someone’s face, hearing their voice — had been weaponised.