Deepfake Video
Module 2 · Section 4 of 7
The Arup incident from Module 1 is the clearest example of operational deepfake video use: an entire multi-person video conference fabricated from publicly available footage. The technology that powered it is now available to anyone, at scale.
Tools like Deep-Live-Cam — which reached number one on GitHub’s trending repositories in August 2024 — enable real-time face-swapping during live video calls using a single source photo. HeyGen and D-ID offer text-to-video avatar creation, with the first video free. Tencent’s service produces half-body deepfakes within 24 hours for approximately $145.
The detection problem is severe. Humans viewing high-quality deepfake videos achieve only 24.5% accuracy in identifying them as synthetic — worse than random chance, because people are actively fooled rather than simply uncertain. Commercial deepfake detectors achieve 78% accuracy at best on real-world examples, and they degrade badly on content they have not been specifically trained on.
One practical test that still works as of early 2026: ask the person on the video call to turn their head slowly to a 45-degree angle and hold it for two seconds. Current real-time deepfake tools produce visible glitching or distortion at angles that differ from the source material. This is not a permanent solution — the technology will improve — but it is a meaningful check today.
What to watch for: any video call that involves an unusual financial request, a request for system access, or a decision that bypasses normal process. The scenario is almost always urgent and confidential: “We need to move quickly on this and keep it between us.” Set a policy: no financial transfer of any size is authorised on the basis of a video call alone; a second verification via a known phone number or in person is required.
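That policy can be made concrete as a simple authorisation gate. The sketch below is illustrative only: the `TransferRequest` structure, the channel names, and the trusted-channel set are assumptions, not any real system's API. The point it encodes is that a video call never counts as verification, regardless of amount; only an out-of-band channel does.

```python
from dataclasses import dataclass, field

# Assumed channel labels; a real system would define its own taxonomy.
TRUSTED_SECOND_CHANNELS = {"known_phone_number", "in_person"}

@dataclass
class TransferRequest:
    amount: float
    channels_verified: set = field(default_factory=set)

def is_authorised(request: TransferRequest) -> bool:
    """A video call alone never authorises a transfer, at any amount.

    Authorisation requires at least one verification over a trusted
    out-of-band channel (a known phone number, or in person).
    """
    # Note: "video_call" is deliberately absent from the trusted set,
    # so it contributes nothing here.
    return bool(request.channels_verified & TRUSTED_SECOND_CHANNELS)

# Video-only verification is rejected, even for a small amount:
video_only = TransferRequest(amount=500.0,
                             channels_verified={"video_call"})

# Confirming via a known phone number satisfies the policy:
confirmed = TransferRequest(amount=500.0,
                            channels_verified={"video_call",
                                               "known_phone_number"})
```

The design choice worth noting is that the amount never enters the check: tiered thresholds (“transfers under $X need only one channel”) are exactly what the urgent-and-confidential script is built to exploit.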