Deepfakes at Work: Even Your Boss Could Be a Fake
U.S. intelligence and security agencies, including the National Security Agency (NSA), the FBI, and the Cybersecurity and Infrastructure Security Agency (CISA), have published a joint report warning that the threat from “synthetic media,” or deepfakes, has grown significantly. Synthetic media refers to artificially generated text, video, and audio that is becoming increasingly difficult to distinguish from the real thing.
According to the agencies, scammers and spies often use deepfakes to impersonate company employees and gain access to corporate systems, or to deceive a company’s clients. The primary targets are military personnel, government employees, and first responders, along with critical infrastructure and defense companies.
Two notable incidents were reported in May 2023. In one, attackers imitated the voice and appearance of a company’s CEO during a WhatsApp call, even managing to recreate the interior of his room. In the other, criminals combined fake audio, video, and text messages to pose as one of a company’s executives. The conversation began on WhatsApp and then moved to a video conference on the Teams platform. “The connection quality was very poor, so the attacker suggested switching to text and began insisting on a money transfer,” the report states. “At this point, the victim started to suspect something was wrong and ended the conversation.”
The agencies also cite Eurasia Group’s 2023 list of top political risks, in which generative artificial intelligence ranks third. According to that report, advances in AI could undermine social trust and strengthen the hand of authoritarian regimes.
Synthetic media produced with generative technologies is a convenient tool for spreading disinformation in the political and social spheres.
To guard against deepfake threats, the agencies recommend that companies deploy tools for detecting and verifying media in real time. In addition, cybersecurity teams should develop a detailed incident response plan that covers a range of attack scenarios and is regularly rehearsed through training exercises.
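Beyond technical detection, the Teams incident described above points to a simple procedural safeguard that an incident response plan can encode: hold any payment request made over chat or video until the requester reads back a one-time code delivered through a separate, pre-registered channel (for example, the executive’s known phone number). The sketch below is a minimal illustration of this out-of-band verification idea, not a procedure from the report; the function names are hypothetical.

```python
import hmac
import secrets


def issue_challenge() -> str:
    """Generate a one-time code to send over a separate, trusted
    channel (e.g. the requester's pre-registered phone number)."""
    return secrets.token_hex(4)


def confirm(challenge: str, response: str) -> bool:
    """Compare the code read back on the call in constant time."""
    return hmac.compare_digest(challenge, response)


# Usage: a transfer request arriving over chat or video is held
# until the requester repeats the code delivered out of band.
code = issue_challenge()
print(confirm(code, code))            # correct echo -> request may proceed
print(confirm(code, "not-the-code"))  # mismatch -> treat as possible deepfake
```

The point of the sketch is that the check runs over a channel the attacker does not control, so a convincing fake voice or video on the compromised channel is not enough to authorize the transfer.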