10 Real Voice Deepfake Attacks (And Why AI Voice Agents Are the Next Target)
The voice on the other end of the line used to be the most trusted signal in business communications. In 2019, that started to break.
In March 2019, the CEO of a UK-based energy firm thought he was speaking with his boss - the chief executive of the German parent company. The voice was familiar, the accent right, the cadence convincing. The caller asked for an urgent €220,000 wire to a Hungarian supplier within the hour. The CEO sent the money. It was the first publicly documented case of AI-generated voice fraud, and the insurer Euler Hermes covered the entire loss.
Six years later, that case is a curiosity. The toolchain that took skill and budget in 2019 takes three seconds of reference audio and a free download in 2026. The FBI says AI-enabled fraud topped $893 million in reported losses in 2025, part of a record $21 billion in total US cybercrime that year. The real number is almost certainly higher - most incidents never get reported.
Below are ten documented voice attacks from the last five years. Some emptied accounts. Some were foiled at the door by a single sharp employee. Every one of them is what your AI voice agent will see if it's listening to a phone line in 2026.
1. UAE / Hong Kong bank heist - $35 million (January 2020)
The Hong Kong branch manager of a Japanese company took a call from someone whose voice matched that of a company director he had spoken with before. Forged emails confirmed the request. The manager authorized $35 million in transfers. The fraud allegedly involved at least 17 individuals across multiple jurisdictions, and only became public in October 2021, when a US court filing surfaced: UAE investigators had asked American authorities for help tracing $400,000 of the proceeds routed through US bank accounts at Centennial Bank. (AI Incident Database)
2. Arup, Hong Kong - $25.6 million (February 2024)
An employee at the Hong Kong office of Arup, one of the world's largest engineering firms, joined a video call with what appeared to be the CFO and several senior colleagues. Every face and every voice on the call was AI-generated. Convinced by the meeting, the employee made 15 separate transfers totaling HK$200 million ($25.6 million) before the fraud was discovered. It is the single largest publicly documented deepfake fraud loss to date.
3. AI-cloned Biden robocalls, New Hampshire - January 2024
Two days before the New Hampshire primary, voters received robocalls in an AI-cloned voice of President Joe Biden telling them not to vote. The FCC adopted a $6 million forfeiture order against political consultant Steve Kramer, who was also indicted on state felony voter-suppression and impersonation charges. Lingo Telecom - the carrier that transmitted the calls - paid a separate $1 million civil penalty and agreed to a first-of-its-kind STIR/SHAKEN compliance plan. (Perkins Coie analysis)
4. LastPass - foiled (April 2024)
An employee at password manager LastPass received WhatsApp calls, texts, and at least one voicemail featuring a deepfake of CEO Karim Toubba. The employee ignored them - WhatsApp wasn't a normal business channel, and the messages carried the trademark urgency of a social-engineering attempt. No money was lost. The company later published an account of the attempt and how it prepares for the next one.
5. WPP - foiled (May 2024)
Attackers created a WhatsApp account using a public photo of WPP CEO Mark Read, then set up a Microsoft Teams meeting where they played an AI voice clone and looped YouTube footage of him, impersonating him in the chat at the same time. The target - described as an "agency leader" - was asked to set up a new business and hand over money and personal information. The attempt failed; no money or data was lost. (OECD AI incident record)
6. Ferrari - foiled by a book question (July 2024)
A Ferrari executive received WhatsApp messages and a phone call apparently from CEO Benedetto Vigna, with a convincing southern Italian accent, discussing a confidential deal that supposedly required a currency-hedge transaction. The executive sensed something was off and asked the caller a question only Vigna would know - the title of a book Vigna had recommended days earlier (*Decalogue of Complexity* by Alberto Felice De Toni). The caller could not answer and hung up. (MIT Sloan Management Review)
7. Italian defense minister cloned - €1 million from Moratti (February 2025)
Scammers used an AI-cloned voice of Italy's defense minister Guido Crosetto to call some of the country's wealthiest people - including Giorgio Armani, former Inter Milan owner Massimo Moratti, Prada co-founder Patrizio Bertelli, and members of the Beretta and Menarini families. The caller asked for roughly €1 million to be wired to a Hong Kong bank account, claiming the funds were needed to free kidnapped Italian journalists. Only Moratti paid - €1 million (~$1.03 million). Italian police later traced the funds to a Dutch bank account and froze them. (Bloomberg)
8. Singapore multinational - $499,000 (March 2025)
A finance director at a Singapore multinational firm joined what looked like a routine Zoom call with her senior leadership team. Every face was AI-generated. Every voice was synthesized. She authorized a $499,000 transfer before anyone flagged the fraud. (ScamWatchHQ summary)
9. FBI warning - senior US officials being impersonated (April–December 2025)
The FBI's Internet Crime Complaint Center issued a public service announcement warning that since April 2025, malicious actors have impersonated senior US state, White House, and Cabinet officials - along with members of Congress - using AI-generated voice messages and texts. Targets include current and former federal and state officials, governors, senators, and business leaders. A follow-up PSA in December 2025 said the campaign was growing more sophisticated. Objectives ranged from financial fraud to intelligence gathering and access to protected systems. (CNN coverage)
10. Florida grandmother - $15,000 in cash (July 2025)
Sharon Brightwell of Dover, Florida, received a call apparently from her daughter saying she had been in a car accident and needed legal help. The voice was AI-cloned from public audio. She sent $15,000 in cash. Distress and grandparent scams of this kind drove more than $5 million in reported losses in 2025, per the FBI; AI-related elder-fraud complaints alone totaled roughly $352 million that year. (Bitdefender summary, SecureWorld)
The pattern
Across these ten incidents, the attack surface is always the same: a phone line, a video call, or a voice memo - a channel where the listener is trusting their ears.
The defenses that worked were never technical. WPP, Ferrari, and LastPass were all saved by the same thing: an employee who noticed something was off. That is a defense that does not scale. Sooner or later, someone hesitates a beat too long, and the money is gone.
The threat side is moving fast. Voice cloning now requires as little as three seconds of audio. CrowdStrike's 2025 threat report recorded a 442% increase in voice-phishing attacks between the first and second halves of 2024. Pindrop's 2025 Voice Intelligence and Security Report found that one in every 599 inbound calls to a contact center is fraudulent, and one in 106 already shows deepfake characteristics. The European Parliament has warned that existing consumer protections are not keeping up.
Why AI voice agents are the next target
Every incident above required a human at the receiving end - a CEO, a finance director, a grandmother. Those targets are still being attacked. But the next layer is already in production.
AI voice agents - built on Vapi, Retell, LiveKit, Twilio, and the dozens of platforms layered on top - handle inbound and outbound calls autonomously for banks, insurers, healthcare networks, and contact centers. As the sketch after this list makes concrete, they have:
- A microphone, which cannot tell synthetic audio from real audio.
- A transcript pipeline, which strips out the exact signal - spectrogram artifacts, prosody glitches, synthesis fingerprints - that would have caught the deepfake.
- An LLM, which is happy to act on any plausible request from any plausible voice.
- No audio-side verification layer.
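Here is a minimal sketch of that transcript-only agent loop. Every function in it is a placeholder rather than any specific platform's API; the detail that matters is the single point where the raw audio frame is dropped, after which everything downstream works from words alone.

```python
from dataclasses import dataclass
from typing import Iterable, Optional


@dataclass
class AgentReply:
    text: str
    tool_call: Optional[dict] = None   # e.g. {"action": "wire_transfer", ...}


def speech_to_text(frame: bytes) -> str:
    """Placeholder STT: in production, a streaming transcription service."""
    return frame.decode("utf-8", errors="ignore")


def llm_respond(transcript: str) -> AgentReply:
    """Placeholder LLM call: will act on any plausible request from any plausible voice."""
    return AgentReply(text=f"Understood: {transcript}")


def handle_inbound_call(audio_frames: Iterable[bytes]) -> None:
    for frame in audio_frames:           # raw audio from the telephony layer
        transcript = speech_to_text(frame)
        # The raw frame is discarded past this point. Spectrogram artifacts,
        # prosody glitches, and synthesis fingerprints - the signals that flag
        # a clone - never reach anything downstream of this line.
        if not transcript.strip():
            continue
        reply = llm_respond(transcript)   # the LLM only ever sees words
        if reply.tool_call:
            pass                          # execute the action, with no audio-side check
        print(reply.text)                 # in production: synthesized back to the caller
```

Nothing in that loop is misconfigured. It is simply blind on the only axis the attack uses.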
The Ferrari executive noticed the call was off because he knew his CEO personally. A voice agent has no prior relationship with the caller, no shared history, no book to ask about. It just picks up the phone.
Voice-biometric authentication is not a backstop either. University of Waterloo researchers showed that modern deepfake models can defeat commercial voice-biometric systems - when an attacker is allowed up to six authentication attempts (the typical retry budget before lockout), the deepfake gets through in roughly 99% of cases. Voiceprint matching was designed to answer "does this voice match the enrolled user?" - not "is this voice real?"
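The gap is easy to state in code. In the minimal sketch below, both scorers and both thresholds are hypothetical, not any product's API; the point is that a voiceprint match answers only the first question, so a system that stops at the first gate admits a convincing clone of the enrolled user.

```python
def speaker_match_score(audio: bytes, enrolled_voiceprint: bytes) -> float:
    """Stub: similarity between the call audio and the enrolled voiceprint.
    A good clone of the enrolled user scores high here - which is the problem."""
    return 0.95  # placeholder value for illustration


def authenticity_score(audio: bytes) -> float:
    """Stub: likelihood the audio came from a live human speaker, judged from
    synthesis artifacts rather than from who the voice sounds like."""
    return 0.10  # placeholder value for illustration


def admit_caller(audio: bytes, enrolled_voiceprint: bytes) -> bool:
    # The question voice biometrics was built to answer: does this match the user?
    matches_user = speaker_match_score(audio, enrolled_voiceprint) > 0.80
    # The question it was never built to answer: is this voice real at all?
    sounds_real = authenticity_score(audio) > 0.90
    # Matching alone would admit a convincing clone; both gates are needed.
    return matches_user and sounds_real
```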
The bouncer at the door
CAPTCHA stopped bots on the web. Cloudflare Turnstile made it invisible. Vocos Bouncer is the same idea for voice. One API call per inbound stream. Sub-second verdict. Drops in front of any voice stack. Real callers walk in. Cloned voices stay outside.
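In front of an agent, that amounts to a short pre-answer check: buffer the first moments of inbound audio, score them, and only then hand the call to the agent. The sketch below is illustrative only - the endpoint URL, field names, and threshold are placeholders, not Bouncer's actual interface.

```python
import requests

DETECTOR_URL = "https://example.invalid/v1/verify-stream"  # placeholder endpoint
THRESHOLD = 0.5                                            # placeholder cutoff


def gate_inbound_call(buffered_audio: bytes) -> bool:
    """Return True if the caller's audio looks human, False if it looks synthetic."""
    resp = requests.post(
        DETECTOR_URL,
        files={"audio": ("call.wav", buffered_audio, "audio/wav")},
        timeout=2,  # a gate only works if the verdict comes back fast
    )
    resp.raise_for_status()
    verdict = resp.json()                 # e.g. {"synthetic_probability": 0.97}
    return verdict["synthetic_probability"] < THRESHOLD


# Usage inside the call handler: real callers proceed to the agent,
# suspected clones are routed to a human or dropped.
# if gate_inbound_call(buffered_audio):
#     hand_off_to_agent(call)
# else:
#     escalate_or_reject(call)
```

The design constraint is latency: the verdict has to land before the agent's first response, which is why it needs to be sub-second.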
If your voice agent is in production and you don't have a detector listening to the audio side, the next entry on this list could be yours.
---
*Topics: AI voice deepfake attacks, voice cloning fraud, deepfake CEO scam, AI voice agent security, voice phishing (vishing), synthetic voice fraud, deepfake detection, voice biometric bypass, contact center fraud, voice agent vulnerabilities, AI fraud statistics 2025, FBI AI voice warning, audio deepfake API, real-time voice verification.*