
Your CFO Just Called to Approve a Wire Transfer. It Wasn't Your CFO.

Deepfake fraud losses hit $1.1B in 2025. One company lost $25.6M from a single AI-generated video call. Here's the 6-point verification protocol every company needs.

By Meetesh Patel

In January 2024, a finance employee at Arup, a global engineering firm with 18,500 employees, joined a video call with what looked like his CFO and several colleagues. They asked him to process a series of urgent wire transfers. He did. Fifteen transfers. One day. $25.6 million gone. Everyone else on that call was an AI-generated deepfake. The attackers had trained their models on publicly available conference recordings and company meeting footage. As of today, no one has been arrested and no money has been recovered.

That's not a one-off. Deepfake fraud losses in the US hit $1.1 billion in 2025, tripling from $360 million the year before. Pindrop's research shows deepfake fraud attempts surged over 1,300% in a single year, moving from something that happened once a month to multiple attempts per day. And the technology keeps getting cheaper: a convincing voice clone now requires roughly three seconds of sample audio.

If your company approves wire transfers, vendor changes, or financial decisions over phone or video, this is your risk profile right now.

Why traditional verification doesn't work anymore

The Arup attack succeeded because it exploited the one thing most people still trust: a live video call. When the transfer request first arrived by email, the finance employee suspected phishing. But once he saw faces he recognized and heard voices he knew, his guard dropped. That's exactly the playbook.

Studies from 2025 show humans perform barely better than a coin flip when trying to identify sophisticated deepfake audio. Your ear isn't a security tool. And video isn't much better. The deepfakes used against Arup weren't pre-recorded clips; they were real-time synthetic participants on a live call.

Gartner predicts that by 2026, 30% of enterprises won't trust voice or video verification on its own. That's not a forecast about the distant future. That's now.

The regulatory response is building

Regulators are catching up, but slowly.

The FCC unanimously ruled in February 2024 that AI-generated voices count as "artificial" under the Telephone Consumer Protection Act (47 U.S.C. § 227). That makes unauthorized AI voice clone calls illegal and gives state attorneys general a direct enforcement hook.

The FTC finalized its impersonation rule (16 C.F.R. Part 461), banning impersonation of businesses and government agencies, AI-generated or otherwise. A separate proposal to hold AI tool providers liable under a "means and instrumentalities" theory is still pending. If adopted, it would extend liability upstream to companies that build voice cloning tools knowing they'll be used for fraud.

Tennessee's ELVIS Act (effective July 2024) is the first state law explicitly targeting AI voice cloning. It covers both deepfake creators and tool providers, with civil liability and criminal penalties (a Class A misdemeanor, punishable by up to 11 months and 29 days of imprisonment).

And H.R. 1734, the Preventing Deep Fake Scams Act, would create a Treasury-led task force on AI financial fraud with a mandate to develop best practices for financial institutions.

But here's the thing: none of this would have stopped the Arup attack. The attackers weren't in a regulated jurisdiction. They didn't use a US-based voice cloning tool. And the victim company had no procedural defense in place. Regulation matters, but your internal protocols matter more.

What to do this week

1. Kill voice/video as a standalone verification method for financial approvals. If someone calls or appears on video asking for a wire transfer, that call proves nothing by itself. Period.

2. Implement out-of-band callback verification. For any payment over your threshold (pick one: $10K, $25K, whatever fits your business), require a callback to a pre-registered phone number. Never use a number provided in the request itself. A sketch of how this gate fits together with point 3 follows the list.

3. Mandate two-person approval for non-routine transfers. One person can be fooled. Two people being fooled simultaneously, through separate channels, is orders of magnitude harder.

4. Create a "financial safe word" protocol. This sounds low-tech because it is. Establish a rotating verbal passphrase between executives authorized to approve large transactions. Change it monthly. Don't store it digitally.

5. Brief your finance team on the Arup case. Print it out. Walk through it. The single most valuable thing you can do is make the people who move money understand that video calls can be completely fabricated.

6. Review your insurance. Deepfake wire fraud sits in an awkward gap between cyber insurance, crime insurance, and social engineering coverage. Check your policies and compliance posture now, not after an incident. Many standard cyber policies don't cover social engineering losses.
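
To make points 2 and 3 concrete, here is a minimal sketch of the approval gate expressed as logic rather than policy. It is written in Python, and every name in it is an assumption for illustration: the callback registry, the $10K threshold, and the PaymentRequest fields are placeholders for whatever your payments workflow actually uses, not a reference to any real system.

# Minimal sketch of the gates in points 2 and 3. All names here
# (REGISTERED_CALLBACKS, the threshold, PaymentRequest, may_release)
# are hypothetical, invented for this illustration.

from dataclasses import dataclass, field

# Pre-registered callback numbers, maintained out of band.
# Never add or change an entry based on a number supplied in a request.
REGISTERED_CALLBACKS = {
    "cfo": "+1-555-0100",
    "treasurer": "+1-555-0101",
}

CALLBACK_THRESHOLD_USD = 10_000  # pick whatever fits your business

@dataclass
class PaymentRequest:
    amount_usd: float
    requester_role: str
    # Number the caller offered. Kept for the incident log only;
    # the gate below deliberately never dials or compares it.
    number_offered_in_request: str | None = None
    # Roles actually reached on their REGISTERED number, out of band.
    callbacks_confirmed: set[str] = field(default_factory=set)
    # Distinct employees who approved through separate channels.
    approvers: set[str] = field(default_factory=set)

def may_release(req: PaymentRequest) -> bool:
    """Return True only if the transfer passes both procedural gates."""
    if req.amount_usd < CALLBACK_THRESHOLD_USD:
        return True  # routine payment, normal controls apply
    callback_ok = (req.requester_role in REGISTERED_CALLBACKS
                   and req.requester_role in req.callbacks_confirmed)
    dual_ok = len(req.approvers) >= 2  # one fooled person isn't enough
    return callback_ok and dual_ok

# Example: a $25.6M "urgent" request fails until a callback to the
# CFO's registered number succeeds AND two people have signed off.
req = PaymentRequest(amount_usd=25_600_000, requester_role="cfo")
assert not may_release(req)
req.callbacks_confirmed.add("cfo")
req.approvers.update({"alice", "bob"})
assert may_release(req)

The design choice worth copying is the second field: the number offered in the request exists only for the incident log, and nothing in the release path ever reads it. Whatever system you actually use, the pre-registered contact list should be the only source a callback can come from.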

The companies that will avoid a deepfake wire fraud loss aren't the ones with the best AI detection software. They're the ones that built verification protocols that don't depend on human senses at all.


Building internal controls against AI-enabled fraud is core governance work. If your verification protocols haven't been updated for the deepfake era, that's a conversation worth having with outside counsel who understands both the technology and the liability exposure.

Disclaimer: This article is provided for informational purposes only and does not constitute legal advice. The information contained herein should not be relied upon as legal advice and readers are encouraged to seek the advice of legal counsel. The views expressed in this article are solely those of the author and do not necessarily reflect the views of Consilium Law LLC.
