When everyone on your video call is fake, how do you know who to trust?

🎙️ Related Podcast: California Compliance Currents: Navigating Privacy, AI, and Cybersecurity in the Golden State


Executive Summary: The Terrifying New Normal

In early February 2026, the AI Incident Database released findings that should keep every business leader awake at night: deepfake fraud has officially gone “industrial.”

This isn’t hyperbole. What began as a technological curiosity—swapping celebrity faces in videos—has metastasized into a sophisticated criminal industry capable of fabricating entire video conferences filled with AI-generated executives, passing job interviews with synthetic faces, and extracting millions of dollars from companies that trusted what their eyes and ears told them.

The numbers tell the story:

  • 8 million deepfake files now circulate online, up from 500,000 just three years ago
  • $12.5 billion in consumer fraud losses in the US during 2025
  • $25 million stolen from one engineering firm via a single video call where every participant was fake
  • 900% annual growth in deepfake volume
  • 24.5% accuracy—that’s how often humans correctly identify high-quality deepfakes (worse than a coin flip)

As MIT researcher Simon Mylius warned: “It’s become very accessible to a point where there is really effectively no barrier to entry.”

Welcome to the new reality. The scammers have industrialized. Your defenses need to catch up.


From Celebrity Fakes to Business Fraud: The Evolution of Deepfake Attacks

Phase 1: Proof of Concept (2017-2019)

The term “deepfake” was coined on Reddit in 2017, primarily describing face-swapping technology used for non-consensual celebrity imagery. Creating convincing fakes required significant expertise and computing power—this was the domain of researchers and early adopters, not criminals.

That changed in 2019 with the first high-profile financial attack. A UK energy firm’s CEO received a phone call from what he believed was his German parent company’s CEO, requesting an urgent €220,000 transfer to a Hungarian supplier. The voice was a near-perfect AI clone. The money vanished.

The proof of concept was complete. Criminals had a new weapon.

Phase 2: Democratization (2020-2023)

Open-source tools like DeepFaceLab made face-swapping accessible to anyone with a computer. Mobile apps enabled basic manipulations. Voice cloning accuracy improved dramatically.

By 2023, 500,000 deepfake files circulated online, and identity fraud attempts using deepfakes had surged by 3,000%. The tools were spreading faster than awareness.

Phase 3: Industrial Scale (2024-2025)

This was the inflection point. Real-time deepfake video calls became possible. Criminals proved they could fabricate not just voices, but entire multi-person video conferences. The average business loss per deepfake incident: $500,000.

Attacks began occurring every five minutes. North Korean IT workers started using deepfakes to infiltrate Fortune 500 companies at scale. The technology had gone from “proof of concept” to mass-produced criminal infrastructure.

Phase 4: Industrial Production (2026)

The AI Incident Database’s February 2026 analysis confirmed what security professionals feared: we’ve crossed into full industrial production.

Today, voice cloning requires only 3 seconds of audio. A convincing deepfake video can be produced for as little as $1. There is no meaningful barrier to entry. Anyone can be the target—and increasingly, anyone can be the attacker.

Harvard researcher Fred Heiding captures the trajectory: “The scale is changing. It’s becoming so cheap, almost anyone can use it now. The models are getting really good—they’re becoming much faster than most experts think.”


Case Study: The $25 Million All-Fake Video Call

The Attack That Changed Everything

In January 2024, a finance worker at Arup Group—a British multinational engineering firm with 18,000 employees—received an unusual message. It claimed to be from the company’s UK-based Chief Financial Officer, requesting participation in a “confidential transaction.”

The employee was suspicious. It looked like phishing.

Then came the video call.

On screen, the CFO appeared—his face, his voice, his mannerisms. Alongside him were multiple other senior executives, all known to the employee. The urgency was explained. The transaction was authorized.

Over the following days, the finance worker executed 15 separate transfers totaling $25 million (HKD 200 million) to five different Hong Kong bank accounts.

The fraud was only discovered when the employee contacted Arup headquarters through official channels to follow up.

The Revelation

When Hong Kong police investigated, they uncovered something unprecedented.

“(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” Senior Superintendent Baron Chan Shun-ching told reporters.

Every single person on that video call—the CFO, the senior executives, the authoritative voices demanding action—was an AI-generated deepfake. The attackers had scraped publicly available videos and audio of Arup executives from corporate presentations, earnings calls, and media appearances. They’d reconstructed entire digital humans convincing enough to pass a live video conference.

Why This Case Matters

The Arup attack wasn’t just a theft. It was proof that the “see it to believe it” assumption—the foundation of trust in video communications—is fundamentally broken.

Before Arup, deepfake fraud typically involved one-on-one interactions: a fake CEO voice on a phone call, a synthetic executive in a direct video chat. The multi-participant attack demonstrated that criminals can now fabricate entire meetings, complete with supporting cast members, side conversations, and the social proof of multiple “colleagues” validating a request.

If everyone on the call can be fake, the call itself proves nothing.


Case Study: The Singapore CFO Who Thought He Knew His Executives

The Attack

In 2025, a finance director at an unnamed Singaporean multinational logged into what appeared to be a routine video call with company leadership. The CFO was there. Other top executives joined. The discussion turned to wire transfers that needed authorization.

Everything seemed normal. The faces were familiar. The voices matched. The finance director authorized the transfers.

Approximately $500,000 disappeared into fraudulent accounts.

The Aftermath

Like the Arup victim, this finance director had done everything seemingly right. They’d verified the request visually. They’d seen the executives on video. They’d heard their voices.

None of it was real.

The Singapore case underscores a critical vulnerability: the more we trust video conferencing as a verification tool, the more valuable it becomes as an attack vector. Criminals aren’t just exploiting technology—they’re exploiting our assumption that video calls are inherently more trustworthy than emails or phone calls.

That assumption is now a liability.


The Deepfake Job Applicant Epidemic

When an AI Security CEO Nearly Hired a Synthetic Person

In January 2026, Jason Rebholz, CEO of AI security startup Evoke, posted job openings on LinkedIn. Within hours, his inbox contained a strange message.

A stranger was recommending a “talented engineer” for the position. Rebholz noticed the recommender’s profile picture “looked like an anime character”—the first red flag. The candidate’s resume was hosted on Vercel, a pattern suggesting it was AI-generated with tools like Claude Code. The candidate’s emails landed straight in spam.

Then the recommender followed up with unusual insistence: “Check your spam folder, he replied to you.”

Rebholz, who has researched deepfakes for years, decided to proceed with the interview—curious whether he could identify the deception.

The Interview

The candidate joined with their camera off. It took over 30 seconds before they turned it on—a classic stalling tactic while the deepfake technology initialized.

What Rebholz saw raised immediate alarms:

  • The background was “extremely fake”—a virtual backdrop with visible artifacts and rendering errors
  • Facial edges appeared “very soft”—a telltale sign of AI-generated imagery struggling to blend synthetic faces with real backgrounds
  • Parts of the body kept appearing and disappearing—the deepfake couldn’t maintain consistent rendering
  • Greenscreen reflection was visible in the candidate’s glasses
  • Dimples appeared and disappeared unnaturally—the AI inconsistently rendering facial features
  • The candidate repeated every question before answering—compensating for AI processing latency

Most eerily, the candidate’s answers contained near-verbatim quotes from Rebholz’s own public statements about AI security. The deepfake was trained on the interviewer’s own content.

The Inner Turmoil

Even knowing what he was seeing, Rebholz experienced doubt:

“Even though I’m 95 percent sure I’m right here, what if I’m wrong and I’m impacting another human’s ability to get a job? That was literally the dialog that was going on in my head, even though I knew it was a deepfake the whole time.”

This psychological barrier—the fear of falsely accusing a real person—is exactly what deepfake attackers exploit. The social cost of calling out a fake feels higher than the professional risk of missing one.

Rebholz recorded the interview and sent it to Moveris, a deepfake detection firm. They confirmed: the applicant was AI-generated.

The Lesson

“Small companies are also victims,” Rebholz warns. “You don’t need to be a massive tech company to be a victim.”


The North Korea Connection: State-Sponsored Deepfake Infiltration

An Unprecedented Threat

The FBI has issued multiple warnings about a sophisticated campaign by North Korean operatives to infiltrate American companies through fraudulent IT job applications. The scale is staggering.

Amazon alone has blocked over 1,800 suspected DPRK applicants between April 2024 and December 2025, with a 27% increase in applications quarter-over-quarter. According to The Register, “most Fortune 500 companies have fallen for the scam.”

This isn’t random crime. It’s state-sponsored infiltration using deepfake technology as a primary enabler.

How the Scheme Works

  1. Identity Theft: Operatives acquire stolen or forged US identity documents and passports
  2. Laptop Farms: US-based facilitators receive company laptops on behalf of the remote “workers”
  3. Remote Access: Software is installed allowing overseas access to corporate systems
  4. Deepfake Interviews: AI-generated video and audio helps operatives pass video interviews
  5. Salary Siphoning: Wages are funneled back to the DPRK regime
  6. Escalation: Once inside, operatives access source code, intellectual property, and corporate networks
  7. Extortion: Recent FBI warnings indicate some DPRK workers have begun stealing data and demanding ransom payments

The FBI’s Response

The US government has taken aggressive action:

  • Federal arrest warrants issued January 21, 2025
  • DOJ announced sweeping law enforcement actions June 30, 2025
  • Multiple indictments and arrests
  • OFAC sanctions imposed

But the campaign continues. The tools are too accessible, the payoff too lucrative, and the geopolitical motivations too strong for enforcement alone to stem the tide.

Beyond State Actors

The DPRK scheme has proven the concept for criminal organizations and even desperate individuals. Experian’s 2026 Future of Fraud Forecast warns that employment fraud will escalate as AI improves interview performance.

“It’s a very competitive job market,” the report notes. “Individuals may offer their services to get through a technical interview”—selling deepfake interview assistance to candidates who can’t pass on their own merits.

The synthetic job candidate is becoming normalized.


By The Numbers: The 2026 Deepfake Explosion

Volume Growth

| Year | Deepfake Files Online | Growth Rate |
|------|-----------------------|-------------|
| 2023 | 500,000 | Baseline |
| 2024 | 3-4 million | ~700% |
| 2025 | 8 million | 900% |
| 2026 (projected) | 15+ million | Continuing |

Financial Losses

Consumer Fraud:

  • US 2025: $12.5 billion (FTC)
  • UK 2025 (9 months): £9.4 billion (CIFAS)
  • North America Q1 2025: $200+ million from deepfakes alone

Business Losses:

  • Average per incident: $500,000
  • Large enterprises: Up to $680,000
  • Financial services sector: $603,000 average (34% above other industries)
  • Mexico: $627,000 average (highest globally)

Projections:

  • US 2027: $40 billion in generative AI fraud (Deloitte forecast)
  • Compound Annual Growth Rate: 32%
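
To make the compounding concrete, here is a short Python sketch that back-solves the year-by-year trajectory implied by Deloitte’s 32% CAGR and $40 billion 2027 endpoint. The intermediate figures are derived for illustration only; just the growth rate and the 2027 endpoint come from the forecast.

```python
# Sanity-checking the Deloitte projection cited above: if US generative-AI
# fraud losses compound at 32% per year and reach $40B in 2027, what does
# each intervening year look like? Only the CAGR and the 2027 endpoint are
# from the forecast; the per-year figures are derived for illustration.

CAGR = 0.32
TARGET_YEAR, TARGET_LOSSES_BN = 2027, 40.0

def implied_losses_bn(year: int) -> float:
    """Losses (in $ billions) implied for `year` by the endpoint and CAGR."""
    return TARGET_LOSSES_BN / (1 + CAGR) ** (TARGET_YEAR - year)

for year in range(2023, 2028):
    print(f"{year}: ${implied_losses_bn(year):,.1f}B")
# 2023: $13.2B ... 2026: $30.3B ... 2027: $40.0B
```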

Attack Frequency

  • 2024: One deepfake attack every 5 minutes
  • Q1 2025: 179 reported incidents (19% more than all of 2024 combined)
  • 2026: Continuous, industrial-scale attacks

Regional Breakdown (2022-2023 Increase)

  • North America: 1,740%
  • Asia-Pacific: 1,530%

Sector Analysis

  • Cryptocurrency: 88% of all deepfake fraud (654% surge 2023-2024)
  • Financial Services: Primary target (53% of professionals targeted in 2024)
  • Fintech: 700% increase in identity verification bypass attempts
  • All sectors: 60% reported 25%+ loss increase year-over-year

Why Humans Can’t Detect Deepfakes

The Detection Problem

Here’s the uncomfortable truth: you almost certainly cannot reliably identify a high-quality deepfake.

Human Detection Accuracy:

  • Average accuracy: 55-60% (barely better than random guessing)
  • High-quality video deepfakes: Only 24.5% detection rate
  • Reliable detectors: 0.1% of the global population

The Overconfidence Gap:

  • 60% of people believe they could spot a deepfake
  • 71% are aware deepfakes exist but cannot identify them
  • 56% of businesses claim confidence in detection abilities
  • Only 6% have actually avoided financial losses

You think you can tell. You probably can’t. This gap between confidence and competence is a primary attack vector.

Traditional Tells Are Dying

Security professionals once taught people to watch for visual anomalies. Those guidelines are increasingly obsolete:

  • “Soft edges” around faces—being solved by newer AI models
  • Inconsistent lighting—improving rapidly
  • Unnatural blinking patterns—now accurately mimicked
  • Hand waving in front of face—Jason Rebholz calls this “completely dead” as a detection method

The tells that worked in 2023 don’t work in 2026. The technology is advancing faster than awareness.

Automated Detection Struggles Too

Commercial detection tools exist (Reality Defender, Moveris, Pindrop), but they face significant challenges:

  • 45-50% accuracy drop in real-world conditions vs. lab tests
  • Below 50% accuracy against deepfake techniques not in their training data
  • Bias problems: Lower accuracy on darker skin tones and younger faces
  • The generalization problem: New deepfake methods evade trained detectors

As Reality Defender acknowledges: “Every time a more advanced algorithm emerges, detection models must quickly adapt, and the cycle repeats itself.”

The Arms Race Is Being Lost

  • Detection technology market growth: 28-42% annually
  • Deepfake generation growth: 900%+ annually

The gap is widening, not closing.


The “Machine-to-Machine” Future: Experian’s 2026 Forecast

On January 13, 2026, Experian released its Future of Fraud Forecast identifying five major threats for the coming year. The most alarming: Machine-to-Machine Mayhem.

What’s Coming

As organizations deploy agentic AI—shopping bots, customer service agents, automated assistants—criminals are developing methods to blend “good bots” with “bad bots.” The challenge becomes distinguishing legitimate AI agents from fraudulent ones.

Experian predicts 2026 will be a “tipping point” forcing conversations about AI liability. When an AI agent is deceived by another AI agent, who bears responsibility?

The Five Threats

  1. Machine-to-Machine Mayhem (Top threat): AI bots attacking AI bots
  2. Deepfake Job Candidates: Employment fraud escalating as AI improves
  3. Smart Home Device Exploitation: IoT vulnerabilities as attack vectors
  4. Website Cloning at Scale: AI-powered replication overwhelming fraud teams
  5. Emotionally Intelligent Scambots: Romance and family emergency scams with unprecedented sophistication

The Business Impact

  • 72% of business leaders identify AI-enabled fraud/deepfakes as their top operational challenge for 2026
  • 60% of companies reported 25%+ increase in financial losses (2024-2025)
  • Fraud losses grew by 25% while report volume stayed flat—attacks are getting more effective, not just more frequent

As Kathleen Peters, Experian’s Chief Innovation Officer, explains: “With less expertise, [fraudsters] are able to create more convincing scams and more convincing text messages that they can blast out at scale.”


How to Protect Yourself: Individual Countermeasures

The good news: while detection is failing, verification still works. You just need to implement it consistently.

Immediate Actions

  1. Establish a family “safe word” that you never share online—use it to verify emergency calls
  2. Be skeptical of any urgent request for money, even from “family” or close friends
  3. If you receive a distress call from a loved one, hang up and call them back on a number you know is theirs
  4. Never transfer money based solely on a video or phone call—always verify through a separate channel
  5. Limit your online audio/video presence—voice cloning needs only 3 seconds of audio

Verification Protocol

When you receive any unusual request:

  • Ask questions only the real person would know—not information available online
  • Request video calls for important matters—but don’t trust video alone
  • Use multiple channels to verify identity—if someone calls, text to confirm; if they video call, call back on a known number
  • Trust your gut—if something feels off, it probably is

The 3-Second Rule

Remember: criminals need just 3 seconds of your voice to clone it convincingly. Consider:

  • Limiting voice content on social media
  • Being cautious about who you speak to on phone calls
  • Understanding that any audio of you can be weaponized against your family and colleagues

How to Protect Your Business: Corporate Protocols

The Foundation: Verification, Not Detection

Stop trying to identify deepfakes. Start implementing verification systems that make deepfakes irrelevant.

Mandatory Callback Protocol

For ANY unexpected high-stakes request via video call:

  1. Hang up immediately
  2. Call back on a number from the OFFICIAL company directory
  3. Never use contact information provided in the suspicious communication

This single protocol would have prevented both the Arup and Singapore attacks. Implement it unconditionally.
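
As a minimal sketch of how that rule can be encoded in a payments workflow, the Python snippet below always sources the callback number from the official directory and accepts contact details from the request only so they can be pointedly ignored. The directory contents, names, and error handling are illustrative assumptions, not a real system.

```python
# Callback rule as code: the callback number ALWAYS comes from the vetted
# directory; anything embedded in the suspicious message is never used.
# Directory contents and identifiers are illustrative.

OFFICIAL_DIRECTORY = {"cfo@example.com": "+44 20 7946 0000"}

def callback_number(requester_id: str, number_in_request: str | None) -> str:
    official = OFFICIAL_DIRECTORY.get(requester_id)
    if official is None:
        raise PermissionError("Requester not in official directory: refuse")
    return official  # deliberately never number_in_request

def authorize_transfer(requester_id: str, number_in_request: str | None,
                       confirmed_via_callback: bool) -> str:
    number = callback_number(requester_id, number_in_request)
    if not confirmed_via_callback:
        raise PermissionError(f"Hang up; call back on {number} before acting")
    return "transfer may proceed"
```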

Code Word Systems

Establish authentication phrases for sensitive operations:

  • “Code of the day” that changes regularly
  • Pre-arranged verification questions only insiders would know
  • Must be asked, never volunteered—if someone offers the code word unprompted, that’s suspicious
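
One plausible way to implement a rotating code of the day is to derive it from a shared secret with HMAC, in the spirit of TOTP. The sketch below assumes that approach; it is not a vetted design, and the secret shown is a placeholder that would have to be distributed out of band.

```python
import hmac
import hashlib
from datetime import date

# Placeholder secret; in practice, distribute out of band, never by email.
SHARED_SECRET = b"distributed-out-of-band"

def code_of_the_day(day: date) -> str:
    """Derive a six-digit daily code from a shared secret (TOTP-like)."""
    mac = hmac.new(SHARED_SECRET, day.isoformat().encode(), hashlib.sha256)
    return f"{int.from_bytes(mac.digest()[:4], 'big') % 1_000_000:06d}"

# The verifier must ASK for the code. A caller who volunteers it
# unprompted is showing exactly the red flag described above.
print(code_of_the_day(date(2026, 2, 14)))  # six digits, secret-dependent
```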

Multi-Channel Verification

Require confirmation through 2+ independent channels before executing high-risk actions:

  • Email → Video call → Callback → Execute
  • No single channel can authorize major transactions
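
A minimal sketch of the quorum rule, assuming a simple set-based model of channels; the channel names are illustrative:

```python
# A high-risk action executes only after identity is re-confirmed on two
# or more channels INDEPENDENT of the one the request arrived on.

REQUIRED_INDEPENDENT = 2

def may_execute(origin: str, confirmed_on: set[str]) -> bool:
    """The originating channel never counts toward the quorum."""
    return len(confirmed_on - {origin}) >= REQUIRED_INDEPENDENT

assert not may_execute("video_call", {"video_call"})       # call alone: no
assert may_execute("video_call", {"callback", "email"})    # two others: yes
```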

Time Delays

Build mandatory waiting periods for large transfers:

  • Defeats urgency-based manipulation
  • Allows for verification and cool-down
  • “We need this done in the next hour” should be a red flag, not a compliance trigger
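
A sketch of a mandatory hold, with the threshold and delay chosen purely for illustration:

```python
# Transfers above a threshold cannot execute before a fixed delay elapses,
# defeating "we need this in the next hour." Policy values are assumptions.

from datetime import datetime, timedelta, timezone

HOLD_THRESHOLD_USD = 50_000
HOLD_PERIOD = timedelta(hours=24)

def earliest_execution(amount_usd: float, submitted_at: datetime) -> datetime:
    """Earliest moment a transfer may execute under the hold policy."""
    if amount_usd >= HOLD_THRESHOLD_USD:
        return submitted_at + HOLD_PERIOD
    return submitted_at

now = datetime.now(timezone.utc)
assert earliest_execution(25_000_000, now) == now + HOLD_PERIOD
```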

Video Call Protocols

Before the call:

  • Verify meeting invites through separate channels
  • Confirm participant identities beforehand
  • Set up callback verification in advance

During the call:

  • Require cameras on at all times
  • Request participants turn off virtual backgrounds
  • Ask participants to move objects in their background or hold them up to the camera
  • Note delayed camera activation (deepfake initialization)
  • Watch for soft facial edges, inconsistent lighting, and features appearing or disappearing

For sensitive requests:

  • Never authorize transfers during the call itself
  • Always verify through callback protocols
  • Get multiple approvals from different people, contacted separately

Hiring Protocols (FBI Recommendations)

  1. Identity Verification:
    • Cross-reference photos with social media and portfolio sites
    • Verify prior employment directly with companies
    • Verify education directly with institutions
    • Check for misspellings in documents (common in fraudulent applications)
  2. In-Person Requirements:
    • Mandate in-person drug tests or fingerprinting
    • Require first week on-site, even for remote roles
    • Capture images for comparison with future meetings
  3. Virtual Interview Red Flags:
    • Cameras must stay on throughout
    • No virtual backgrounds (or require background change during interview)
    • Ask about current location and request they look out windows
    • Note unusual urgency from referrers
  4. Equipment Protocols:
    • Only ship equipment to address on ID documents
    • Require additional verification for different addresses
    • No system access until background checks complete
  5. Payment Analysis:
    • Flag employees with matching banking information
    • Monitor frequent bank account changes
    • Be wary of virtual currency payment requests
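
The payment-analysis step lends itself to simple tooling. Below is a hedged sketch that flags payroll accounts shared by more than one “employee,” one of the laptop-farm tells the FBI describes; the record layout is assumed for illustration.

```python
# Flag distinct "employees" whose payroll deposits share a bank account.
# Field names and data layout are illustrative assumptions.

from collections import defaultdict

def shared_accounts(payroll: list[dict]) -> dict[str, list[str]]:
    """Map each bank account to the employees paid into it, keeping only
    accounts that serve more than one employee."""
    by_account: dict[str, list[str]] = defaultdict(list)
    for row in payroll:
        by_account[row["account"]].append(row["employee"])
    return {acct: emps for acct, emps in by_account.items() if len(emps) > 1}

flags = shared_accounts([
    {"employee": "A. Smith", "account": "9981"},
    {"employee": "B. Jones", "account": "9981"},  # same account: flag both
    {"employee": "C. Wu", "account": "4410"},
])
assert flags == {"9981": ["A. Smith", "B. Jones"]}
```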

United States Federal Action

TAKE IT DOWN Act (Public Law 119-12)

  • Signed May 19, 2025
  • Criminalizes publishing non-consensual intimate deepfakes
  • Penalties: Up to 2 years imprisonment (3 years for minors)
  • Requires platforms to remove content upon request

Preventing Deep Fake Scams Act (H.R. 1734)

  • Currently before 119th Congress
  • Specifically addresses deepfakes used for fraud
  • Pending passage

State-Level Action

As of mid-2025, over 45 states have enacted some form of deepfake legislation. Total bills proposed (2019-2024): 319 across all 50 states.

Recent examples:

| State | Law | Effective Date | Key Provisions |
|-------|-----|----------------|----------------|
| Washington | HB 1205 | July 27, 2025 | Criminalizes “forged digital likeness” with intent to defraud |
| Pennsylvania | New law | September 5, 2025 | Creates/distributes deepfakes with fraudulent/injurious intent |
| California | Pioneer | 2019 | First state with deepfake laws |
| Texas | Pioneer | 2019 | Early adoption |

International Response

European Union - AI Act:

  • Passed March 2024
  • Comprehensive AI regulation framework
  • Disclosure requirements for synthetic media
  • Significant penalties for non-compliance

China:

  • Strict regulations on deepfake creation and distribution
  • Registration requirements for synthetic media services

UK:

  • Online Safety Act provisions addressing deepfakes
  • New criminal offenses for non-consensual intimate imagery

The Gap Problem

Despite increasing legislation, critical gaps remain:

  • Most jurisdictions still rely on traditional legal doctrines (privacy, defamation, fraud)
  • Existing laws often formulated before deepfakes emerged as significant threats
  • Many lack explicit reference to AI-generated digital manipulation
  • Cross-border enforcement remains extremely difficult
  • Technology continues evolving faster than legislation

What’s Coming Next: The Future of Synthetic Fraud

Near-Term (2026-2027)

Real-Time Interactive Deepfakes: The shift from pre-rendered clips to live synthesis is accelerating. We’re moving toward AI-driven actors whose faces and voices adapt instantly to conversational prompts. Scammers will deploy responsive avatars instead of fixed videos—synthetic humans that can answer unexpected questions, respond to challenges, and maintain deception dynamically.

Unified Identity Modeling: Next-generation systems will capture not just appearance, but behavior—movement patterns, speech cadences, mannerisms across contexts. A “unified identity model” could convince observers that “this person behaves like Person X over time,” not just “this looks like Person X in this moment.”

The Trust Erosion Crisis

Harvard researcher Fred Heiding identifies the deepest long-term threat:

“That’ll be the big pain point here, the complete lack of trust in digital institutions, and institutions and material in general.”

The ultimate impact may not be financial losses—devastating as they are—but the systematic erosion of trust in:

  • Digital communications of all kinds
  • Video and audio evidence
  • Institutional communications
  • Election materials and political discourse
  • Media authenticity
  • Any information delivered through screens

When anything can be faked convincingly, nothing can be trusted implicitly. That’s not a technological problem. That’s a civilizational one.


Conclusion: Adapting to the Industrial Age of Deception

Deepfake fraud has gone industrial. The tools are cheap, accessible, and improving faster than defenses can adapt. The Arup attack proved that entire video conferences can be fabricated. The Singapore case showed that experienced finance professionals can be deceived by synthetic executives. The Evoke interview demonstrated that even AI security experts struggle to identify fakes with complete certainty.

The 8 million deepfakes online today will be 15 million by year’s end, and the attacks occurring every five minutes will only accelerate. Human detection is no better than a coin flip. Automated detection is losing the arms race.

But you are not defenseless.

The countermeasures that work don’t require detecting deepfakes—they require verification protocols that make deepfakes irrelevant. Mandatory callbacks. Code words. Multi-channel confirmation. Time delays. In-person verification for high-stakes actions.

These aren’t technological solutions. They’re procedural ones. They work because they don’t depend on identifying whether a face or voice is real—they depend on confirming identity through channels attackers can’t simultaneously compromise.

The organizations that survive this industrial age of deception will be those that stop trusting what they see and hear, and start verifying through systems designed for a world where anything can be faked.

That world is here.

Adapt accordingly.


Key Takeaways

✅ Deepfake fraud is now industrial-scale—8 million files, 900% annual growth, attacks every 5 minutes

✅ Major attacks have succeeded—$25M Arup (all-fake video call), $500K Singapore (fake executives), and countless others

✅ Human detection is essentially useless—24.5% accuracy for high-quality video deepfakes (worse than guessing)

✅ North Korean operatives use deepfakes for corporate infiltration—most Fortune 500 companies have been targeted

✅ Voice cloning needs only 3 seconds of audio—limit your voice presence online

✅ Verification beats detection—implement callback protocols, code words, and multi-channel confirmation

✅ Never authorize high-stakes actions on a video call alone—always verify through separate channels

✅ The technology will keep improving—build processes that don’t depend on identifying fakes


Resources

Report Incidents:

  • FBI Internet Crime Complaint Center (IC3): ic3.gov

Detection Tools (Enterprise):

  • Reality Defender
  • Moveris
  • Pindrop

Further Reading:

  • FBI Alert: “North Korean IT Worker Threats to U.S. Businesses” (July 2025)
  • Experian: “2026 Future of Fraud Forecast”
  • AI Incident Database: 2026 Analysis

This investigation was compiled from 22 sources including The Guardian, FBI IC3 advisories, Fortune, Experian reports, The Register, CNN, and academic researchers at Harvard and MIT. All statistics current as of February 2026.