When everyone on your video call is fake, how do you know who to trust?
Executive Summary: The Terrifying New Normal
In early February 2026, the AI Incident Database released findings that should keep every business leader awake at night: deepfake fraud has officially gone "industrial."
This isn't hyperbole. What began as a technological curiosity (swapping celebrity faces in videos) has metastasized into a sophisticated criminal industry capable of fabricating entire video conferences filled with AI-generated executives, passing job interviews with synthetic faces, and extracting millions of dollars from companies that trusted what their eyes and ears told them.
The numbers tell the story:
- 8 million deepfake files now circulate online, up from 500,000 just three years ago
- $12.5 billion in consumer fraud losses in the US during 2025
- $25 million stolen from one engineering firm via a single video call where every participant was fake
- 900% annual growth in deepfake volume
- 24.5% accuracy: the rate at which humans correctly identify high-quality deepfakes (worse than a coin flip)

As MIT researcher Simon Mylius warned: "It's become very accessible to a point where there is really effectively no barrier to entry."
Welcome to the new reality. The scammers have industrialized. Your defenses need to catch up.
From Celebrity Fakes to Business Fraud: The Evolution of Deepfake Attacks
Phase 1: Proof of Concept (2017-2019)
The term "deepfake" was coined on Reddit in 2017, primarily describing face-swapping technology used for non-consensual celebrity imagery. Creating convincing fakes required significant expertise and computing power; this was the domain of researchers and early adopters, not criminals.
That changed in 2019 with the first high-profile financial attack. A UK energy firm's CEO received a phone call from what he believed was his German parent company's CEO, requesting an urgent €220,000 transfer to a Hungarian supplier. The voice was a near-perfect AI clone. The money vanished.
The proof of concept was complete. Criminals had a new weapon.
Phase 2: Democratization (2020-2023)
Open-source tools like DeepFaceLab made face-swapping accessible to anyone with a computer. Mobile apps enabled basic manipulations. Voice cloning accuracy improved dramatically.
By 2023, 500,000 deepfake files circulated online, and identity fraud attempts using deepfakes had surged by 3,000%. The tools were spreading faster than awareness.
Phase 3: Industrial Scale (2024-2025)
This was the inflection point. Real-time deepfake video calls became possible. Criminals proved they could fabricate not just voices, but entire multi-person video conferences. The average business loss per deepfake incident: $500,000.
Attacks began occurring every five minutes. North Korean IT workers started using deepfakes to infiltrate Fortune 500 companies at scale. The technology had gone from "proof of concept" to mass-produced criminal infrastructure.
Phase 4: Industrial Production (2026)
The AI Incident Database's February 2026 analysis confirmed what security professionals feared: we've crossed into full industrial production.
Today, voice cloning requires only 3 seconds of audio. A convincing deepfake video can be produced for as little as $1. There is no meaningful barrier to entry. Anyone can be the target, and increasingly, anyone can be the attacker.
Harvard researcher Fred Heiding captures the trajectory: "The scale is changing. It's becoming so cheap, almost anyone can use it now. The models are getting really good; they're becoming much faster than most experts think."
Case Study: The $25 Million All-Fake Video Call
The Attack That Changed Everything
In January 2024, a finance worker at Arup Group, a British multinational engineering firm with 18,000 employees, received an unusual message. It claimed to be from the company's UK-based Chief Financial Officer, requesting participation in a "confidential transaction."
The employee was suspicious. It looked like phishing.
Then came the video call.
On screen, the CFO appeared: his face, his voice, his mannerisms. Alongside him were multiple other senior executives, all known to the employee. The urgency was explained. The transaction was authorized.
Over the following days, the finance worker executed 15 separate transfers totaling $25 million (HKD 200 million) to five different Hong Kong bank accounts.
The fraud was only discovered when the employee contacted Arup headquarters through official channels to follow up.
The Revelation
When Hong Kong police investigated, they uncovered something unprecedented.
"(In the) multi-person video conference, it turns out that everyone [he saw] was fake," Senior Superintendent Baron Chan Shun-ching told reporters.
Every single person on that video call (the CFO, the senior executives, the authoritative voices demanding action) was an AI-generated deepfake. The attackers had scraped publicly available videos and audio of Arup executives from corporate presentations, earnings calls, and media appearances. They'd reconstructed entire digital humans convincing enough to pass a live video conference.
Why This Case Matters
The Arup attack wasn't just a theft. It was proof that the "see it to believe it" assumption, the foundation of trust in video communications, is fundamentally broken.
Before Arup, deepfake fraud typically involved one-on-one interactions: a fake CEO voice on a phone call, a synthetic executive in a direct video chat. The multi-participant attack demonstrated that criminals can now fabricate entire meetings, complete with supporting cast members, side conversations, and the social proof of multiple "colleagues" validating a request.
If everyone on the call can be fake, the call itself proves nothing.
Case Study: The Singapore CFO Who Thought He Knew His Executives
The Attack
In 2025, a finance director at an unnamed Singaporean multinational logged into what appeared to be a routine video call with company leadership. The CFO was there. Other top executives joined. The discussion turned to wire transfers that needed authorization.
Everything seemed normal. The faces were familiar. The voices matched. The finance director authorized the transfers.
Approximately $500,000 disappeared into fraudulent accounts.
The Aftermath
Like the Arup victim, this finance director had done everything seemingly right. They'd verified the request visually. They'd seen the executives on video. They'd heard their voices.
None of it was real.
The Singapore case underscores a critical vulnerability: the more we trust video conferencing as a verification tool, the more valuable it becomes as an attack vector. Criminals aren't just exploiting technology; they're exploiting our assumption that video calls are inherently more trustworthy than emails or phone calls.
That assumption is now a liability.
The Deepfake Job Applicant Epidemic
When an AI Security CEO Nearly Hired a Synthetic Person
In January 2026, Jason Rebholz, CEO of AI security startup Evoke, posted job openings on LinkedIn. Within hours, his inbox contained a strange message.
A stranger was recommending a "talented engineer" for the position. Rebholz noticed the recommender's profile picture "looked like an anime character," the first red flag. The candidate's resume was hosted on Vercel, suggesting AI generation using tools like Claude Code. Emails went directly to spam.
Then the recommender followed up with unusual insistence: "Check your spam folder, he replied to you."
Rebholz, who has researched deepfakes for years, decided to proceed with the interview, curious whether he could identify the deception.
The Interview
The candidate joined with their camera off. It took over 30 seconds before they turned it on, a classic stalling tactic while the deepfake technology initialized.
What Rebholz saw raised immediate alarms:
- The background was "extremely fake": a virtual backdrop with visible artifacts and rendering errors
- Facial edges appeared "very soft," a telltale sign of AI-generated imagery struggling to blend synthetic faces with real backgrounds
- Parts of the body kept appearing and disappearing; the deepfake couldn't maintain consistent rendering
- Greenscreen reflection was visible in the candidate's glasses
- Dimples appeared and disappeared unnaturally as the AI inconsistently rendered facial features
- The candidate repeated every question before answering, compensating for AI processing latency

Most eerily, the candidate's answers contained near-verbatim quotes from Rebholz's own public statements about AI security. The deepfake was trained on the interviewer's own content.
The Inner Turmoil
Even knowing what he was seeing, Rebholz experienced doubt:
"Even though I'm 95 percent sure I'm right here, what if I'm wrong and I'm impacting another human's ability to get a job? That was literally the dialog that was going on in my head, even though I knew it was a deepfake the whole time."
This psychological barrier (the fear of falsely accusing a real person) is exactly what deepfake attackers exploit. The social cost of calling out a fake feels higher than the professional risk of missing one.
Rebholz recorded the interview and sent it to Moveris, a deepfake detection firm. They confirmed: the applicant was AI-generated.
The Lesson
"Small companies are also victims," Rebholz warns. "You don't need to be a massive tech company to be a victim."
The North Korea Connection: State-Sponsored Deepfake Infiltration
An Unprecedented Threat
The FBI has issued multiple warnings about a sophisticated campaign by North Korean operatives to infiltrate American companies through fraudulent IT job applications. The scale is staggering.
Amazon alone blocked over 1,800 suspected DPRK applicants between April 2024 and December 2025, with a 27% quarter-over-quarter increase in applications. According to The Register, "most Fortune 500 companies have fallen for the scam."
This isn't random crime. It's state-sponsored infiltration using deepfake technology as a primary enabler.
How the Scheme Works
1. Identity Theft: Operatives acquire stolen or forged US identity documents and passports
2. Laptop Farms: US-based facilitators receive company laptops on behalf of the remote "workers"
3. Remote Access: Software is installed allowing overseas access to corporate systems
4. Deepfake Interviews: AI-generated video and audio help operatives pass video interviews
5. Salary Siphoning: Wages are funneled back to the DPRK regime
6. Escalation: Once inside, operatives access source code, intellectual property, and corporate networks
7. Extortion: Recent FBI warnings indicate some DPRK workers have begun stealing data and demanding ransom payments

The FBI's Response
The US government has taken aggressive action:
- Federal arrest warrants issued January 21, 2025
- DOJ announced sweeping law enforcement actions June 30, 2025
- Multiple indictments and arrests
- OFAC sanctions imposed
But the campaign continues. The tools are too accessible, the payoff too lucrative, and the geopolitical motivations too strong for enforcement alone to stem the tide.
Beyond State Actors
The DPRK scheme has proven the concept for criminal organizations and even desperate individuals. Experian's 2026 Future of Fraud Forecast warns that employment fraud will escalate as AI improves interview performance.
"It's a very competitive job market," the report notes. "Individuals may offer their services to get through a technical interview," selling deepfake interview assistance to candidates who can't pass on their own merits.
The synthetic job candidate is becoming normalized.
By The Numbers: The 2026 Deepfake Explosion
Volume Growth
| Year | Deepfake Files Online | Growth Rate |
| --- | --- | --- |
| 2023 | 500,000 | Baseline |
| 2024 | 3-4 million | ~700% |
| 2025 | 8 million | 900% |
| 2026 (projected) | 15+ million | Continuing |
Financial Losses
Consumer Fraud:
- US 2025: $12.5 billion (FTC)
- UK 2025 (9 months): £9.4 billion (CIFAS)
- North America Q1 2025: $200+ million from deepfakes alone
Business Losses:
- Average per incident: $500,000
- Large enterprises: up to $680,000
- Financial services sector: $603,000 average (34% above other industries)
- Mexico: $627,000 average (highest globally)
Projections:
- US 2027: $40 billion in generative AI fraud (Deloitte forecast)
- Compound annual growth rate: 32%
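A CAGR figure like this is easy to sanity-check. The sketch below compounds a baseline loss figure forward; the $12.5 billion starting value and four-year horizon are illustrative assumptions for the arithmetic, not Deloitte's actual model inputs:

```python
# Hypothetical sketch: sanity-checking a compound-annual-growth-rate
# (CAGR) projection. The baseline and horizon are illustrative assumptions.

def project_losses(baseline_billions: float, cagr: float, years: int) -> float:
    """Compound a baseline loss figure forward at a fixed annual rate."""
    return baseline_billions * (1 + cagr) ** years

# A $12.5B baseline compounding at 32% for four years lands near $38B,
# the same order of magnitude as the ~$40B forecast.
print(f"${project_losses(12.5, 0.32, 4):.1f}B")  # prints $37.9B
```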
Attack Frequency
- 2024: One deepfake attack every 5 minutes
- Q1 2025: 179 reported incidents (19% more than all of 2024 combined)
- 2026: Continuous, industrial-scale attacks
Regional Breakdown (2022-2023 Increase)
- North America: 1,740%
- Asia-Pacific: 1,530%
Sector Analysis
- Cryptocurrency: 88% of all deepfake fraud (654% surge 2023-2024)
- Financial services: primary target (53% of professionals targeted in 2024)
- Fintech: 700% increase in identity verification bypass attempts
- All sectors: 60% reported a 25%+ loss increase year-over-year

Why Humans Can't Detect Deepfakes
The Detection Problem
Here's the uncomfortable truth: you almost certainly cannot reliably identify a high-quality deepfake.
Human Detection Accuracy:
- Average accuracy: 55-60% (barely better than random guessing)
- High-quality video deepfakes: only 24.5% detection rate
- Reliable detectors: 0.1% of the global population
The Overconfidence Gap:
- 60% of people believe they could spot a deepfake
- 71% are aware deepfakes exist but cannot identify them
- 56% of businesses claim confidence in detection abilities
- Only 6% have actually avoided financial losses

You think you can tell. You probably can't. This gap between confidence and competence is a primary attack vector.
Traditional Tells Are Dying
Security professionals once taught people to watch for visual anomalies. Those guidelines are increasingly obsolete:
- "Soft edges" around faces: being solved by newer AI models
- Inconsistent lighting: improving rapidly
- Unnatural blinking patterns: now accurately mimicked
- Hand waving in front of the face: Jason Rebholz calls this "completely dead" as a detection method

The tells that worked in 2023 don't work in 2026. The technology is advancing faster than awareness.
Automated Detection Struggles Too
Commercial detection tools exist (Reality Defender, Moveris, Pindrop), but they face significant challenges:
- 45-50% accuracy drop in real-world conditions vs. lab tests
- Below 50% accuracy against deepfake techniques not in their training data
- Bias problems: lower accuracy on darker skin tones and younger faces
- The generalization problem: new deepfake methods evade trained detectors

As Reality Defender acknowledges: "Every time a more advanced algorithm emerges, detection models must quickly adapt, and the cycle repeats itself."
The Arms Race Is Being Lost
- Detection technology market growth: 28-42% annually
- Deepfake generation growth: 900%+ annually
The gap is widening, not closing.
The "Machine-to-Machine" Future: Experian's 2026 Forecast
On January 13, 2026, Experian released its Future of Fraud Forecast identifying five major threats for the coming year. The most alarming: Machine-to-Machine Mayhem.
Whatâs Coming
As organizations deploy agentic AI (shopping bots, customer service agents, automated assistants), criminals are developing methods to blend "good bots" with "bad bots." The challenge becomes distinguishing legitimate AI agents from fraudulent ones.
Experian predicts 2026 will be a "tipping point" forcing conversations about AI liability. When an AI agent is deceived by another AI agent, who bears responsibility?
The Five Threats
1. Machine-to-Machine Mayhem (top threat): AI bots attacking AI bots
2. Deepfake Job Candidates: Employment fraud escalating as AI improves interview performance
3. Smart Home Device Exploitation: IoT vulnerabilities as attack vectors
4. Website Cloning at Scale: AI-powered replication overwhelming fraud teams
5. Emotionally Intelligent Scambots: Romance and family emergency scams with unprecedented sophistication
The Business Impact
- 72% of business leaders identify AI-enabled fraud and deepfakes as their top operational challenge for 2026
- 60% of companies reported a 25%+ increase in financial losses (2024-2025)
- Fraud losses grew by 25% while report volume stayed flat: attacks are getting more effective, not just more frequent

As Kathleen Peters, Experian's Chief Innovation Officer, explains: "With less expertise, [fraudsters] are able to create more convincing scams and more convincing text messages that they can blast out at scale."
How to Protect Yourself: Individual Countermeasures
The good news: while detection is failing, verification still works. You just need to implement it consistently.
Immediate Actions
1. Establish a family "safe word" that you never share online; use it to verify emergency calls
2. Be skeptical of any urgent request for money, even from "family" or close friends
3. If you receive a distress call from a loved one, hang up and call them back on a number you know is theirs
4. Never transfer money based solely on a video or phone call; always verify through a separate channel
5. Limit your online audio/video presence; voice cloning needs only 3 seconds of audio
Verification Protocol
When you receive any unusual request:
- Ask questions only the real person would know, not information available online
- Request video calls for important matters, but don't trust video alone
- Use multiple channels to verify identity: if someone calls, text to confirm; if they video call, call back on a known number
- Trust your gut: if something feels off, it probably is
The 3-Second Rule
Remember: criminals need just 3 seconds of your voice to clone it convincingly. Consider:
- Limiting voice content on social media
- Being cautious about who you speak to on phone calls
- Understanding that any audio of you can be weaponized against your family and colleagues
How to Protect Your Business: Corporate Protocols
The Foundation: Verification, Not Detection
Stop trying to identify deepfakes. Start implementing verification systems that make deepfakes irrelevant.
Mandatory Callback Protocol
For ANY unexpected high-stakes request via video call:
1. Hang up immediately
2. Call back on a number from the OFFICIAL company directory
3. Never use contact information provided in the suspicious communication
This single protocol would have prevented both the Arup and Singapore attacks. Implement it unconditionally.
Code Word Systems
Establish authentication phrases for sensitive operations:
- A "code of the day" that changes regularly
- Pre-arranged verification questions only insiders would know
- Codes must be asked for, never volunteered; if someone offers the code word unprompted, that's suspicious
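A rotating code need not be distributed daily; it can be derived from a shared secret, similar in spirit to TOTP one-time passwords. A sketch with an illustrative secret and a daily rotation period:

```python
# Hypothetical sketch of a self-rotating "code of the day", derived from
# a shared secret with HMAC (similar in spirit to TOTP). The secret and
# the daily rotation period are illustrative choices.
import hashlib
import hmac
import time

SHARED_SECRET = b"distributed-offline-never-emailed"  # illustrative secret

def code_of_the_day(secret: bytes, day_index: int) -> str:
    """Derive a 6-digit code from the secret and the day number."""
    digest = hmac.new(secret, str(day_index).encode(), hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

today = int(time.time() // 86400)  # days since the Unix epoch
print(code_of_the_day(SHARED_SECRET, today))
```

Both parties compute the code independently from a secret exchanged out of band, and the challenger always asks for it; a caller who volunteers it unprompted fails the protocol.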
Multi-Channel Verification
Require confirmation through 2+ independent channels before executing high-risk actions:
- Email → video call → callback → execute
- No single channel can authorize major transactions
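The two-channel rule can be expressed as a simple gate. A sketch with hypothetical channel names; note that a live video call is deliberately excluded from the trusted set, since the Arup case showed a call can be wholly fabricated:

```python
# Hypothetical sketch: a transfer executes only after confirmations on
# two or more independent channels. Channel names are illustrative; a
# live video call is intentionally NOT a trusted channel on its own.

TRUSTED_CHANNELS = {"email", "callback", "in_person", "ticketing"}
REQUIRED = 2

def can_execute(confirmations: set[str]) -> bool:
    """True only when 2+ distinct trusted channels have confirmed."""
    return len(confirmations & TRUSTED_CHANNELS) >= REQUIRED

assert not can_execute({"video_call"})        # the call itself proves nothing
assert not can_execute({"callback"})          # one channel is not enough
assert can_execute({"email", "callback"})     # two independent channels
```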
Time Delays
Build mandatory waiting periods for large transfers:
- Defeats urgency-based manipulation
- Allows time for verification and cool-down
- "We need this done in the next hour" should be a red flag, not a compliance trigger
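A mandatory hold is nearly a one-line policy in code. A sketch with an illustrative $100,000 threshold and 24-hour delay (both are policy choices, not figures from the text):

```python
# Hypothetical sketch of a mandatory hold on large transfers. The
# threshold and the 24-hour delay are illustrative policy choices.
from datetime import datetime, timedelta

LARGE_TRANSFER_THRESHOLD = 100_000          # USD, illustrative
MANDATORY_HOLD = timedelta(hours=24)

def earliest_execution(amount: float, requested_at: datetime) -> datetime:
    """Large transfers cannot execute before the hold expires, no matter
    how urgent the request claims to be."""
    if amount >= LARGE_TRANSFER_THRESHOLD:
        return requested_at + MANDATORY_HOLD
    return requested_at

requested = datetime(2026, 2, 1, 9, 0)
assert earliest_execution(250_000, requested) == datetime(2026, 2, 2, 9, 0)
assert earliest_execution(5_000, requested) == requested
```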
Video Call Protocols
Before the call:
- Verify meeting invites through separate channels
- Confirm participant identities beforehand
- Set up callback verification in advance
During the call:
- Require cameras on at all times
- Request participants turn off virtual backgrounds
- Ask participants to pick up or move objects near them on camera (real-time deepfakes struggle to render object interaction)
- Note delayed camera activation (time spent initializing the deepfake)
- Watch for soft facial edges, inconsistent lighting, and features appearing or disappearing
For sensitive requests:
- Never authorize transfers during the call itself
- Always verify through callback protocols
- Get multiple approvals from different people, contacted separately
Hiring Protocols (FBI Recommendations)
1. Identity Verification:
   - Cross-reference photos with social media and portfolio sites
   - Verify prior employment directly with companies
   - Verify education directly with institutions
   - Check for misspellings in documents (common in fraudulent applications)
2. In-Person Requirements:
   - Mandate in-person drug tests or fingerprinting
   - Require the first week on-site, even for remote roles
   - Capture images for comparison with future meetings
3. Virtual Interview Red Flags:
   - Cameras must stay on throughout
   - No virtual backgrounds (or require a background change during the interview)
   - Ask about current location and request they look out windows
   - Note unusual urgency from referrers
4. Equipment Protocols:
   - Only ship equipment to the address on ID documents
   - Require additional verification for different addresses
   - No system access until background checks complete
5. Payment Analysis:
   - Flag employees with matching banking information
   - Monitor frequent bank account changes
   - Be wary of virtual currency payment requests
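The banking-information check is straightforward to automate over payroll records. A sketch with hypothetical record fields, flagging accounts shared by more than one employee, the pattern the FBI guidance associates with laptop-farm schemes:

```python
# Hypothetical sketch: flag bank accounts shared by multiple employees.
# Record field names ("employee_id", "bank_account") are illustrative.
from collections import defaultdict

def flag_shared_accounts(payroll: list[dict]) -> dict[str, list[str]]:
    """Map each bank account used by more than one employee to the
    employee IDs that share it."""
    by_account: dict[str, list[str]] = defaultdict(list)
    for record in payroll:
        by_account[record["bank_account"]].append(record["employee_id"])
    return {acct: ids for acct, ids in by_account.items() if len(ids) > 1}

payroll = [
    {"employee_id": "E1", "bank_account": "111"},
    {"employee_id": "E2", "bank_account": "222"},
    {"employee_id": "E3", "bank_account": "111"},  # shares an account with E1
]
assert flag_shared_accounts(payroll) == {"111": ["E1", "E3"]}
```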
The Regulatory Response: Legal Landscape
United States Federal Action
TAKE IT DOWN Act (Public Law 119-12)
- Signed May 19, 2025
- Criminalizes publishing non-consensual intimate deepfakes
- Penalties: up to 2 years imprisonment (3 years for offenses involving minors)
- Requires platforms to remove content upon request
Preventing Deep Fake Scams Act (H.R. 1734)
- Currently before the 119th Congress
- Specifically addresses deepfakes used for fraud
- Pending passage
State-Level Action
As of mid-2025, over 45 states have enacted some form of deepfake legislation. Total bills proposed (2019-2024): 319 across all 50 states.
Recent examples:
| State | Law | Effective Date | Key Provisions |
| --- | --- | --- | --- |
| Washington | HB 1205 | July 27, 2025 | Criminalizes "forged digital likeness" with intent to defraud |
| Pennsylvania | New law | September 5, 2025 | Targets creating or distributing deepfakes with fraudulent or injurious intent |
| California | Pioneer | 2019 | First state with deepfake laws |
| Texas | Pioneer | 2019 | Early adoption |
International Response
European Union - AI Act:
- Passed March 2024
- Comprehensive AI regulation framework
- Disclosure requirements for synthetic media
- Significant penalties for non-compliance
China:
- Strict regulations on deepfake creation and distribution
- Registration requirements for synthetic media services
UK:
- Online Safety Act provisions addressing deepfakes
- New criminal offenses for non-consensual intimate imagery
The Gap Problem
Despite increasing legislation, critical gaps remain:
- Most jurisdictions still rely on traditional legal doctrines (privacy, defamation, fraud)
- Existing laws were often formulated before deepfakes emerged as significant threats
- Many lack explicit reference to AI-generated digital manipulation
- Cross-border enforcement remains extremely difficult
- Technology continues evolving faster than legislation
Whatâs Coming Next: The Future of Synthetic Fraud
Near-Term (2026-2027)
Real-Time Interactive Deepfakes: The shift from pre-rendered clips to live synthesis is accelerating. We're moving toward AI-driven actors whose faces and voices adapt instantly to conversational prompts. Scammers will deploy responsive avatars instead of fixed videos: synthetic humans that can answer unexpected questions, respond to challenges, and maintain deception dynamically.
Unified Identity Modeling: Next-generation systems will capture not just appearance but behavior (movement patterns, speech cadences, mannerisms across contexts). A "unified identity model" could convince observers that "this person behaves like Person X over time," not just "this looks like Person X in this moment."
The Trust Erosion Crisis
Harvard researcher Fred Heiding identifies the deepest long-term threat:
"That'll be the big pain point here, the complete lack of trust in digital institutions, and institutions and material in general."
The ultimate impact may not be financial losses (devastating as they are) but the systematic erosion of trust in:
- Digital communications of all kinds
- Video and audio evidence
- Institutional communications
- Election materials and political discourse
- Media authenticity
- Any information delivered through screens

When anything can be faked convincingly, nothing can be trusted implicitly. That's not a technological problem. That's a civilizational one.
Conclusion: Adapting to the Industrial Age of Deception
Deepfake fraud has gone industrial. The tools are cheap, accessible, and improving faster than defenses can adapt. The Arup attack proved that entire video conferences can be fabricated. The Singapore case showed that experienced finance professionals can be deceived by synthetic executives. The Evoke interview demonstrated that even AI security experts struggle to identify fakes with complete certainty.
The 8 million deepfakes online today will be 15 million by year's end, and the attacks occurring every five minutes will only accelerate. Human detection is no better than a coin flip. Automated detection is losing the arms race.
But you are not defenseless.
The countermeasures that work don't require detecting deepfakes; they require verification protocols that make deepfakes irrelevant. Mandatory callbacks. Code words. Multi-channel confirmation. Time delays. In-person verification for high-stakes actions.
These aren't technological solutions. They're procedural ones. They work because they don't depend on identifying whether a face or voice is real; they depend on confirming identity through channels attackers can't simultaneously compromise.
The organizations that survive this industrial age of deception will be those that stop trusting what they see and hear, and start verifying through systems designed for a world where anything can be faked.
That world is here.
Adapt accordingly.
Key Takeaways
- Deepfake fraud is now industrial-scale: 8 million files, 900% annual growth, attacks every 5 minutes
- Major attacks have succeeded: $25M at Arup (an all-fake video call), $500K in Singapore (fake executives), and countless others
- Human detection is essentially useless: 24.5% accuracy for high-quality video deepfakes (worse than guessing)
- North Korean operatives use deepfakes for corporate infiltration; most Fortune 500 companies have been targeted
- Voice cloning needs only 3 seconds of audio; limit your voice presence online
- Verification beats detection: implement callback protocols, code words, and multi-channel confirmation
- Never authorize high-stakes actions on a video call alone; always verify through separate channels
- The technology will keep improving; build processes that don't depend on identifying fakes
Resources
Report Incidents:
- FBI Internet Crime Complaint Center: IC3.gov
- FTC Fraud Report: ReportFraud.ftc.gov
Detection Tools (Enterprise):
- Reality Defender
- Moveris
- Pindrop
Further Reading:
- FBI Alert: "North Korean IT Worker Threats to U.S. Businesses" (July 2025)
- Experian: "2026 Future of Fraud Forecast"
- AI Incident Database: 2026 Analysis
This investigation was compiled from 22 sources including The Guardian, FBI IC3 advisories, Fortune, Experian reports, The Register, CNN, and academic researchers at Harvard and MIT. All statistics current as of February 2026.
