The Call That Wasn't Real
In early 2024, a finance worker at British engineering giant Arup joined what he believed was a routine video call with the company's Chief Financial Officer and several senior colleagues. The CFO explained that an urgent, confidential transaction was required. The other executives on screen nodded along, confirming the request's legitimacy.
Over the next several days, the employee transferred $25 million to accounts controlled by the fraudsters.
What investigators discovered next shattered assumptions about corporate security: every single person on that video call was a deepfake. The CFO. The colleagues. All of them: synthetic replicas generated in real time, convincing enough to fool a trained professional who had worked with these executives for years.
Welcome to February 2026, where deepfake fraud has reached what The Guardian this week called "industrial scale": a systematic, professionalized operation that is redefining what "seeing is believing" means in the digital age.
Breaking: The Industrial-Scale Deepfake Epidemic
The Arup heist wasn't an isolated incident. It was an early warning signal of what security experts say has now exploded into a full-blown epidemic affecting corporations, governments, and individuals worldwide.
In the past 18 months alone:
- $25 million stolen from Arup (UK) via an all-deepfake video conference
- $500,000 stolen from a Singapore-based company through a fake executive video call
- Multiple Fortune 500 companies targeted with deepfake CEO impersonation calls
- Thousands of job interviews conducted by applicants using real-time deepfake face-swapping
- State-sponsored actors deploying deepfakes for corporate infiltration
"What we're witnessing is the industrialization of identity fraud," warns Dr. Hao Li, a leading deepfake researcher and CEO of Pinscreen. "The technology that required significant computing power and expertise three years ago is now available as a service. Anyone with modest resources can create convincing deepfakes in real time."
The February 2026 Experian Future of Fraud Forecast placed "machine-to-machine fraud" (AI systems attacking AI systems, with deepfakes as a primary vector) as the top emerging threat for the year. Experian's researchers identified deepfake-assisted fraud as growing at an estimated 400% year over year since 2024.
The "Everyone on the Call Was Fake" Attack Pattern
The Arup attack introduced a terrifying new paradigm in social engineering: the full-call fabrication. Understanding this attack pattern is critical for every organization.
How It Works
Step 1: Intelligence Gathering Attackers spend weeks or months collecting publicly available footage of target executives. Earnings calls, keynote speeches, TV interviews, LinkedIn videos, and YouTube appearances provide the raw material. For the Arup attack, investigators believe the fraudsters harvested footage from company presentations, industry conferences, and social media.
Step 2: Voice Cloning Modern AI voice synthesis requires as little as three seconds of audio to create a convincing voice clone. With the wealth of corporate audio available publicly, attackers can create voice models that capture executives' speech patterns, accents, and verbal tics with disturbing accuracy.
Step 3: Real-Time Deepfake Generation Using commercially available software, much of it sold openly on underground forums, attackers generate real-time video of executives. The technology has advanced to the point where it can:
- Match lip movements to AI-generated audio
- Simulate natural eye movement and blinking patterns
- Adapt to different lighting conditions
- Display realistic emotional expressions
Step 4: Full Environment Staging Sophisticated attackers create convincing virtual backgrounds matching known office locations. Some have been observed using actual photos of executive offices obtained through corporate photography or social media.
Step 5: Multi-Participant Coordination In the Arup case, attackers operated multiple deepfakes simultaneously during the call. This required coordinated operation, likely with different team members controlling different synthetic participants, but it effectively eliminated the target's ability to verify authenticity through colleague reactions.
Step 6: Urgency and Authority Pressure The synthetic CFO emphasized confidentiality and urgency, applying psychological pressure that bypassed normal verification procedures. The victim couldn't verify through back channels because doing so would "violate confidentiality."
Why Multi-Participant Deepfakes Are So Dangerous
Single-person deepfake attacks are already devastating, but the multi-participant variant represents an exponential escalation because it eliminates the most common defense: verification through colleagues.
"When you're on a call with your CFO and you're uncertain, your natural instinct is to look at the other participants for cues," explains Marcus Johnson, a social engineering researcher at Stanford's Internet Observatory. "If three other executives are nodding along, your doubt evaporates. The attackers understood this psychology perfectly."
The Singapore $500,000 fraud followed an identical pattern. A finance employee received a video call invitation that appeared to originate from the company's established communication platform. Multiple "executives" participated, all requesting urgent fund transfers. The employee only discovered the fraud when attempting to verify the transaction through an in-person meeting the following day.
The North Korean Deepfake Job Applicant Phenomenon
While the Arup and Singapore cases targeted existing employees for immediate financial theft, a parallel threat has emerged: state-sponsored deepfake job infiltration.
Evoke AI Security: Catching a Deepfake in the Interview
In one of the most chilling documented incidents, David Kulp, CEO of Evoke AI Security, caught a job applicant using real-time deepfake technology during a video interview, for a position at an AI security company.
"About ten minutes into the interview, I noticed something wasn't right," Kulp recounted in a widely circulated LinkedIn post. "The candidate's face had an unnatural smoothness. When I asked him to turn his head to the side, there was visible distortion. When I asked him to hold up his hand in front of his face, the deepfake system couldn't handle the occlusion."
The applicant abruptly ended the call when challenged. Kulp's subsequent investigation, shared with law enforcement, connected the application to patterns associated with North Korean IT worker infiltration campaigns.
The DPRK Remote Worker Threat
U.S. intelligence agencies have been warning about North Korea's overseas IT worker program since at least 2022. The regime dispatches thousands of trained IT workers to obtain remote employment at Western companies, funneling salaries back to Pyongyang to evade sanctions and fund weapons programs.
What's new in 2026: These workers are now systematically using deepfake technology to:
1. Conceal their identities during interviews and ongoing video calls
2. Impersonate individuals with stronger credentials listed on their fraudulent resumes
3. Bypass identity verification by presenting synthetic faces that match fabricated ID documents
4. Enable single operators to work multiple jobs by switching between deepfake personas
The FBI, CISA, and Treasury Department issued a joint advisory in January 2026 warning that deepfake-assisted DPRK IT workers have successfully infiltrated companies in technology, defense, finance, and healthcare sectors.
"These aren't just opportunistic scammers," warns a senior Treasury official who spoke on condition of anonymity. "This is a coordinated state program. They're stealing intellectual property, inserting backdoors into code, and generating revenue to fund weapons of mass destruction."
The Scale of the Problem
According to Mandiant's 2026 Threat Intelligence Report, investigators have identified:
- Over 3,000 suspected DPRK-affiliated IT workers operating in Western companies
- $600+ million in estimated annual revenue generated for the regime
- Deepfake usage up 700% among detected infiltration attempts since 2024
- Average duration before detection: 14 months, allowing extensive access and damage
The Technology: How Deepfakes Got So Good, So Fast
Understanding the technology helps explain why this threat escalated so rapidly.
From Specialized Labs to Consumer Laptops
In 2019, creating a convincing deepfake required:
- High-end GPU clusters ($50,000+)
- Specialized machine learning expertise
- Hours of source video footage
- Days or weeks of processing time
By 2026, the barrier has collapsed:
- Consumer-grade laptops with modern GPUs
- One-click applications sold for $100-500 per month
- 3-10 seconds of source audio/video
- Real-time generation during live calls
The Deepfake-as-a-Service Economy
Underground markets now offer deepfake services with disturbing professionalization:
"Executive Clone" Services For $5,000-15,000, criminals can purchase a complete deepfake package: voice model, video model, and custom software configured to impersonate a specific target executive.
Real-Time Deepfake Rental Hourly rental of deepfake infrastructure allows attackers to conduct video calls using synthetic personas without maintaining their own technical capabilities.
Deepfake Quality Assurance Premium services offer "QA testing," where separate teams attempt to detect deepfakes before they're deployed against real targets, improving success rates.
Detection Arms Race
Detection technology exists but faces structural disadvantages:
1. Asymmetric Development Speed: Generative AI improves faster than detection
2. Economic Incentives: More money flows into creation than into detection tools
3. Deployment Challenges: Detection must be integrated into every video call platform
4. False Positive Tolerance: Flagging legitimate callers as fake creates operational friction
Current detection methods include:
- Biological signal analysis: detecting unnatural eye-blinking or heartbeat patterns
- Micro-expression inconsistencies: AI often fails to replicate subtle facial movements
- Lighting and shadow analysis: deepfakes may show physically impossible shadow patterns
- Audio spectral analysis: synthetic voices contain telltale frequency signatures
- Provocation testing: asking unexpected questions or requesting unusual movements
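To make the audio side concrete: spectral flatness (the ratio of the geometric to the arithmetic mean of a magnitude spectrum) is a standard audio feature of the kind that trained voice-liveness classifiers consume. The sketch below is a toy illustration of computing one such feature, not a working deepfake detector; real systems combine many features in learned models.

```python
# Toy illustration of one audio-side signal: spectral flatness. Near 1 for
# noise-like spectra, near 0 for strongly tonal ones. On its own this is
# NOT a usable deepfake detector; it only shows the kind of feature that
# feeds real audio-analysis pipelines.
import numpy as np

def spectral_flatness(frame: np.ndarray) -> float:
    """Geometric mean / arithmetic mean of the magnitude spectrum."""
    spectrum = np.abs(np.fft.rfft(frame)) + 1e-12  # epsilon avoids log(0)
    geometric = np.exp(np.mean(np.log(spectrum)))
    arithmetic = np.mean(spectrum)
    return float(geometric / arithmetic)

# A pure tone is highly tonal; white noise is nearly flat.
tone = np.sin(2 * np.pi * 8 * np.arange(1024) / 1024)
noise = np.random.default_rng(0).standard_normal(1024)
```

A detector would track features like this frame by frame and flag statistical signatures that deviate from natural speech.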
Corporate Countermeasures: The New Security Playbook
Organizations are rapidly implementing new security protocols in response to the deepfake threat. Here's what's working and what's not.
What's Working
1. Callback Verification Protocols The most effective defense against deepfake payment fraud is mandatory callback verification: before executing any financial transaction over a threshold (typically $10,000-50,000), employees must call the requesting party at a pre-registered phone number, not a number provided in the call or email.
This simple procedure would have prevented both the Arup and Singapore attacks. The key is using numbers registered before any transaction request and stored in secure systems inaccessible to attackers.
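A minimal sketch of how such a gate might look in code, assuming a directory of pre-registered numbers maintained independently of any incoming request (the names, numbers, and threshold below are illustrative, not any company's actual process):

```python
# Hypothetical callback-verification gate. The directory must be populated
# out of band, before any request arrives, and stored where attackers
# cannot reach or modify it. All values are illustrative placeholders.
from dataclasses import dataclass

CALLBACK_THRESHOLD_USD = 10_000  # lower bound of the article's typical range

@dataclass
class TransferRequest:
    requester: str  # identity claimed on the call or in the email
    amount_usd: float

# Registered long before any transaction request -- never taken from the request.
PREREGISTERED_NUMBERS = {
    "cfo@example.com": "+44-20-7946-0000",  # illustrative placeholder
}

def requires_callback(req: TransferRequest) -> bool:
    """Every transfer at or above the threshold must be confirmed by phone."""
    return req.amount_usd >= CALLBACK_THRESHOLD_USD

def callback_number(req: TransferRequest) -> str:
    """Return the pre-registered number, never one supplied in the request.
    A missing entry raises KeyError -- itself a red flag worth escalating."""
    return PREREGISTERED_NUMBERS[req.requester]
```

The design point is that the number used for verification travels through a channel the attacker never touches.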
2. Code Word Systems Some organizations have implemented verbal code words that change daily or weekly. During any video call involving sensitive decisions, participants must provide the current code word. Since attackers cannot know these codes, they cannot replicate them even with perfect deepfakes.
Example implementation:
- Daily code words distributed via a secure internal app
- Required for any financial transaction over the threshold
- Required before discussing M&A, legal, or personnel matters
- Changed immediately if any breach is suspected
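One way to implement rotating code words without ever transmitting them is to derive each day's word locally from a shared secret, TOTP-style. A minimal Python sketch, where the tiny word list and the secret-distribution mechanism are assumptions for illustration:

```python
# Minimal sketch of a daily rotating code word derived locally from a
# shared secret, so the word itself never travels over any channel an
# attacker might monitor. A real deployment would use a much larger
# dictionary and a proper key-distribution mechanism.
import datetime
import hashlib
import hmac

WORDS = ["granite", "harbor", "meridian", "quartz",
         "sable", "tundra", "vellum", "willow"]  # illustrative list

def daily_code(secret: bytes, day: datetime.date) -> str:
    """HMAC-SHA256(secret, ISO date) selects two words for the day.
    Anyone holding the secret computes the same pair independently."""
    digest = hmac.new(secret, day.isoformat().encode(), hashlib.sha256).digest()
    return f"{WORDS[digest[0] % len(WORDS)]}-{WORDS[digest[1] % len(WORDS)]}"
```

Both ends compute the word each morning; a perfect deepfake of a voice still cannot produce a word it never saw.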
3. Multi-Channel Verification Before acting on any video call instruction, employees verify through a separate communication channel, ideally one that involves physical presence or a historically established contact.
"Trust your video. Verify your action through a separate path," is how one Fortune 100 CISO described their policy.
4. Deepfake Detection Technology Emerging vendors offer real-time deepfake detection integrated into video conferencing platforms:
- Intel's FakeCatcher: claims a 96% detection rate using blood-flow analysis
- Microsoft Video Authenticator: enterprise deployment beginning in 2025
- Sensity AI: B2B detection platform monitoring for synthetic media
- Reality Defender: real-time detection API for video platforms
However, security researchers caution that detection remains imperfect, and attackers actively test their deepfakes against these tools before deployment.
5. In-Person Verification Requirements For transactions above certain thresholds ($1 million+ is common), some organizations now require in-person meetings before execution, eliminating the deepfake vector entirely for the highest-value targets.
What's Not Working
"Just Look Closer" Early advice suggested that employees scrutinize video calls for visual artifacts. This guidance has been largely abandoned as deepfake quality improved beyond human detection ability in most cases.
Relying Solely on Platform Security Major video platforms (Zoom, Teams, Meet) have limited native deepfake detection. Relying on the platform to authenticate participants is insufficient.
One-Time Training Annual security awareness training that mentions deepfakes once is inadequate. Organizations with successful prevention have implemented ongoing, scenario-based training with regular deepfake exposure exercises.
The Hiring Pipeline: Protecting Against Deepfake Candidates
The North Korean infiltration threat demands specific countermeasures in the hiring process:
Enhanced Identity Verification
1. Liveness Detection During video interviews, implement liveness checks: ask candidates to perform specific actions (turn head, touch ear, hold object in front of face) that stress deepfake systems.
2. Multi-Session Verification Conduct multiple video interviews across different days and times. Maintaining consistent deepfake impersonation across multiple sessions with varying questions is technically challenging.
3. In-Person Final Rounds For sensitive positions, require at least one in-person interview stage, even if remote work is planned. This eliminates deepfake-concealed candidates.
4. Document Verification Services Use services that verify identity documents against government databases, not just visual inspection. Deepfake candidates often present convincing-looking but fabricated credentials.
5. Reference Deep-Dives Actually call references and ask open-ended questions that require genuine prior interaction. Deepfake-assisted candidates often have fabricated references who aren't prepared for detailed questioning.
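Liveness checks like those above work best when the candidate cannot predict them. A hypothetical sketch of randomized challenge selection, with illustrative prompt wording:

```python
# Hypothetical sketch: interview liveness challenges chosen with a CSPRNG
# so a candidate cannot pre-render deepfake responses. Prompt wording and
# list contents are illustrative assumptions, not a vetted protocol.
import secrets

CHALLENGES = [
    "Turn your head slowly to the left, then to the right.",
    "Hold your hand in front of your face for three seconds.",
    "Touch your ear with your right hand.",
    "Pick up a nearby object and show it to the camera.",
]

def pick_challenges(n: int = 2) -> list[str]:
    """Draw n distinct challenges in unpredictable order."""
    pool = list(CHALLENGES)
    return [pool.pop(secrets.randbelow(len(pool))) for _ in range(n)]
```

Occlusion and head-turn prompts exploit exactly the failure mode Kulp observed: real-time face-swap systems struggle when the face is partially covered or viewed in profile.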
Red Flags in Candidate Behavior
Based on documented deepfake applicant cases, watch for:
- Reluctance to enable the camera, or poor excuses for video quality issues
- Unusual delays in speech (processing lag for a real-time deepfake)
- Lighting or background that seems inconsistent or artificial
- Avoidance of unscripted conversation or lateral moves in discussion
- Camera positioned to show only the face (hiding deepfake body artifacts)
- Extraordinary resistance to in-person meetings
- Technical credentials that don't match interview performance
- References available only via email, never by phone
The 2026 Forecast: Where This Is Heading
Security researchers and law enforcement sources paint a sobering picture of what's coming:
Immediate Threats (2026)
Voice-Only Deepfakes at Scale Phone calls don't require video generation, making voice-only deepfakes easier and more reliable. Expect massive scaling of "CEO calling from travel" phone-based fraud.
Supply Chain Attacks Deepfakes targeting vendor relationships ("your account rep is on the call, ready to process your order changes") will exploit trust in established business relationships.
Investor and Board Deception Startups and public companies will face deepfake risks in investor relations and board communications, with potential for stock manipulation and governance compromise.
Medium-Term Evolution (2027-2028)
Bidirectional Deepfake Calls Both parties on a call may be deepfakes, with AI systems negotiating while humans believe they're speaking to each other.
Deepfake Evidence in Legal Proceedings Video evidence in court cases will face systematic challenges, potentially undermining prosecution of legitimate crimes.
Political Deepfakes at Scale The 2028 election cycle will likely see sophisticated deepfake deployment for political manipulation, building on current fraud infrastructure.
Experian's "Machine-to-Machine" Fraud Era
Experian's 2026 forecast specifically highlighted the emergence of machine-to-machine fraud: attacks in which AI systems target other AI systems without human intervention.
In this model:
1. AI identifies targets through automated social media analysis
2. AI generates custom deepfake materials for each target
3. AI conducts phishing and social engineering autonomously
4. AI processes stolen funds through cryptocurrency mixers
Human operators move to supervisory roles, dramatically scaling attack capacity while reducing costs. One organized crime group could theoretically target thousands of companies simultaneously.
Protecting Yourself: Actionable Advice
For Individuals
1. Verify Before Sending Any request for money or sensitive action via video call should be verified through a completely separate channel, ideally in person or via a long-established phone number.
2. Establish Family Code Words Create verbal passwords with family members for emergency situations. If grandma receives a call from someone claiming to be you in trouble, she asks for the code word.
3. Limit Public Video/Audio Every public video and audio recording of you is training data for deepfakes. Consider limiting your social media video presence, especially detailed speaking content.
4. Be Suspicious of Urgency Legitimate emergencies rarely require immediate wire transfers. If someone is pressuring you to act before you can verify, that's the fraud.
5. Trust Your Instincts If something feels wrong about a video call (the person seems slightly off, responses are delayed, visual quality fluctuates), trust that instinct and verify independently.
For Organizations
1. Implement Callback Protocols Immediately No financial transaction over $10,000 without callback verification to pre-registered numbers. No exceptions.
2. Deploy Code Word Systems Daily rotating codes for sensitive discussions. Simple to implement, nearly impossible to defeat.
3. Upgrade Hiring Verification Multi-session interviews, in-person requirements for sensitive roles, and liveness detection in video screens.
4. Invest in Detection Technology Evaluate and deploy commercial deepfake detection for critical communications.
5. Run Deepfake Red Team Exercises Hire security firms to attempt deepfake attacks against your organization, testing employee response and identifying procedural gaps.
6. Update Incident Response Plans Deepfake fraud is now a category requiring specific response procedures, evidence preservation, and reporting pathways.
The Uncomfortable Truth
The deepfake fraud epidemic exposes a fundamental vulnerability in modern business: we built our processes around the assumption that seeing and hearing someone confirms their identity. That assumption is now obsolete.
Every video call, every phone call, every voice message must now be treated as potentially synthetic. The burden of verification has shifted from exception to default.
This is not paranoia. This is risk management in 2026.
The $25 million Arup loss. The $500,000 Singapore theft. The thousands of infiltrated job positions. These are not outliers; they are the documented cases. For every fraud that makes headlines, security researchers estimate ten more go unreported, settled quietly to avoid reputational damage.
The organizations that survive this threat will be those that recognized it early and adapted their processes accordingly. For everyone else, it's only a matter of time before the call comes through, and everyone on the screen is fake.
Key Takeaways
- Deepfake fraud has reached industrial scale, with organized operations targeting companies globally
- The "everyone on the call was fake" attack pattern eliminates traditional verification through colleague confirmation
- State actors (notably North Korea) are using deepfakes for employment fraud and corporate infiltration
- Real-time deepfake technology is now accessible for $100-500/month with minimal technical expertise
- Callback verification and code word systems are the most effective current defenses
- Detection technology exists but remains imperfect and is outpaced by generation improvements
- Every organization needs updated procedures that assume video/audio cannot be trusted implicitly
This investigation is part of ScamWatch HQ's ongoing coverage of emerging fraud threats. For updates on deepfake fraud and corporate security, follow our breaking news alerts.
