There is a version of the 2025 fraud story that sounds almost reassuring.
In certain sectors, loss growth has slowed. Fraud detection systems are improving. Law enforcement has become more sophisticated at cryptocurrency tracing. Several major criminal networks have been disrupted. The numbers, while still alarming in absolute terms, are not climbing at the same exponential rate they were three years ago.
This is the stabilization narrative — and it is partially true. But a growing body of research suggests it is also deeply misleading, because it focuses on what can be measured while missing what cannot: the erosion of trust that AI-driven fraud is producing at every level of the economy.
73% of CEOs Were Personally Hit
The most striking data point comes from the World Economic Forum’s executive survey, in which 73% of CEOs reported that they personally, or someone in their immediate professional network, had been affected by fraud in 2025.
That figure is not about their organizations being victimized — it is about executives themselves encountering fraud at the personal level. Invoice fraud, vendor impersonation, fake investment pitches, deepfake video calls supposedly from known contacts, synthetic voice calls impersonating colleagues asking for wire authorizations.
The survey finding reflects what consumer-facing fraud data has long shown: fraud is not a problem that affects a marginalized subset of inexperienced or less educated people. It is a problem that affects everyone — including people whose job it is to be skeptical of unusual financial requests.
Cyber-Enabled Fraud Has Overtaken Ransomware
For the better part of a decade, ransomware dominated the executive anxiety landscape. It was the threat that encrypted your data, shut down your operations, and forced impossible decisions about paying criminals.
Heading into 2026, cyber-enabled fraud has displaced ransomware as the number one concern among CEOs, according to multiple enterprise risk surveys.
This shift reflects several realities:
Ransomware has a natural ceiling. Once organizations implement robust backups, network segmentation, and incident response, ransomware becomes a survivable event. The industry has built institutional muscle for dealing with it.
Fraud has no ceiling. Business email compromise, vendor impersonation, AI-generated invoice fraud, and deepfake-enabled wire fraud are adaptive. As defenses improve in one area, the methods shift to another. The total addressable attack surface — every employee who can authorize a payment, every vendor relationship that involves financial transactions, every communication channel through which trust is established — is essentially unlimited.
The losses are less visible. A ransomware attack shuts operations down visibly. A successful fraud transfer often looks identical to a legitimate one until someone checks the bank statement. By then, the money is gone. The invisibility of fraud losses means they frequently don’t generate the same organizational urgency as operational disruption.
The Trust Erosion Problem
Beyond the direct financial losses, researchers are beginning to quantify something more corrosive: the way AI-enabled fraud is degrading the trust infrastructure that business operations depend on.
Modern business relies on being able to assume that:
- An email from a known colleague is actually from that colleague
- A voice call from a vendor is actually from that vendor
- A video call from an executive is actually that executive
- An invoice that matches a real vendor’s template and bank details is legitimate
AI has made every one of these assumptions dangerous.
Voice cloning tools can replicate a known person’s voice from seconds of publicly available audio. Real-time deepfake technology can replace a face on a video call convincingly enough to fool participants in live interaction. Large language models can write emails that precisely match an individual’s writing style, reference real context from prior communications, and make requests that seem completely consistent with normal business patterns.
The consequence is not just that individual fraud attempts succeed more often. It is that every single communication now requires verification overhead that was previously unnecessary. A finance team that once processed a routine payment authorization based on a recognized email and a matching amount must now second-guess routine communications in ways that slow operations, frustrate legitimate employees, and impose real productivity costs.
This trust tax is not captured in fraud loss statistics. It shows up instead in slower business cycles, increased verification bureaucracy, and the organizational friction that comes from having to treat every communication as potentially spoofed.
What “Stabilizing” Actually Means
When industry reports note that fraud losses are stabilizing in certain sectors, the underlying mechanisms are worth examining.
In financial services, stabilization often reflects improved fraud detection systems — machine learning models that catch more fraudulent transactions before they complete. But it also reflects a shift in criminal targeting: as detection improves for high-frequency, low-value fraud (credit card testing, small transfer fraud), criminal attention shifts toward lower-frequency, high-value attacks (wire fraud, BEC, executive impersonation) that are harder to detect algorithmically.
So aggregate losses may stabilize while per-incident losses rise. The FBI’s 2025 IC3 data shows exactly this pattern: the number of investment fraud complaints grew more slowly than in prior years, but the average loss per complaint rose significantly.
In consumer contexts, stabilization sometimes reflects consumer fatigue with certain scam types — people have become better at ignoring generic phishing emails — while criminal operators adapt to more personalized, AI-generated attacks that have not yet triggered the same learned skepticism.
The Sectors Most Exposed
The AI trust erosion problem is most acute in sectors with high-value transaction authorization, complex vendor relationships, and significant inter-organizational communication:
Professional services and law firms: Fake client instructions, fraudulent wire authorization requests, invoice fraud exploiting the complexity of legal billing relationships.
Construction and real estate: Wire fraud at closing remains one of the highest-value individual fraud categories. AI tools have made convincing title company impersonation significantly easier.
Healthcare: Medical billing fraud, insurance claim manipulation, and vendor impersonation targeting medical procurement departments.
Financial services: Despite strong detection systems, the shift toward high-value BEC and deepfake-enabled authorization fraud continues.
Manufacturing: Supply chain vendor impersonation, particularly exploiting the complexity of multi-tier supplier relationships.
Building Defenses Against Trust Erosion
The WEF and enterprise fraud surveys consistently identify the same set of effective countermeasures:
Out-of-band verification for high-value transactions: Any payment authorization above a threshold — or for a new payee — requires a separate verification step using a communication channel established prior to the transaction request.
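The out-of-band rule above can be expressed as a simple policy check. This is a minimal sketch under stated assumptions: the threshold value, data model, and function names are illustrative, not drawn from any particular organization’s policy.

```python
from dataclasses import dataclass

# Illustrative threshold; a real policy would set this per risk appetite.
THRESHOLD = 25_000.00

@dataclass
class PaymentRequest:
    payee: str
    amount: float
    channel: str  # channel the request arrived on, e.g. "email"

def needs_out_of_band_check(req: PaymentRequest, known_payees: set) -> bool:
    """Flag any payment over the threshold, or to a new payee."""
    return req.amount >= THRESHOLD or req.payee not in known_payees

def verify(req: PaymentRequest, verified_channel: str,
           channels_on_file: set) -> bool:
    """Approve only if verification used a channel established before the
    request arrived, and a different one than the request itself used."""
    return (verified_channel in channels_on_file
            and verified_channel != req.channel)
```

The key design point is the second function: a callback to a number the fraudster supplied in the same email is not out-of-band, so verification must both come from the records on file and differ from the requesting channel.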
Code words for voice and video verification: Shared verification codes between regular business contacts that can be requested when identity uncertainty exists.
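The comparison step of a code-word check can be sketched as follows. The contact IDs and code words here are invented for illustration, and in practice the words would live in a secrets store rather than in source code; the constant-time comparison uses Python’s standard `hmac.compare_digest`.

```python
import hmac

# Hypothetical store of per-contact code words (illustration only).
CODE_WORDS = {"vendor-acme": "blue-heron-42"}

def code_word_matches(contact_id: str, spoken_word: str) -> bool:
    """Constant-time comparison, so a mismatch leaks nothing about
    how close the guess was."""
    expected = CODE_WORDS.get(contact_id)
    if expected is None:
        return False
    return hmac.compare_digest(expected.encode(), spoken_word.encode())
```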
Vendor payment change controls: Treating any request to change a vendor’s banking details as a potential fraud attempt, requiring multi-step verification before updating records.
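The multi-step control on banking-detail changes amounts to a small state machine: the change is recorded but applied only after independent checks pass. This is a sketch with assumed field names, not a reference implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BankDetailChange:
    """A pending vendor banking-detail change (hypothetical model)."""
    vendor: str
    new_account: str
    requester: str = ""
    callback_confirmed: bool = False          # call to the number already on file
    second_approver: Optional[str] = None     # must differ from the requester

def can_apply(change: BankDetailChange) -> bool:
    """Apply only after a callback to existing contact details AND
    sign-off by someone other than whoever entered the request."""
    return (change.callback_confirmed
            and change.second_approver is not None
            and change.second_approver != change.requester)
```

Requiring the approver to differ from the requester is what defeats the common BEC pattern in which a single compromised or impersonated employee both submits and "confirms" the change.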
Employee training on specific AI threats: General “be skeptical” training is less effective than training that walks employees through specific scenarios — a deepfake executive call, a synthetic voice emergency request — and gives them the scripts for what to do.
The fraud loss numbers that stabilized in 2025 may be the last stabilization we see before AI capabilities in the offense layer advance significantly. Now, before the next generation of attacks arrives, is the most cost-effective moment to build institutional resistance to AI-enhanced social engineering.