For the first time in the history of its annual Internet Crime Report, the FBI broke out AI-enabled fraud as its own category. The reason: the numbers are too large and too distinct to bury in general statistics. In 2025 alone, Americans reported $893 million in losses specifically attributed to AI-powered scams. And older Americans — people 60 and above — accounted for $352 million of that total.
These are not estimates or projections. These are documented losses from complaints filed directly with the FBI’s Internet Crime Complaint Center (IC3). And as with all IC3 data, they represent only the cases that were reported — a fraction of the actual harm, since research consistently shows that fewer than 10% of fraud victims file a formal report.
The FBI’s decision to add the AI-enabled fraud category reflects something important: law enforcement is now treating artificial intelligence not merely as a tool that makes scams more convincing, but as a fundamentally new class of threat that requires its own tracking, analysis, and response infrastructure.
What the FBI Counts as “AI-Enabled” Fraud
The 2025 IC3 report’s AI-enabled category encompasses fraud where artificial intelligence was a material element of the deception — not merely background technology. Specifically:
- Voice cloning used to impersonate family members, government officials, or financial institutions
- Deepfake video used in impersonation calls, fake interviews, or fabricated evidence
- AI-generated written content in phishing emails and romance fraud, fluent enough to pass the linguistic checks (spelling errors, grammatical patterns) that once identified non-native-speaker scammers
- AI chatbots running autonomous relationship-building operations in romance and investment fraud
- AI-generated documents — fake warrants, bank statements, government notices — used as supporting evidence in fraud scenarios
The $893 million figure represents only those complaints where victims either self-identified AI involvement or where investigators identified AI tools as material to the fraud. The actual AI footprint across total IC3 complaints is believed to be substantially larger.
Why Seniors Bear the Heaviest Burden
The FBI’s data shows that Americans aged 60 and above are disproportionately targeted by AI-enabled fraud, accounting for $352 million — nearly 40% — of the $893 million total, despite representing a smaller share of internet users.
This disparity is not explained by technological naivety alone. FBI analysts point to several structural factors that make older Americans more vulnerable to AI-specific fraud techniques:
Greater trust in telephone communication. Older Americans are more likely to treat a phone call as a legitimate primary communication channel. Younger people grew up with spam calls and tend to screen them aggressively; for older Americans, an urgent call from a family member or authority figure triggers a response shaped by decades of treating the phone as reliable.
Less awareness of AI capabilities. The knowledge that AI can convincingly clone a voice or generate fake video in real time is widespread among people who work in technology or follow tech media, but far less common among those who don't encounter it in daily professional life. Older Americans are less likely to have come across this knowledge in their regular information environment.
Greater accumulated assets. Older Americans — particularly retirees — typically have access to savings, retirement accounts, and home equity that represent decades of accumulated wealth. This makes them higher-value targets. The average loss per elderly victim in AI-enabled fraud is substantially higher than the average across all age groups.
Higher likelihood of living alone. Isolation reduces the probability that a victim will consult someone before acting on an urgent request. The “grandchild in distress” scenario is specifically designed to exploit isolation — the scammer asks the victim not to tell anyone, and if the victim lives alone, there may be no one immediately available to interrupt the panic response.
The Five AI Fraud Methods Targeting Seniors
The FBI’s analysis of elderly victim reports identified five primary AI-enabled vectors:
1. Voice-Cloned Family Emergency Calls
The most commonly reported scenario: a victim receives a call from what sounds exactly like a grandchild, adult child, or close family member in distress. The voice is AI-generated from audio harvested from that family member's social media presence. The scenario — arrest, car accident, medical emergency, robbery — is designed to create immediate panic and suppress the victim's natural skepticism.
The call typically includes a secondary voice — a “lawyer,” “police officer,” or “hospital administrator” — who provides official-sounding instructions and a financial destination. Money is requested via wire transfer, cryptocurrency, or by purchasing gift cards and reading the codes over the phone.
Typical losses in documented cases range from $9,000 to $35,000 per incident, with some victims losing over $100,000 across multiple calls from the same operation.
2. AI-Generated Government Impersonation
Government impersonation has been a leading fraud vector for years. AI has dramatically improved its effectiveness.
Victims receive calls or video calls from what appears to be an IRS agent, Social Security Administration official, Medicare representative, or law enforcement officer. In audio-only calls, voice cloning matches the accent and professional cadence victims expect. In video calls, deepfake technology produces a convincing uniformed official in what appears to be a government office.
The AI generates supporting materials in real time: fake warrant numbers, fabricated account information showing “suspicious activity,” or official-looking documents shared via screen during the call.
3. AI Romance Fraud With Long-Term Relationship Building
The romance fraud pipeline — building an emotional relationship over weeks or months before introducing a financial crisis — has been partially automated using AI chatbot systems.
What makes the AI version particularly devastating for older, isolated victims is the duration and emotional intensity of the relationship before the fraud occurs. Victims frequently describe feeling genuinely close to someone over months of daily contact — someone who asked about their health, remembered their grandchildren’s names, and sent good morning messages every day. When the crisis arrives, the emotional bond is real even though the person never was.
The average loss in AI-assisted long-term romance fraud targeting older Americans exceeded $50,000 in documented 2025 cases.
4. Deepfake Financial Advisor or Investment Expert Video
A variation increasingly documented in 2025: victims receive video calls from, or are shown video content featuring, what appears to be a financial advisor, investment expert, or cryptocurrency professional. The video is AI-generated using the likeness of a real person — sometimes a legitimate financial advisor, sometimes a celebrity or public figure.
The “expert” recommends a specific investment platform or cryptocurrency opportunity. The platform is fake. All profits shown are fabricated. When the victim attempts to withdraw, they are told they must pay taxes, fees, or security deposits first — and each payment is absorbed by the fraud operation.
5. AI-Enhanced Phishing and Document Fraud
Older Americans receive official-looking emails or mailed documents, supposedly from their bank, Medicare, or Social Security, that contain AI-generated text indistinguishable from legitimate communications.
Previous generations of phishing could often be identified by grammatical errors, unusual phrasing, or inconsistencies in official-sounding language. AI-generated phishing content passes these tests — because it is generated by the same underlying technology that produces fluent, contextually appropriate text for legitimate uses.
What $352 Million in Losses Looks Like
The aggregate figure doesn’t convey what individual losses mean in human terms.
A retired teacher in Florida lost $87,000 — her entire savings account — to a voice-cloned “grandson” who called on a Tuesday afternoon, panicked, claiming he’d been in an accident in Mexico and needed bail money before his parents found out. She spent four hours on the phone with the “grandson” and a “consulate official,” withdrawing money from her account in multiple transactions. She did not speak to her actual grandson for three days, believing his silence was embarrassment about the incident. He knew nothing about it.
A widower in Oregon lost $143,000 across nine months to an AI-driven romance fraud operation. He was introduced to the "woman" — a fictional identity operated by an automated system — through a social media group for people over 60. The relationship was warm, consistent, and emotionally meaningful to him. The financial requests came gradually: small amounts for medical bills, then larger amounts for a business crisis, then larger amounts still to unlock a "blocked international account." He did not report the fraud until the relationship ended abruptly and a nephew helped him piece together what had happened.
These are not unusual cases. They are representative of thousands of similar experiences reported to the FBI each year.
What Families Can Do
The most important protective factor identified in FBI research is regular, open family communication about the scam landscape.
Elderly relatives who know that AI can clone voices are significantly less likely to be successfully targeted by voice cloning fraud. The knowledge removes the core mechanism: the assumption that the voice on the phone is real because it sounds exactly right.
Practical steps:
- Have a direct conversation with elderly parents or grandparents: “Scammers can now fake anyone’s voice exactly. If you get a call from someone who sounds like me or another family member asking for money, call me directly before doing anything.”
- Establish a family verification word or phrase that anyone can ask for in an emergency call
- Set up a simple check-in protocol: if an elderly relative receives any call requesting money, they agree to call a designated family member before acting
- Help elderly relatives adjust privacy settings on social media accounts to limit who can access video or audio content — reducing the available material for voice cloning
Financial safeguards:
- Consider setting up account alerts for large withdrawals
- Talk to their bank — many financial institutions now run elder fraud prevention programs that, with the account holder's consent, notify a family member of unusual transactions
- Be alert to sudden changes in financial behavior or unusual stress around financial topics
Where to Report
If you or someone you know has been targeted by AI-enabled fraud:
- FBI IC3: ic3.gov — the primary federal reporting channel, particularly important for building the national picture of AI fraud
- FTC: reportfraud.ftc.gov — for scams involving impersonation, fake calls, or phishing
- Elder Fraud Hotline: 1-833-FRAUD-11 (operated by the DOJ’s Elder Justice Initiative)
- Your bank — immediately, if a financial transfer has occurred. Time is critical for recovery attempts.
The FBI’s new AI-enabled fraud category only exists because enough people reported to make the pattern visible. Every report matters — both for potential individual recovery and for the collective picture that drives enforcement resources toward the operations causing the most harm.