Connecticut residents lost $16.1 million to AI-related fraud in 2025, according to an analysis of FBI data published in April 2026. Residents filed 152 complaints referencing artificial intelligence as a component of the fraud — a figure that almost certainly represents a small fraction of actual incidents, given the FBI’s long-documented underreporting problem in fraud categories.

The analysis, which examined FBI Internet Crime Complaint Center (IC3) data broken down by state, found that the Connecticut numbers mirror a national pattern: investment fraud drives 71% of AI scam losses across the United States. And within investment fraud, the delivery mechanism is increasingly consistent — AI-generated deepfake video of celebrities or public figures appearing to endorse specific investment platforms, distributed primarily through social media.

Connecticut’s Attorney General William Tong issued a formal warning to state residents specifically naming Meta platforms — Facebook, Instagram, and WhatsApp — as the primary distribution channels for these investment scam deepfakes.

The Deepfake Investment Scam Formula

The AI-powered investment scam that dominates 2025 and 2026 loss data follows a recognizable pattern:

A video appears in a social media feed — often a short clip of a well-known figure (a news anchor, a business personality, a local politician, or a celebrity) apparently discussing an investment opportunity. The person on screen sounds like the real individual. Their lip movements sync with the audio. The message is credible.

The video is entirely fabricated. The person’s likeness has been generated or manipulated using commercial AI tools. The investment platform it promotes is fake.

Viewers who click through and invest see apparent returns on a convincing dashboard. When they try to withdraw, they face fees, taxes, compliance requirements, or outright ghosting. The platform disappears. The money is gone.

The FBI documented 22,364 complaints nationally in 2025 that referenced AI, with losses totaling $893 million. Investment fraud accounted for the majority of that amount — consistent with the 71% figure from the state-level analysis.

Why Meta Platforms Are the Target

Attorney General Tong’s warning singled out Meta platforms for a reason that the data supports: reported social media investment scam losses are dominated by Facebook, Instagram, and WhatsApp according to FTC data released in the same period.

The FTC’s social media fraud data for 2025 shows:

  • Facebook: ~$794 million in reported losses
  • WhatsApp: ~$425 million in reported losses
  • Instagram: ~$234 million in reported losses

Together, Meta’s three platforms account for roughly $1.45 billion — about 69% of all reported social media fraud losses. The FTC noted that investment scams drove more than $1.1 billion of the $2.1 billion total in 2025 — more than half.

AG Tong’s letter to Meta did not merely flag the problem — it demanded specific policy responses, including better AI-generated content detection and labeling, faster removal of reported fraudulent investment advertisements, and clearer liability for platforms that continue to host identified scam content.

This regulatory pressure reflects a broader trend in state-level enforcement. Several state attorneys general have independently issued warnings and demands to Meta over AI investment scam advertising in 2025 and early 2026, with New Hampshire’s AG issuing a warning specifically about deepfake Zuckerberg videos used to promote fake investment platforms.

The National Picture: Every State Has a Version of This Story

While Connecticut’s $16.1 million figure and 152 complaints reflect a mid-sized state’s exposure, the analysis found that every state has significant AI fraud losses — and several states have far larger absolute numbers while showing similar patterns.

California, New York, Texas, and Florida dominate by absolute loss volume, driven by their larger populations and concentrations of high-net-worth individuals. But loss rate analysis — losses per capita, or losses as a percentage of state GDP — shows that the problem is distributed broadly.

The 152-complaint figure for Connecticut almost certainly reflects significant underreporting. The FBI consistently notes that fewer than 20% of fraud victims file formal complaints. Adjusted for that reporting rate, Connecticut’s actual AI-related fraud losses in 2025 may be closer to $80–100 million — a figure that would place it among the more affected states per capita.
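The per-capita framing and the underreporting adjustment above can be sketched in a few lines. Connecticut's population figure is an approximation (roughly 3.6 million), and the 15–20% reporting-rate band is an assumption derived from the FBI's "fewer than 20%" estimate — neither comes from the state-level analysis itself.

```python
# Sketch of the two adjustments discussed above. The population figure
# is approximate, and the 15-20% reporting-rate band is an assumption
# based on the FBI's "fewer than 20% of victims report" estimate.

reported_losses = 16.1e6   # Connecticut's reported AI-fraud losses, 2025 (USD)
population = 3.6e6         # approximate Connecticut population

# Per-capita loss rate from reported figures alone
per_capita = reported_losses / population
print(f"Reported losses per resident: ${per_capita:.2f}")

# If only 15-20% of victims file complaints, actual losses are
# roughly 5x to 6.7x the reported figure.
estimate_low = reported_losses / 0.20    # ~$80.5 million
estimate_high = reported_losses / 0.15   # ~$107 million
print(f"Adjusted estimate: ${estimate_low/1e6:.1f}M to ${estimate_high/1e6:.1f}M")
```

Dividing the reported $16.1 million by a 20% reporting rate yields about $80.5 million, the low end of the $80–100 million range cited above; a 15% rate pushes the estimate past $100 million.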

The Age Dimension

Across all states, the AI investment scam data shows a consistent age dimension: while younger adults encounter scams more frequently, older adults lose more money per incident when they are victimized.

AI-generated celebrity deepfakes are particularly effective against older demographics because:

  • They frequently involve figures — news anchors, financial commentators, public personalities — who are familiar and trusted by older viewers
  • Older adults may be less familiar with the concept of AI-generated video and more likely to accept video as authentic
  • Higher savings balances mean larger potential losses when investment scams succeed
  • Social isolation, which correlates with susceptibility to relationship-based investment fraud, is more prevalent in older age groups

FBI data show that adults over 60 accounted for $7.7 billion of total 2025 cybercrime losses — roughly 37% of all reported losses — a concentration consistent with the dominance of investment fraud in the AI loss data.

What Connecticut and Other States Are Doing

Beyond AG Tong’s letter to Meta, Connecticut has taken several additional steps:

The state has issued specific consumer guidance through the Department of Banking warning about fake investment platforms, with particular emphasis on the “verify before you invest” framework — checking whether a platform is registered with FINRA, the SEC, or the CFTC before depositing funds.

Connecticut, along with several other states, has joined a multi-state coalition that is preparing regulatory and potentially legal action against social media platforms over the hosting of fraudulent investment advertising that uses AI-generated content.

The state has also expanded resources for fraud victim support, recognizing that the financial devastation of investment fraud — which frequently involves retirement savings — requires more than just investigation.

What You Can Do

For residents of Connecticut and every other state, the AI investment scam threat has a straightforward mitigation:

No legitimate investment opportunity is advertised through social media video. Berkshire Hathaway does not recruit investors through Facebook videos. Warren Buffett is not promoting cryptocurrency platforms on Instagram. If you see a video in your social feed of any public figure discussing an investment opportunity, assume it is fabricated until you can independently verify otherwise.

Verify any investment platform before depositing funds. The SEC’s EDGAR database, FINRA’s BrokerCheck, and the CFTC’s registration database are free public resources. If a platform is not registered and regulated, it is not a legitimate investment venue.

Report AI investment scam content. Reporting fraudulent investment advertisements through Meta’s reporting tools, the FTC at ReportFraud.ftc.gov, and your state AG’s consumer protection office contributes to the enforcement pressure that is gradually forcing platform accountability.

The $16.1 million Connecticut figure is not a uniquely Connecticut problem. It is a window into a national crisis that AI tools are making faster, more convincing, and more difficult to detect with each passing month.