If you use Google Discover — the personalized news feed built into Android devices and the Google app — you may have encountered an article that seemed startling: a fake arrest warrant in your name, a bank deposit you didn’t make, a police notice about a crime investigation involving your phone number.

You probably dismissed it. But if you clicked, you may have enabled something on your device that you didn’t intend to.

Security researchers at HUMAN Security have published findings on an operation they call Pushpaganda — an AI-powered ad fraud and scareware scheme that weaponized Google’s own content recommendation infrastructure to deliver malicious browser notifications to millions of users worldwide.

At its peak, Pushpaganda generated roughly 240 million fraudulent ad bid requests in a single seven-day period. The operation used 113 custom domains filled with AI-generated content designed to look like local news, health alerts, and government notices — all of it engineered to panic users into clicking, and then into granting notification permissions they would never have granted deliberately.

How Pushpaganda Worked

Google Discover works by analyzing your browsing history, search patterns, and location to surface articles it predicts you’ll engage with. It is designed to show you relevant local news, topics you follow, and trending stories.

Pushpaganda exploited this system through a technique called search engine poisoning: crafting content that ranks well in search and recommendation algorithms not because it is trustworthy, but because it is engineered to trigger the engagement signals those algorithms reward.

The operation’s AI tools generated massive volumes of sensationalist, hyper-local fake content:

  • Fake arrest warrants citing generic names and local addresses
  • Fake bank deposit alerts suggesting an unexpected transaction required urgent confirmation
  • Fake police notices claiming the reader’s phone number had been flagged in a criminal investigation
  • Fake tech announcements touting non-existent products (such as “$100 smartphones with 300MP cameras”)

Each headline was calibrated for maximum click-through among users who saw it in their Discover feed. The content was not designed to be read — it was designed to be clicked.

The Notification Trap

Once a user clicked through to one of the 113 Pushpaganda domains, the page triggered a browser notification permission request. The page was crafted to make granting this permission seem like the next logical step, sometimes framing the prompt as “click allow to continue reading” or “verify you’re not a robot.”
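
The prompt itself is just the standard Web Notifications API; the manipulation lives in the page around it. Here is a minimal TypeScript sketch of the gating pattern, with hypothetical element IDs and wording — this is illustrative, not Pushpaganda’s actual code:

```typescript
// Minimal sketch of the permission-gating pattern, for illustration only.
// Element IDs and copy are hypothetical, not recovered from Pushpaganda.

const overlay = document.getElementById("continue-reading-overlay");
const article = document.getElementById("article-body");

async function gateContentBehindPermission(): Promise<void> {
  // Triggers the browser's native permission prompt. The overlay text
  // ("Click Allow to continue reading") is what reframes Allow as a
  // required step rather than a choice.
  const result = await Notification.requestPermission();

  if (result === "granted") {
    // The grant persists across visits until the user revokes it in
    // browser settings, which is what makes it valuable to a fraudster.
    overlay?.remove();
    article?.removeAttribute("hidden");
  }
}

overlay?.addEventListener("click", () => void gateContentBehindPermission());
```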

If the user granted permission, the site gained the ability to push notifications directly to their device indefinitely (a sketch of this mechanism follows the list below). These notifications then redirected users to:

  • Scareware pages claiming their device had been infected with malware, with a “fix” that required downloading software (often additional malware or adware)
  • Fraudulent financial services and fake investment platforms
  • Ad revenue farms that simply generated impressions for illicit advertising networks
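
The mechanics behind this are standard web platform features, not an exploit: the Push API delivers events to a service worker the site registered, and that worker decides what notification to show and where a click leads. Below is a minimal, hypothetical TypeScript sketch of that flow; the payload fields and messages are invented for illustration, not recovered from Pushpaganda’s infrastructure.

```typescript
// sw.ts: a hypothetical service worker sketch. Assumes TypeScript's
// "webworker" lib; the empty export makes this file a module so the
// `self` declaration does not collide with the global typings.
export {};
declare const self: ServiceWorkerGlobalScope;

// Each push event becomes a notification on the device. No page from
// the site needs to be open; the earlier permission grant is enough.
self.addEventListener("push", (event: PushEvent) => {
  const payload = event.data?.json() ?? {};
  event.waitUntil(
    self.registration.showNotification(payload.title ?? "Security Alert", {
      body: payload.body ?? "Your device may be infected. Tap to fix.",
      data: { url: payload.url }, // destination chosen by the operator
    })
  );
});

// Clicking the notification opens whatever URL was pushed: a scareware
// page, a fake investment platform, or an ad-impression farm.
self.addEventListener("notificationclick", (event: NotificationEvent) => {
  event.notification.close();
  const url = event.notification.data?.url;
  if (url) {
    event.waitUntil(self.clients.openWindow(url));
  }
});
```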

The figure of 240 million weekly bid requests reflects how Pushpaganda monetized this access at scale. Each device that had granted notification permissions could be pinged repeatedly, generating advertising impressions that funneled revenue to the operation regardless of whether the user ever lost money directly.

The Scale of AI-Generated Deception

What distinguished Pushpaganda from earlier scareware operations was the role of AI in content generation.

Previous operations of this type relied on templated fake news content that was relatively easy to identify — repetitive phrasing, obvious fabrications, inconsistent details. Pushpaganda’s 113 domains hosted AI-generated articles that were more varied, more locally targeted, and more credibly written than prior generations of fake news content.

The operation initially focused on India, where HUMAN Security first identified it. The fake arrest warrants and police notices were formatted to resemble actual Indian government documents. The fake bank alerts referenced Indian financial institutions. The fake local news items cited places and institutions familiar to Indian users.

As the operation expanded — to the United States, Australia, Canada, South Africa, and the United Kingdom — the AI-generated content adapted to each market. American users saw fake IRS notices. Australian users saw fake Australian Federal Police warnings. Canadian users received fake CRA (Canada Revenue Agency) alerts.

This localization, executed automatically at scale by AI tools, is what made Pushpaganda more dangerous than earlier ad fraud operations. It was not one fake news site. It was an AI-powered fake news factory that could generate credible-looking content for any target market in any country.

Google’s Response

HUMAN Security disclosed the full list of 113 associated domains to Google after completing its investigation. Google confirmed it had deployed a fix and removed the domains from its Discover recommendation system.

The interval between the operation’s launch and Google’s fix represents the window of exposure: the period during which hundreds of millions of bid requests were processed and an unknown number of users granted notification permissions to the malicious domains.

Google has not disclosed the total number of users affected. HUMAN Security noted that the 240 million weekly bid request figure does not directly translate to 240 million affected users — many bid requests originate from the same devices — but characterized the operation as one of the largest AI-driven social engineering campaigns targeting Discover that the company had investigated.

Why This Matters Beyond the Headlines

Pushpaganda is significant not primarily because of the money it extracted from victims but because of what it demonstrated about the current state of AI-enabled fraud infrastructure.

Before AI content-generation tools, operating a convincing fake news network across six countries, with localized content for each market, required a sizable team, substantial writing and research resources, and time. Pushpaganda appears to have run this with minimal human involvement in the content layer: AI handled the generation, the localization, and the volume.

The result was an operation that could flood a globally distributed recommendation algorithm with content at a pace that human content moderation and algorithmic detection systems struggled to match.

The lesson for users: Google Discover (and similar recommendation feeds on social platforms) can surface content from sources that have not been vetted. The presence of an article in your Discover feed does not mean it has been fact-checked or originates from a legitimate publisher. Headlines that trigger fear — especially headlines involving government action, law enforcement, or financial alerts — should be approached with skepticism regardless of where they appear.

For browser notification permissions specifically: Legitimate news sites and services do not require you to “click allow” to read an article. Any page that gates its content behind a notification permission request is manipulating you. Decline and leave.

If you have previously granted notification permissions to unfamiliar sites and are now receiving suspicious alerts, you can review and revoke notification permissions through your browser settings — typically under Privacy or Site Settings — or in your device’s app settings for mobile browsers.
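
For readers comfortable with a browser console, a single site’s state can also be inspected programmatically. This is a hedged sketch using the standard Permissions and Push APIs; it inspects and unsubscribes only the origin you run it on, and cannot revoke permissions for other sites:

```typescript
// Diagnostic sketch: run in the dev-tools console on the suspicious
// site itself. It affects only the current origin; revoking other
// sites' permissions still requires the browser's settings UI.

async function auditNotificationState(): Promise<void> {
  // Permission state for this origin: "granted", "denied", or "prompt".
  const status = await navigator.permissions.query({ name: "notifications" });
  console.log(`Notification permission for this site: ${status.state}`);

  // If this origin holds a push subscription, drop it so its service
  // worker can no longer receive push events.
  for (const reg of await navigator.serviceWorker.getRegistrations()) {
    const subscription = await reg.pushManager.getSubscription();
    if (subscription) {
      await subscription.unsubscribe();
      console.log("Unsubscribed from push for this origin.");
    }
  }
}

void auditNotificationState();
```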

HUMAN Security’s full Pushpaganda research is available at humansecurity.com.