
900,000 Users Compromised: How to Choose an AI Privacy Extension That Isn't Spying on You

In January 2026, two malicious Chrome extensions with 900,000+ users were caught exfiltrating ChatGPT and DeepSeek conversations every 30 minutes. With 67% of AI Chrome extensions actively collecting user data, here's how to evaluate whether your privacy tool is actually trustworthy.

March 5, 2026 · 8 min read
Chrome extension security · malicious extension · ChatGPT privacy · AI data protection

The January 2026 Incident

In January 2026, security researchers discovered two malicious Chrome extensions that had compromised 900,000+ users.

The extensions' names were deliberately chosen to appear as legitimate AI enhancement tools:

  • "Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI" — 600,000+ users
  • "AI Sidebar with Deepseek, ChatGPT, Claude and more" — 300,000+ users

Both extensions were doing the same thing: exfiltrating complete ChatGPT and DeepSeek conversations every 30 minutes to a remote command-and-control server.

The data flowing out included source code, personally identifiable information, legal matters under discussion, business strategies, and financial data. Everything users had typed into their AI chat sessions — everything they considered private — was being transmitted to unknown parties.

How the Extensions Bypassed Trust Signals

The extensions requested permission to "collect anonymous, non-identifiable analytics data" — language calculated to seem harmless during the permissions review.

In reality, they captured the full content of AI conversations. The analytics permission was the vehicle; the AI conversation exfiltration was the payload.

This technique — using innocuous-sounding permissions to enable harmful data collection — represents the operational playbook that has made the Chrome extension threat category so persistent. Users who would never click a phishing link installed these extensions deliberately, from the Chrome Web Store, because they appeared to offer AI productivity benefits.

The Broader Pattern: 67% of AI Extensions Collect Your Data

The January 2026 incident was not an outlier. Research by Incogni found that 67% of AI Chrome extensions actively collect user data — a figure corroborated across multiple independent analyses of the extension ecosystem.

This is the core paradox of the AI privacy extension market: the extensions that users install specifically to protect their AI privacy are, in the majority of cases, collecting that same data.

The market created a category — AI privacy tools for browsers — but did not create reliable mechanisms for users to verify whether a given extension actually provides privacy or merely claims to. The result: a market where the tool installed for protection is itself the attack vector.

The Architecture That Distinguishes Safe from Unsafe

The January 2026 incident illustrates a specific technical distinction that users should understand when evaluating any AI-adjacent Chrome extension.

Unsafe architecture — routing through the extension's servers:

  1. User types into ChatGPT
  2. Extension intercepts the text
  3. Extension transmits text to its own backend server for "processing"
  4. Backend server returns processed text
  5. Extension submits to ChatGPT

In this architecture, every prompt passes through the extension developer's infrastructure. The extension developer has full access to conversation content. If the extension is malicious (or is later acquired by a malicious actor, or is breached), all that content is exposed.

Safe architecture — local processing only:

  1. User types into ChatGPT
  2. Extension intercepts the text
  3. Extension processes the text locally in the browser (using the same JavaScript runtime that powers the extension)
  4. Processed text is submitted to ChatGPT directly

In this architecture, nothing leaves the user's browser except the final processed text submitted to the AI service. The extension developer's infrastructure is never in the data path.

The question to ask of any AI privacy extension: where does the processing happen? If the answer involves the extension's own servers, your data is flowing through a third party.
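The safe pattern above can be sketched as a pure function: all masking happens in the browser's JavaScript context, and only the already-redacted string is ever handed to the AI service. The patterns and the `redactPrompt` name below are illustrative, not any particular extension's implementation.

```javascript
// Minimal sketch of the "safe" local-processing pattern: PII is masked
// in the browser before the prompt is submitted. The regexes here are
// deliberately simplistic examples, not production detection rules.
const PII_PATTERNS = [
  { label: "EMAIL", regex: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { label: "PHONE", regex: /\+?\d[\d\s().-]{7,}\d/g },
];

function redactPrompt(text) {
  let redacted = text;
  for (const { label, regex } of PII_PATTERNS) {
    redacted = redacted.replace(regex, `[${label}]`);
  }
  return redacted; // only this string ever leaves the browser
}

// Example: the raw prompt never touches a third-party server.
console.log(redactPrompt("Contact jane.doe@example.com about case 8821"));
// → "Contact [EMAIL] about case 8821"
```

The key property is architectural, not algorithmic: because `redactPrompt` runs entirely in the page's JavaScript runtime, there is no network call in the data path before submission, and nothing for a backend server to log.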

Five Questions to Ask Before Installing an AI Privacy Extension

Given that 67% of AI Chrome extensions collect user data (Incogni research), and given that malicious extensions can appear on the Chrome Web Store with hundreds of thousands of users, the evaluation framework matters.

1. Where is PII detection processed? Ask directly or find in the privacy policy: is PII detection performed locally in the browser, or does text get sent to the extension's backend servers for analysis? Local processing means the extension developer never sees your text.

2. What happens to conversation content? Extensions that "protect" by routing through their own proxy servers have full access to everything you type. Extensions that modify text locally and submit directly to the AI service do not.

3. Who is the verified publisher? Chrome Web Store's publisher verification system is imperfect — the January 2026 extensions passed — but a verified publisher with an established identity and a business model independent of data collection is more trustworthy than an anonymous publisher with a free extension and no apparent revenue model.

4. Is there independent security certification? ISO 27001 certification covers the vendor's information security management system, including its extension development and distribution practices. Independent security audits provide external verification of the claims being made.

5. What is the business model? The most durable signal: how does this free extension developer make money? If there is no apparent revenue model, user data is likely the product. An extension that is part of a paid SaaS product with a verifiable business model has less incentive to monetize user data covertly.
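One quick, if shallow, complement to these questions is to look at what an extension's `manifest.json` actually requests. The sketch below flags broad permissions; the threshold set is illustrative, and a real audit would go far deeper than permission strings.

```javascript
// Heuristic review of a Chrome extension manifest: flag permissions
// that grant broad access to browsing data. The BROAD_PERMISSIONS set
// is an example threshold, not an authoritative risk taxonomy.
const BROAD_PERMISSIONS = new Set([
  "<all_urls>", "webRequest", "tabs", "history", "cookies",
]);

function flagRiskyPermissions(manifest) {
  const requested = [
    ...(manifest.permissions ?? []),
    ...(manifest.host_permissions ?? []),
  ];
  return requested.filter((p) => BROAD_PERMISSIONS.has(p) || p === "*://*/*");
}

// Example: an "AI sidebar" that wants to read every page it sees.
const manifest = {
  permissions: ["storage", "tabs"],
  host_permissions: ["<all_urls>"],
};
console.log(flagRiskyPermissions(manifest)); // → ["tabs", "<all_urls>"]
```

A non-empty result is not proof of malice — legitimate AI sidebars often need host access — but it tells you what the extension could read, which is the right starting point for the five questions above.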

What the January 2026 Incident Reveals About AI Security

The 900,000+ compromised users in January 2026 were not unsophisticated. They were professionals who had sought out AI productivity tools, who wanted privacy protection for their AI interactions, and who installed what appeared to be legitimate tools from the Chrome Web Store.

The attack worked because:

The extensions offered real functionality: They were not purely malicious — they provided AI-related features alongside the exfiltration. This made them functionally indistinguishable from legitimate tools during casual use.

Trust signals were manufactured: Hundreds of thousands of users create social proof. Users who saw 600,000 installations were more likely to install, not less.

The permission request was designed to not trigger concern: "Anonymous, non-identifiable analytics" is exactly the kind of permission language that users approve without scrutiny.

The exfiltration was scheduled to minimize detection: 30-minute intervals are frequent enough to capture all conversations but infrequent enough to avoid triggering anomaly-based security monitoring.

The Post-Incident Trust Framework

Following the January 2026 incident, enterprise IT teams evaluating AI privacy extensions for deployment to their workforce should apply a more rigorous trust framework than existed before.

The minimum required elements:

  • Local processing architecture — verified by technical review or independent audit, not just claimed in marketing
  • Publisher identity verification — established company with verifiable business model and history
  • Independent security certification — ISO 27001 or equivalent
  • Privacy policy that specifically addresses extension data flows — including what is collected, where it is sent, and under what circumstances
  • No routing through extension developer's servers for core privacy functionality

Organizations that deploy AI extensions to hundreds or thousands of employees should also consider:

  • Regular audits of installed extensions for data exfiltration behavior
  • Network monitoring to detect unexpected external connections from browser processes
  • Allowlists of approved extensions deployed via Chrome Enterprise policy
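For the allowlist approach, Chrome Enterprise supports a default-deny configuration via the `ExtensionInstallBlocklist` and `ExtensionInstallAllowlist` policies (delivered as a managed-policies JSON file on Linux, or the equivalent registry keys on Windows). A rough sketch — the 32-character extension ID shown is a placeholder, not a real extension:

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "abcdefghijklmnopabcdefghijklmnop"
  ]
}
```

Blocking `*` and then allowlisting vetted IDs means a malicious extension like the January 2026 pair cannot be installed at all, regardless of how convincing its Web Store listing looks.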

The January 2026 incident was a warning. The 67% data collection rate across AI extensions suggests the warning was justified.


anonym.legal's Chrome Extension processes PII detection locally — no conversation content is transmitted to anonym.legal's servers during PII detection. The anonymization occurs in the browser before the modified prompt is submitted to the AI service. Published by anonym.legal, ISO 27001 certified.

