
After the 900K-User Malicious Extension Incident: How to Choose a Safe AI Privacy Extension

In January 2026, two malicious Chrome extensions installed by 900K+ users exfiltrated complete ChatGPT and DeepSeek conversations every 30 minutes. The tool users installed for privacy was itself the attack. Here's the security verification checklist.

March 5, 2026 · 8 min read
malicious Chrome extension · AI extension security audit · extension trust verification · local processing architecture · 900K extension incident

The January 2026 Incident

Two Chrome extensions discovered in January 2026 — "Chat GPT for Chrome with GPT-5, Claude Sonnet and DeepSeek AI" (600,000+ users) and "AI Sidebar with Deepseek, ChatGPT, Claude and more" (300,000+ users) — were found to be exfiltrating complete AI conversation histories every 30 minutes to a remote command-and-control server.

The extensions presented themselves as privacy and AI enhancement tools. Their Chrome Web Store descriptions emphasized user data protection and privacy-first design. Their actual behavior — confirmed by Astrix Security's analysis — was to capture complete conversation histories from ChatGPT, DeepSeek, and other AI platforms, then transmit them to an attacker-controlled server. The captured conversations included source code, personally identifiable information, legal strategy discussions, business plans, and financial data.

The extensions requested permission to "collect anonymous, non-identifiable analytics data." In practice, they collected fully identifiable, highly sensitive data at full fidelity.

The Security Inversion Problem

Users who specifically install AI privacy extensions are expressing a preference for tools that protect their AI conversations. The January 2026 incident documented the worst-case outcome of that preference: the tool installed for privacy purposes is itself the data exfiltration mechanism.

This is not merely a risk to be weighed — it is a documented outcome affecting 900,000 users simultaneously. The Chrome Web Store's automated scanning did not detect the malicious behavior because the extensions' data collection was disguised as analytics. The user reviews did not reveal the problem because users had no visibility into network traffic.

Incogni's research found that 67% of AI Chrome extensions actively collect user data — a figure that includes both disclosed analytics collection and undisclosed exfiltration. The meaningful question for enterprise IT teams deploying AI privacy extensions is not "does this extension collect any data?" but "can I verify that this extension's data flow is architecturally incapable of exfiltrating conversation content?"

The Architecture Verification Test

The verification test for trustworthy local processing is technical, not declarative: can the extension's claimed local processing be independently verified by network monitoring?

An extension that processes PII detection locally — running the detection model client-side using TensorFlow.js, WASM, or a local binary — produces zero outbound network traffic during the PII detection phase. Network monitoring on the user's workstation should show no connection to any external server between the user's paste event and the submission to the AI platform. The only outbound traffic should be the anonymized prompt going to the AI provider.
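The property described above can be made concrete. A minimal sketch of client-side redaction is below; the patterns and function names are illustrative assumptions, not code from any real extension, and a production detector would use a local model rather than three regexes. The point is architectural: nothing in this path issues a `fetch()` or `XMLHttpRequest`, so a network monitor sees zero outbound traffic during detection.

```typescript
// Hypothetical sketch of local, in-browser PII redaction.
// Patterns and names are illustrative, not any real extension's code.
const PII_PATTERNS: Record<string, RegExp> = {
  EMAIL: /[\w.+-]+@[\w-]+\.[\w.]+/g,
  US_PHONE: /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g,
  SSN: /\b\d{3}-\d{2}-\d{4}\b/g,
};

function anonymize(prompt: string): string {
  // Runs entirely in the extension's page context: no fetch(), no
  // XMLHttpRequest. Network monitoring should therefore show no
  // connection between the paste event and submission to the AI platform.
  let out = prompt;
  for (const [label, pattern] of Object.entries(PII_PATTERNS)) {
    out = out.replace(pattern, `[${label}]`);
  }
  return out;
}

console.log(anonymize("Contact jane.doe@example.com or 555-867-5309."));
// → "Contact [EMAIL] or [US_PHONE]."
```

Only the redacted string ever leaves the machine, and only to the AI provider, which is exactly the traffic pattern the verification test checks for.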

An extension that routes traffic through a proxy server — even if the proxy is described as a "privacy-preserving relay" — sends user content to a third-party server. The security of the proxy's operator is now part of the user's threat model.

For enterprise IT teams adding browser extensions to the corporate approved list, the verification protocol is: deploy the extension in a monitored network environment, generate representative test traffic, and verify that no outbound connection to the extension publisher's servers occurs during PII processing. Extensions that cannot pass this test should not be approved for enterprise deployment regardless of their stated privacy commitments.
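The decision rule in that protocol reduces to a simple audit over the captured traffic. The sketch below assumes a connection log already exported from whatever monitor the team uses (mitmproxy, tcpdump, etc.); the interface and host names are hypothetical placeholders.

```typescript
// Hypothetical audit step: given connections captured during a
// PII-processing test run, flag any host with outbound traffic that is
// not on the approved list (normally just the AI provider's endpoint).
interface Connection {
  host: string;
  bytesOut: number;
}

function auditCapture(
  connections: Connection[],
  allowedHosts: Set<string>,
): string[] {
  // Any outbound bytes to a non-allowlisted host during the
  // anonymization phase fails the verification test.
  return connections
    .filter((c) => c.bytesOut > 0 && !allowedHosts.has(c.host))
    .map((c) => c.host);
}

const capture: Connection[] = [
  { host: "api.openai.com", bytesOut: 4096 }, // expected: the AI provider
  { host: "telemetry.extension-vendor.example", bytesOut: 2048 }, // unexpected
];
const violations = auditCapture(capture, new Set(["api.openai.com"]));
if (violations.length > 0) {
  console.log(`FAIL: unexpected outbound hosts: ${violations.join(", ")}`);
}
```

An extension that passes this check on representative traffic has demonstrated, not merely asserted, that its anonymization step keeps content on the workstation.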

The local processing architecture — where all detection runs client-side with no server-side component for the anonymization step — is the architectural property that makes the extension's privacy claims independently verifiable, rather than requiring trust in the publisher's assertions.

