The January 2026 Incident
In January 2026, security researchers discovered two malicious Chrome extensions that had compromised more than 900,000 users.
The extensions' names were deliberately chosen to make them look like legitimate AI enhancement tools:
- "Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI" — 600,000+ users
- "AI Sidebar with Deepseek, ChatGPT, Claude and more" — 300,000+ users
Both extensions did the same thing: they exfiltrated complete ChatGPT and DeepSeek conversations every 30 minutes to a remote command-and-control server.
The exfiltrated data included source code, personally identifiable information, legal matters under discussion, business strategies, and financial data. Everything users had typed into their AI chat sessions, everything they considered private, was being transmitted to unknown parties.
How the Extensions Bypassed Trust Signals
The extensions requested permission to "collect anonymous, non-identifiable analytics data", wording calculated to seem harmless during the permissions review.
In reality, they captured the full content of AI conversations. The analytics permission was the vehicle; the exfiltration of AI conversations was the payload.
This technique, using innocuous-sounding permissions to enable harmful data collection, is the operational playbook that has made the Chrome extension threat category so persistent. Users who would never click a phishing link installed these extensions deliberately, from the Chrome Web Store, because they appeared to offer AI productivity benefits.
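Part of why this playbook persists is that the underlying mechanics are trivial to build. The following is an illustrative sketch only (the real extensions' code is not public, and all names here are hypothetical) of the buffer-and-flush pattern behind a 30-minute exfiltration cadence: capture every chat message, then drain the buffer to a remote server on a timer.

```javascript
// Illustrative sketch of the capture/flush pattern. In a real malicious
// extension, capture() would be fed by a content script reading the chat DOM,
// and the drained batch would be POSTed to a command-and-control server.
const FLUSH_INTERVAL_MS = 30 * 60 * 1000; // the 30-minute cadence from the incident

function makeExfilBuffer(now = Date.now) {
  let lastFlush = now();
  const captured = [];
  return {
    // Called on every intercepted chat message.
    capture(text) { captured.push(text); },
    // Called on a timer tick: returns the batch to transmit once the
    // interval has elapsed, otherwise null. Draining resets the timer.
    pendingBatch() {
      if (now() - lastFlush < FLUSH_INTERVAL_MS) return null;
      lastFlush = now();
      return captured.splice(0);
    },
  };
}
```

The long flush interval is the point: each drain captures every conversation since the last one, while the low request rate stays under the radar of volume-based anomaly detection.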
The Broader Pattern: 67% of AI Extensions Collect Your Data
The January 2026 incident was not an outlier. Research by Incogni found that 67% of AI Chrome extensions actively collect user data, a figure corroborated across multiple independent analyses of the extension ecosystem.
This is the core paradox of the AI privacy extension market: the extensions that users install specifically to protect their AI privacy are, in the majority of cases, collecting that same data.
The market created a category, AI privacy tools for browsers, but did not create reliable mechanisms for users to verify whether a given extension actually provides privacy or merely claims to. The result is a market in which the tool installed for protection is itself the attack vector.
The Architecture That Distinguishes Safe from Unsafe
The January 2026 incident illustrates a specific technical distinction that users should understand when evaluating any AI-adjacent Chrome extension.
Unsafe architecture — routing through the extension's servers:
- User types into ChatGPT
- Extension intercepts the text
- Extension transmits the text to its own backend server for "processing"
- Backend server returns the processed text
- Extension submits to ChatGPT
In this architecture, every prompt passes through the extension developer's infrastructure. The developer has full access to conversation content. If the extension is malicious, is later acquired by a malicious actor, or is breached, all of that content is exposed.
Safe architecture — local processing only:
- User types into ChatGPT
- Extension intercepts the text
- Extension processes the text locally in the browser (using the same JavaScript runtime that powers the extension)
- Processed text is submitted to ChatGPT directly
In this architecture, nothing leaves the user's browser except the final processed text submitted to the AI service. The extension developer's infrastructure is never in the data path.
The question to ask of any AI privacy extension: where does the processing happen? If the answer involves the extension's own servers, your data is flowing through a third party.
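The safe data flow above can be made concrete. This is a minimal sketch, not any specific extension's implementation (the function names and regexes are illustrative, and production PII detectors are far more extensive): the text is masked inside the browser's own JavaScript runtime, and only the masked result is ever submitted.

```javascript
// Local PII masking: runs entirely in the browser. No endpoint other than
// the AI service itself ever sees the text.
const PII_PATTERNS = [
  // Two illustrative patterns; real detectors cover many more categories.
  { label: "[EMAIL]", re: /[\w.+-]+@[\w-]+\.[\w.]+/g },
  { label: "[SSN]",   re: /\b\d{3}-\d{2}-\d{4}\b/g },
];

function maskLocally(text) {
  let out = text;
  for (const { label, re } of PII_PATTERNS) out = out.replace(re, label);
  return out;
}

// Safe flow: intercept, process locally, submit the masked text directly.
// submitToAI stands for whatever call sends the prompt to the AI service;
// no developer-controlled server is in the path.
function safeSubmit(userText, submitToAI) {
  return submitToAI(maskLocally(userText));
}
```

The unsafe architecture differs in exactly one step: `maskLocally` is replaced by a network round-trip to the developer's backend, which is what puts the developer in the data path.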
Five Questions to Ask Before Installing an AI Privacy Extension
Given that 67% of AI Chrome extensions collect user data (per Incogni's research), and given that malicious extensions with hundreds of thousands of users can appear on the Chrome Web Store, the evaluation framework matters.
1. Where is PII detection processed? Ask directly, or find the answer in the privacy policy: is PII detection performed locally in the browser, or is text sent to the extension's backend servers for analysis? Local processing means the extension developer never sees your text.
2. What happens to conversation content? Extensions that "protect" you by routing through their own proxy servers have full access to everything you type. Extensions that modify text locally and submit it directly to the AI service do not.
3. Who is the verified publisher? The Chrome Web Store's publisher verification system is imperfect (the January 2026 extensions passed it), but a verified publisher with an established identity and a business model independent of data collection is more trustworthy than an anonymous publisher with a free extension and no apparent revenue model.
4. Is there independent security certification? ISO 27001 certification covers a vendor's information security management system, including its extension development and distribution practices. Independent security audits provide external verification of the claims being made.
5. What is the business model? The most durable signal: how does this free extension's developer make money? If there is no apparent revenue model, user data is likely the product. An extension that is part of a paid SaaS product with a verifiable business model has less incentive to monetize user data covertly.
What the January 2026 Incident Reveals About AI Security
The 900,000+ compromised users in January 2026 were not unsophisticated. They were professionals who had sought out AI productivity tools, who wanted privacy protection for their AI interactions, and who installed what appeared to be legitimate tools from the Chrome Web Store.
The attack worked because:
The extensions offered real functionality: They were not purely malicious; they provided AI-related features alongside the exfiltration. This made them functionally indistinguishable from legitimate tools during casual use.
Trust signals were manufactured: Hundreds of thousands of users create social proof. Users who saw 600,000 installations were more likely to install, not less.
The permission request was designed not to trigger concern: "Anonymous, non-identifiable analytics" is exactly the kind of permission language that users approve without scrutiny.
The exfiltration was scheduled to minimize detection: 30-minute intervals are frequent enough to capture every conversation but infrequent enough to avoid triggering anomaly-based security monitoring.
The Post-Incident Trust Framework
Following the January 2026 incident, enterprise IT teams evaluating AI privacy extensions for deployment to their workforce should apply a more rigorous trust framework than existed before.
The minimum required elements:
- Local processing architecture — verified by technical review or independent audit, not just claimed in marketing
- Publisher identity verification — an established company with a verifiable business model and history
- Independent security certification — ISO 27001 or equivalent
- A privacy policy that specifically addresses extension data flows — what is collected, where it is sent, and under what circumstances
- No routing through the extension developer's servers for core privacy functionality
Organizations that deploy AI extensions to hundreds or thousands of employees should also consider:
- Regular audits of installed extensions for data-exfiltration behavior
- Network monitoring to detect unexpected external connections from browser processes
- Allowlists of approved extensions deployed via Chrome enterprise policy
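As a concrete example of the allowlist approach, Chrome's enterprise policies `ExtensionInstallBlocklist` and `ExtensionInstallAllowlist` can block all extensions by default and permit only vetted IDs. A minimal policy sketch (the 32-character extension ID is a placeholder; on Linux such a file would typically be deployed under a managed-policies directory, while Windows and macOS use registry keys or configuration profiles):

```json
{
  "ExtensionInstallBlocklist": ["*"],
  "ExtensionInstallAllowlist": [
    "abcdefghijklmnopabcdefghijklmnop"
  ]
}
```

With this default-deny posture, a malicious extension never reaches employee browsers regardless of how convincing its listing or install count appears.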
The January 2026 incident was a warning. The 67% data collection rate across AI extensions suggests the warning was justified.
anonym.legal's Chrome Extension processes PII detection locally: no conversation content is transmitted to anonym.legal's servers during PII detection. Anonymization occurs in the browser before the modified prompt is submitted to the AI service. Published by anonym.legal, ISO 27001 certified.
Sources:
- The Hacker News: Two Chrome Extensions Caught Stealing ChatGPT and DeepSeek Conversations
- OX Security: Malicious AI Chrome Extensions Steal ChatGPT/DeepSeek Conversations
- Incogni: Ranking AI-Powered Chrome Extensions by Privacy Risk
- Caviard.AI: Best Privacy Chrome Extensions for AI Assistants