
After the 900K-User Malicious Extension...

In January 2026, two malicious Chrome extensions installed by 900K+ users exfiltrated complete ChatGPT and DeepSeek conversations every 30 minutes.

April 16, 2026 · 8 min read
malicious Chrome extension, AI extension security audit, extension trust verification, local processing architecture, 900K extension incident

The January 2026 Incident

Two Chrome extensions discovered in January 2026 — "Chat GPT for Chrome with GPT-5, Claude Sonnet and DeepSeek AI" (600,000+ users) and "AI Sidebar with Deepseek, ChatGPT, Claude and more" (300,000+ users) — were found to be exfiltrating complete AI conversation histories every 30 minutes to a remote command-and-control server.

The extensions presented themselves as privacy and AI enhancement tools. Their Chrome Web Store descriptions emphasized user data protection and privacy-first design. Their actual behavior, confirmed by Astrix Security's analysis, was to capture complete conversation histories from ChatGPT, DeepSeek, and other AI platforms, then transmit them to an attacker-controlled server. The captured conversations included source code, personally identifiable information, legal strategy discussions, business plans, and financial data.

The extensions requested permission to "collect anonymous, non-identifiable analytics data." They actually collected completely identifiable, highly sensitive data at maximum fidelity.

The Security Inversion Problem

Users who specifically install AI privacy extensions are expressing a preference for tools that protect their AI conversations. The January 2026 incident documented the worst-case outcome of that preference: the tool installed for privacy purposes is itself the exfiltration mechanism.

This is not merely a risk to be weighed; it is a documented outcome affecting 900,000 users simultaneously. The Chrome Web Store's automated scanning did not detect the malicious behavior because the extensions' data collection was disguised as analytics. User reviews did not reveal the problem because users had no visibility into network traffic.

Incogni's research found that 67% of AI Chrome extensions actively collect user data, a figure that includes both disclosed analytics collection and undisclosed exfiltration. The meaningful question for enterprise IT teams deploying AI privacy extensions is not "does this extension collect any data?" but "can I verify that this extension's data flow is architecturally incapable of exfiltrating conversation content?"

The Architecture Verification Test

The verification test for trustworthy local processing is technical, not declarative: can the extension's claimed local processing be independently verified by network monitoring?

An extension that performs PII detection locally, running the detection model client-side using TensorFlow.js, WASM, or a local binary, produces zero outbound network traffic during the PII detection phase. Network monitoring on the user's workstation should show no connection to any external server between the user's paste event and the submission to the AI platform. The only outbound traffic should be the anonymized prompt going to the AI provider.
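To make the property concrete, here is a minimal sketch of fully client-side PII detection. The patterns and placeholder tokens are illustrative, not any product's actual rules; the point is that detection and replacement happen entirely in-process, with no network call before the anonymized text leaves the machine.

```javascript
// Illustrative entity patterns; a real detector would use a trained model
// (e.g. via TensorFlow.js or WASM), but the locality property is the same.
const PII_PATTERNS = [
  { label: "EMAIL", regex: /[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}/g },
  { label: "SSN", regex: /\b\d{3}-\d{2}-\d{4}\b/g },
  { label: "PHONE", regex: /\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b/g },
];

// Replace each detected entity with a placeholder token, entirely locally:
// no fetch(), no XMLHttpRequest, nothing observable on the wire.
function anonymize(text) {
  let result = text;
  for (const { label, regex } of PII_PATTERNS) {
    result = result.replace(regex, `[${label}]`);
  }
  return result;
}

const input = "Contact jane.doe@example.com or call 555-867-5309.";
console.log(anonymize(input)); // → "Contact [EMAIL] or call [PHONE]."
```

Because `anonymize` is pure string processing, a network monitor watching the workstation during this step should record nothing at all, which is exactly what the verification test checks.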

An extension that routes traffic through a proxy server, even one described as a "privacy-preserving relay," sends user content to a third-party server. The security of the proxy's operator is now part of the user's threat model.

For enterprise IT teams adding browser extensions to the corporate approved list, the verification protocol is: deploy the extension in a monitored network environment, generate representative test traffic, and verify that no outbound connection to the extension publisher's servers occurs during PII processing. Extensions that cannot pass this test should not be approved for enterprise deployment regardless of their stated privacy commitments.
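The final step of that protocol can be automated. Assuming the monitoring proxy exports the list of hostnames contacted during the test window, a small check can flag any contact with the publisher's infrastructure. The domain names below are hypothetical examples standing in for a real publisher's domains:

```javascript
// Hypothetical publisher infrastructure to screen for; in practice this list
// comes from WHOIS/DNS research on the extension vendor.
const PUBLISHER_DOMAINS = [
  "extension-vendor.example",
  "analytics.extension-vendor.example",
];

// Given the hostnames observed while representative test traffic was
// generated, fail the audit if any of them belong to the publisher.
function auditHosts(observedHosts) {
  const violations = observedHosts.filter((host) =>
    PUBLISHER_DOMAINS.some((d) => host === d || host.endsWith("." + d))
  );
  return { pass: violations.length === 0, violations };
}

// A run that only contacted the AI platform passes; one that also reached
// the publisher's analytics endpoint does not.
console.log(auditHosts(["chatgpt.com"]));
console.log(auditHosts(["chatgpt.com", "analytics.extension-vendor.example"]));
```

This is deliberately a post-hoc log check rather than live interception; pairing it with a transparent proxy that captures all workstation traffic closes the loop.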

The local processing architecture, where all detection runs client-side with no server-side component in the anonymization step, is the architectural property that makes the extension's privacy claims independently verifiable, rather than dependent on trust in the publisher's assertions.
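This property is also visible in the extension's static configuration. A hypothetical manifest for a fully local extension declares no host permissions pointing at the publisher's own infrastructure; the only remote hosts are the AI platforms the user already talks to. This is an illustrative sketch, not any specific product's manifest:

```json
{
  "manifest_version": 3,
  "name": "Example Local PII Anonymizer",
  "version": "1.0.0",
  "permissions": ["storage"],
  "host_permissions": [
    "https://chatgpt.com/*",
    "https://chat.deepseek.com/*"
  ],
  "content_scripts": [
    {
      "matches": ["https://chatgpt.com/*", "https://chat.deepseek.com/*"],
      "js": ["detector.js"]
    }
  ]
}
```

The absence of any publisher-owned domain in `host_permissions`, combined with network monitoring confirming no other outbound traffic, is what turns "we process locally" from a marketing claim into an auditable fact.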


Ready to protect your data?

Start anonymizing PII across 285+ entity types in 48 languages.