
AI Is Now the #1 Data Exfiltration...

77% of employees paste sensitive data into AI tools. GenAI already accounts for 32% of all corporate data exfiltration.

February 17, 2026 · 8 min read
AI security · ChatGPT · data leakage · enterprise security

The AI Data Leakage Crisis

In October 2025, LayerX Security published findings that should concern every CISO: 77% of employees paste data into GenAI tools, with 82% of that activity coming from unmanaged personal accounts.

Even more alarming: GenAI already accounts for 32% of all corporate data exfiltration, making it the #1 vector for unauthorized data movement in the enterprise.

This is not a future problem. It's happening right now, every day, in your organization.

The Numbers Are Staggering

Finding | Data | Source
--- | --- | ---
Employees pasting data into AI | 77% | LayerX 2025
Data exfiltration via AI tools | 32% | LayerX 2025
ChatGPT usage via unmanaged accounts | 67% | LayerX 2025
Daily pastes via personal accounts | 14 per employee | LayerX 2025
Pastes containing sensitive data | 3+ per day | LayerX 2025

On average, employees perform 14 pastes per day via personal accounts, with at least three containing sensitive data. Traditional DLP tools, built around file-centric monitoring, don't even register this activity.

Why Banning AI Doesn't Work

Samsung tried banning ChatGPT after employees leaked source code. It didn't work.

The reality is that AI tools make employees significantly more productive. According to research, developers using AI assistants complete tasks 55% faster. When you ban AI, employees either:

  1. Keep using it through personal accounts (67% already do)
  2. Lose productivity and grow frustrated
  3. Leave for competitors who embrace AI

The answer is not a ban. It's protection.

The 900,000-User Chrome Extension Breach

In December 2025, OX Security discovered two malicious Chrome extensions that had stolen ChatGPT and DeepSeek conversations from 900,000+ users.

One of these extensions had Google's "Featured" badge—the supposed mark of trustworthiness.

The extensions worked by:

  • Intercepting chat conversations in real-time
  • Storing data locally on victims' machines
  • Exfiltrating batches to command-and-control servers every 30 minutes

Even worse: a separate investigation found "free VPN" extensions with over 8 million downloads had been capturing AI conversations since July 2025.

The Solution: Intercept Before Submission

The only way to safely use AI while protecting sensitive data is to anonymize PII before it reaches the AI model.

This is exactly what anonym.legal's Chrome Extension and MCP Server do:

Chrome Extension

  • Intercepts text before you send it to ChatGPT, Claude, or Gemini
  • Automatically detects and anonymizes PII (names, emails, SSNs, etc.)
  • Replaces sensitive data with tokens: "John Smith" → "[PERSON_1]"
  • De-anonymizes AI responses so you see the original names
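The token round trip described above can be sketched in a few lines. This is a toy illustration under stated assumptions, not anonym.legal's actual implementation: a single email regex stands in for full PII detection, and the function names are invented for the example.

```python
import re

def anonymize(text, mapping=None):
    """Replace detected PII with stable tokens; return text plus the token map."""
    mapping = {} if mapping is None else mapping
    # Hypothetical detector: a tiny email regex stands in for real PII detection.
    for match in re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text):
        if match not in mapping.values():
            mapping[f"[EMAIL_{len(mapping) + 1}]"] = match
    for token, original in mapping.items():
        text = text.replace(original, token)
    return text, mapping

def deanonymize(text, mapping):
    """Restore original values in the AI's response."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

prompt, mapping = anonymize("Email john.smith@acme.com about the invoice.")
print(prompt)   # Email [EMAIL_1] about the invoice.
reply = deanonymize("Sent a draft to [EMAIL_1].", mapping)
print(reply)    # Sent a draft to john.smith@acme.com.
```

The key design point is that the token map never leaves your machine: only the tokenized prompt reaches the model.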

MCP Server (for developers)

  • Integrates with Claude Desktop, Cursor, and VS Code
  • Transparent proxy—you interact normally with AI
  • PII is anonymized before prompts reach the model
  • Works with your existing workflows

What Gets Protected

Both tools detect and anonymize 285+ entity types across 48 languages:

  • Personal: Names, email addresses, phone numbers, dates of birth
  • Financial: Credit card numbers, bank accounts, IBANs
  • Government: SSNs, passport numbers, driver's licenses
  • Healthcare: Medical record numbers, patient IDs
  • Corporate: Employee IDs, internal account numbers

Even if your AI chat history is compromised (like those 900,000 users), there's no recoverable PII—only anonymized tokens.
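A handful of the entity types above can be approximated with regexes, as in the toy sketch below. The patterns are illustrative assumptions, not anonym.legal's actual rules; real coverage of 285+ types across 48 languages requires ML-based named-entity recognition, not pattern matching.

```python
import re

# Toy detectors for three of the entity categories listed above.
PATTERNS = {
    "EMAIL":       r"[\w.+-]+@[\w-]+\.[A-Za-z]{2,}",
    "SSN":         r"\b\d{3}-\d{2}-\d{4}\b",
    "CREDIT_CARD": r"\b(?:\d{4}[ -]?){3}\d{4}\b",
}

def detect_pii(text):
    """Return (label, matched_text) pairs for every pattern hit."""
    hits = []
    for label, pattern in PATTERNS.items():
        for m in re.finditer(pattern, text):
            hits.append((label, m.group()))
    return hits

sample = "Card 4111 1111 1111 1111, SSN 078-05-1120, reach me at a@b.io"
print(detect_pii(sample))
```

Names ("John Smith") and free-text identifiers are exactly what regexes miss, which is why detection quality, not tokenization, is the hard part.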

Implementation: 5 Minutes to Protection

Chrome Extension

  1. Download from anonym.legal/features/chrome-extension
  2. Log in with your anonym.legal account
  3. Visit ChatGPT, Claude, or Gemini
  4. Type normally—PII is automatically detected and anonymized before sending

MCP Server (for Claude Desktop)

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "anonym-legal": {
      "command": "npx",
      "args": ["-y", "@anthropic/mcp-server-anonym-legal"],
      "env": {
        "ANONYM_API_KEY": "your-api-key"
      }
    }
  }
}

The Cost of Inaction

Consider what's at risk:

  • Financial data pasted into AI for analysis
  • Customer information used in support queries
  • Source code shared for debugging
  • Legal documents summarized by AI
  • Healthcare records processed for insights

A single data breach costs an average of $4.88 million (IBM 2024). The average healthcare breach now costs $7.42 million (IBM 2025)—down from $9.77 million in 2024, but still far exceeding every other industry.

The Chrome Extension is free. The MCP Server is included with Pro plans starting at €15/month.

Conclusion

AI is here to stay. Your employees are already using it—the question is whether they're doing it safely.

The LayerX findings make it clear: traditional security approaches are blind to AI data exfiltration. You need tools specifically designed to protect data before it reaches AI models.

Start protecting your organization today.



Ready to protect your data?

Start anonymizing PII with 285+ entity types across 48 languages.