
AI Security

Protecting sensitive data in the era of AI and GenAI tools

27 articles


AI Coding Assistants Leak Production PII

Unit test fixtures with real customer records. Log files with production data for debugging. GitHub found 39 million secrets leaked in 2024.

July 6, 2026 · 8 minutes

Internal Wiki PII: Confluence Customer Data

Support teams document processes with screenshots of customer accounts. Over three years, that adds up to thousands of GDPR data minimization violations.

July 5, 2026 · 6 minutes

Screenshot PII: Leaks in Internal Tools

Slack, Teams, Jira, and email regularly receive screenshots containing customer PII. This access-control violation bypasses every DLP tool.

July 2, 2026 · 6 minutes

PII Highlighting vs Compliance Training

62% of employees who use AI tools for customer data work 'sometimes' forget to remove PII first. Here's why automatic highlighting removes the compliance…

June 23, 2026 · 7 minutes

Real-Time PII Prevention Saves $2.2M

IBM found a $2.2M cost difference between prevention and detection. Here's the math that makes real-time PII interception non-optional for security teams.

June 19, 2026 · 8 minutes

GDPR Art. 32: AI Tools PII Monitoring

Enterprise compliance teams need quantitative evidence of AI tool PII controls. Network DLP misses browser AI interactions.

June 18, 2026 · 7 minutes

Real-Time PII Prevention for AI Data Leaks

When an employee types a customer name into ChatGPT, the data leaves organizational control in real-time. Post-hoc DLP cannot un-ring this bell.

June 17, 2026 · 7 minutes

GDPR Support AI: Custom Identifiers

Customer support AI receives customer messages with names, emails, AND order IDs. Standard PII tools strip email addresses but leave order IDs intact.

June 2, 2026 · 7 minutes

Is Your AI Privacy Tool Stealing Your Data?

67% of AI Chrome extensions collect user data. The December 2025 incidents saw 900K users compromised by extensions posing as privacy tools.

April 19, 2026 · 8 minutes

3.8 Daily PII Exposures in Support Teams

Every support agent using ChatGPT makes an average of 3.8 sensitive data pastes per day. For a 100-person team, that's 380 GDPR exposure incidents daily.

April 18, 2026 · 8 minutes

After the 900K-User Extension Incident

In January 2026, two malicious Chrome extensions installed by 900K+ users exfiltrated complete ChatGPT and DeepSeek conversations every 30 minutes.

April 16, 2026 · 8 minutes

Why Policy Fails to Stop ChatGPT PII Leaks

77% of enterprise AI users copy-paste data into chatbot queries. Nearly 40% of uploaded files contain PII or PCI data. HIPAA Security Rule update proposed.

April 15, 2026 · 8 minutes

Enterprise AI: Dev Access Without Risk

Banks banned ChatGPT. Their developers used it from home anyway. 27.4% of all content fed into enterprise AI chatbots contains sensitive data (Zscaler).

April 6, 2026 · 9 minutes

Using Cursor & Claude Without Leaking Code

Cursor loads .env files into AI context by default. A financial services firm lost $12M after proprietary trading algorithms were sent to an AI assistant.

April 5, 2026 · 9 minutes

AI Policy Without Technical Controls Fails

77% of employees share sensitive work data with AI tools despite policies prohibiting it. A government contractor pasted FEMA flood-relief applicant data.

April 4, 2026 · 8 minutes

IDE vs Browser: Developer AI Security

Developers use AI in two environments: IDE (Cursor, VS Code) and browser (Claude.ai, ChatGPT). Each requires different controls.

March 31, 2026 · 8 minutes

83% of AI Extensions Are Never Audited

83% of Chrome extensions with broad permissions have never been security-audited (USENIX 2025). 45% of enterprise employees use unapproved extensions.

March 30, 2026 · 8 minutes

39M GitHub Leaks: AI Coding Risk

67% of developers have accidentally exposed secrets in code (GitGuardian 2025). 39 million secrets leaked on GitHub in 2024, up 25% year-over-year.

March 29, 2026 · 8 minutes

Vibe Coding and PII Leakage: The Security Risk Nobody Talks About

AI-generated code almost never includes PII handling. 73% of vibe-coded applications work with sensitive data without proper safeguards. Here's what developers need to know.

March 16, 2026 · 7 minutes

MCP Server Security 2026: 8,000 Exposed, 492 With No Authentication

8,000+ Model Context Protocol servers are publicly exposed. 492 have no authentication at all. 36.7% are vulnerable to SSRF. How to protect PII in your MCP tool calls.

March 16, 2026 · 7 minutes

Browser DLP: Blocking vs. Anonymization Approaches 2026

Two approaches to browser DLP: blocking prevents PII submission to AI tools; anonymization transforms data before sending. An objective comparison.

March 14, 2026 · 10 minutes

Samsung Lost Source Code to ChatGPT 3 Times

Three separate Samsung engineering teams pasted proprietary code and confidential data into ChatGPT in April 2023. Each incident revealed a different…

March 13, 2026 · 9 minutes

Enterprise AI Bans: Productivity vs Risk

27.4% of enterprise AI chatbot content contains sensitive data, a 156% year-over-year increase. Yet 71…

March 9, 2026 · 9 minutes

Safe AI Privacy Extensions in 2026

In January 2026, two malicious Chrome extensions with 900,000+ users were caught exfiltrating ChatGPT and DeepSeek conversations every 30 minutes.

March 8, 2026 · 8 minutes

Browser DLP for ChatGPT, Claude, and Gemini

Traditional enterprise DLP was built for file transfers and email, not AI chatbots. This guide covers browser-native data loss prevention for ChatGPT.

March 8, 2026 · 12 minutes

900K Users Had Their AI Chats Stolen

Two malicious Chrome extensions stole ChatGPT conversations from 900,000+ users. One of them carried Google's Featured badge.

February 21, 2026 · 6 minutes

AI: The #1 Vector for Data Leaks

77% of employees paste sensitive data into AI tools. GenAI now accounts for up to 32% of all corporate data leaks. Learn how to protect your organization.

February 17, 2026 · 8 minutes

Start protecting your data today

285+ entity types, 48 languages, enterprise-grade security at startup prices.