AI Security
Protecting sensitive data in the age of AI and GenAI tools
27 articles
Code, Tests, and Customer Data: How Development Teams...
Unit test fixtures with real customer records. Log files with production data for debugging. GitHub found 39 million secrets leaked in 2024.
The Internal Wiki PII Problem: Why Your Confluence...
Support teams document processes with screenshots of customer accounts. Over 3 years, that's thousands of GDPR data minimization violations in your...
The Screenshot PII Problem: How Customer Data Leaks...
Slack, Teams, Jira, and email regularly receive screenshots containing customer PII. This access-control violation bypasses every DLP tool.
The Paste-and-Forget Problem: Why Automatic PII...
62% of employees who use AI tools for customer-data work 'sometimes' forget to remove PII first.
The $2.2M Argument for Real-Time PII Prevention...
IBM found a $2.2M cost difference between prevention and detection. Here's the math that makes real-time PII interception non-optional for security...
Proving GDPR Article 32 Compliance for AI Tools...
Enterprise compliance teams need quantitative evidence of AI tool PII controls. Network DLP misses browser AI interactions.
Prevention vs. Detection: Why Real-Time PII...
When an employee types a customer name into ChatGPT, the data leaves organizational control in real-time. Post-hoc DLP cannot un-ring this bell.
Building GDPR-Compliant Customer Support AI...
Customer support AI receives customer messages with names, emails, AND order IDs.
The Privacy Extension Paradox: How to Tell If Your AI...
67% of AI Chrome extensions collect user data. The December 2025 incidents saw 900K users compromised by extensions posing as privacy tools.
The 3.8 Daily PII Exposures Your Support Team Doesn't...
Every support agent using ChatGPT makes an average of 3.8 sensitive data pastes per day.
After the 900K-User Malicious Extension Incident...
In January 2026, two malicious Chrome extensions installed by 900K+ users exfiltrated complete ChatGPT and DeepSeek conversations every 30 minutes.
Why Policy Training Fails to Stop ChatGPT PII Leaks...
77% of enterprise AI users copy-paste data into chatbot queries. Nearly 40% of uploaded files contain PII or PCI data.
The Enterprise AI Paradox: How to Give Developers AI...
Banks banned ChatGPT. Their developers used it from home anyway. 27.4% of all content fed into enterprise AI chatbots contains sensitive data...
The Developer's Guide to Using Cursor and Claude...
Cursor loads .env files into AI context by default. A financial services firm lost $12M after proprietary trading algorithms were sent to an AI...
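As an illustrative mitigation (not taken from the article, and assuming the Cursor version in use honors a repository-level `.cursorignore` file), excluding secret-bearing files from AI context might look like:

```
# .cursorignore — hypothetical example: keep env files and keys out of AI context
.env
.env.*
*.pem
*.key
config/secrets.yml
```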
From FEMA to Finance: Why AI Policy Without Technical...
77% of employees share sensitive work data with AI tools despite policies prohibiting it.
IDE vs. Browser: The Two-Layer Developer AI Security...
Developers use AI in two environments: IDE (Cursor, VS Code) and browser (Claude.ai, ChatGPT). Each requires different controls.
83% of AI Chrome Extensions Are Never...
83% of Chrome extensions with broad permissions have never been security-audited (USENIX 2025). 45% of enterprise employees use unapproved extensions.
39 Million GitHub Secret Leaks in 2024...
67% of developers have accidentally exposed secrets in code (GitGuardian 2025). 39 million secrets leaked on GitHub in 2024, up 25% year-over-year.
Vibe Coding and PII Leaks: The Security Risk That...
AI-generated code rarely includes PII handling. 73% of vibe-coded applications process sensitive data without anonymization.
MCP Server Security 2026: 8,000 Exposed...
8,000+ Model Context Protocol servers are publicly exposed. 492 have no authentication. 36.7% are vulnerable to SSRF.
Blocking vs. Anonymization: Two Approaches to Browser...
Two fundamentally different approaches to keeping PII from reaching AI tools: blocking (preventing submission) versus anonymization (transforming...
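As a rough illustration of the distinction (a minimal sketch, not the product's implementation; the regex and placeholder name are assumptions, and email is the only PII type covered), blocking rejects a prompt that matches a PII pattern, while anonymization replaces the match with a placeholder before the prompt is sent:

```python
import re

# Hypothetical, simplified PII pattern: email addresses only.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def block(prompt: str) -> str:
    # Blocking: refuse to send anything that contains a match.
    if EMAIL_RE.search(prompt):
        raise ValueError("Prompt contains PII and was not sent")
    return prompt

def anonymize(prompt: str) -> str:
    # Anonymization: transform the match into a placeholder, then send.
    return EMAIL_RE.sub("[EMAIL]", prompt)

print(anonymize("Refund order 1042 for jane.doe@example.com"))
# -> "Refund order 1042 for [EMAIL]"
```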
How Samsung Lost Proprietary Source Code to ChatGPT...
Three separate Samsung engineering teams pasted proprietary code and confidential data into ChatGPT in April 2023.
JPMorgan, Goldman Sachs, Apple: Why Enterprise AI...
27.4% of enterprise AI chatbot content contains sensitive data—a 156% year-over-year increase.
900,000 Users Compromised: How to Choose an AI...
In January 2026, two malicious Chrome extensions with 900,000+ users were caught exfiltrating ChatGPT and DeepSeek conversations every 30 minutes.
Browser DLP for ChatGPT, Claude, Gemini...
Traditional enterprise DLP was built for file transfers and email, not AI chatbots.
900,000 Users Had Their AI Chats Stolen—Was Yours One...
Two malicious Chrome extensions stole ChatGPT conversations from 900,000+ users. One had Google's 'Featured' badge.
AI Is Now the #1 Vector for...
77% of employees paste sensitive data into AI tools. GenAI now accounts for 32% of all corporate data exfiltration.
Start Protecting Your Data Today
285+ entity types, 48 languages, enterprise-grade security at startup prices.