隐私攻击类别
匿名破坏
LLM使用写作风格、事实和时间模式将匿名帖子匹配到真实身份。在$1-$4/档案的90%精度下有68%准确率。
属性推断
LLM从文本推断个人属性(地点、收入、年龄),即使未明确陈述。GPT-4达到85%的top-1准确率。
PII提取
从训练数据或提示中提取个人信息。GPT-4的电子邮件提取准确率100%。使用高级攻击增加5倍。
提示注入
操纵LLM代理在任务执行期间泄漏个人数据。在银行场景中约20%的攻击成功率。
Large-scale online deanonymization with LLMs
Simon Lermen (MATS), Daniel Paleka (ETH Zurich), Joshua Swanson (ETH Zurich), Michael Aerni (ETH Zurich), Nicholas Carlini (Anthropic), Florian Tramèr (ETH Zurich)
Published: February 18, 2026
主要发现
68% recall at 90% precision for deanonymization using ESRC framework
方法论
Designed attacks for closed-world setting with scalable attack pipeline using LLMs to: (1) extract identity-relevant features, (2) search for candidate matches via semantic embeddings, (3) reason over top candidates to verify matches and reduce false positives.
ESRC框架
LLM从匿名帖子中提取识别事实
使用事实查询公共数据库(如LinkedIn)
LLM对候选匹配进行推理
置信度评分以最小化假阳性
实验结果
| 数据集 | 召回率 @ 90%精度 | 说明 |
|---|---|---|
| Hacker News → LinkedIn | 68% | vs near 0% for classical methods |
| Reddit cross-community | 8.5% | Multiple subreddits |
| Reddit temporal split | 67% | Same user over time |
| Internet-scale (extrapolated) | 35% | At 1M candidates |
含义
Practical obscurity protecting pseudonymous users online no longer holds. Classical methods achieve near 0% recall under same conditions.
所有研究论文
关于LLM隐私攻击的11项额外同行评审研究
Beyond Memorization: Violating Privacy via Inference with Large Language Models
Robin Staab, Mark Vero, Mislav Balunović, 等人 (ETH Zurich)
85% top-1 accuracy inferring personal attributes from Reddit posts
First comprehensive study on LLM capabilities to infer personal attributes from text. GPT-4 achieved highest accuracy among 9 tested models.
主要发现
- •85% top-1 accuracy, 95% top-3 accuracy at inferring personal attributes
- •100× cheaper and 240× faster than human annotators
- •Tested 9 state-of-the-art LLMs including GPT-4, Claude 2, Llama 2
- •Infers location, income, age, sex, profession from subtle text cues
AutoProfiler: Automated Profile Inference with Language Model Agents
Yuntao Du, Zitao Li, Bolin Ding, 等人 (Virginia Tech, Alibaba, Purdue University)
85-92% accuracy for automated profiling at scale using four specialized LLM agents
Framework using specialized LLM agents (Strategist, Extractor, Retriever, Summarizer) for automated profile inference from pseudonymous platforms.
主要发现
- •Four specialized agents: Strategist, Extractor, Retriever, Summarizer
- •Iterative workflow enables sequential scraping, analysis, and inference
- •Outperforms baseline FTI across all attributes and LLM backbones
- •Short-term memory for Extractor/Retriever, long-term memory for Strategist/Summarizer
Large Language Models are Advanced Anonymizers
Robin Staab, Mark Vero, Mislav Balunović, 等人 (ETH Zurich SRI Lab)
Adversarial anonymization reduces attribute inference from 66.3% to 45.3% after 3 iterations
LLMs can be used defensively in adversarial framework to anonymize text. Outperforms commercial anonymizers in both privacy and utility.
主要发现
- •Adversarial feedback enables anonymization of significantly finer details
- •Attribute inference accuracy drops from 66.3% to 45.3% after 3 iterations
- •Evaluated 13 LLMs on real-world and synthetic online texts
- •Human study (n=50) showed strong preference for LLM-anonymized texts
AgentDAM: Privacy Leakage Evaluation for Autonomous Web Agents
Arman Zharmagambetov, Chuan Guo, Ivan Evtimov, 等人 (Meta AI, CMU)
GPT-4, Llama-3, and Claude web agents are prone to inadvertent use of unnecessary sensitive information
Benchmark measuring if AI web agents follow data minimization principle. Simulates realistic web interactions across GitLab, Shopping, and Reddit.
主要发现
- •Evaluates GPT-4, Llama-3, Claude-powered web navigation agents
- •Measures data minimization compliance: use PII only if 'necessary' for task
- •Agents often leak sensitive information when unnecessary
- •Three test environments: GitLab, Shopping, Reddit web apps
SoK: The Privacy Paradox in Large Language Models
Various researchers
Systematization of 5 distinct privacy incident categories beyond memorization
Comprehensive survey categorizing privacy risks: training data leakage, chat leakage, context leakage, attribute inference, and attribute aggregation.
主要发现
- •Five privacy incident categories identified:
- •1. Training data leakage via regurgitation
- •2. Direct chat leakage through provider breaches
- •3. Indirect context leakage via agents and prompt injection
PII-Scope: A Comprehensive Study on Training Data PII Extraction Attacks in LLMs
Krishna Kanth Nakka, Ahmed Frikha, Ricardo Mendes, 等人 (Various)
PII extraction rates increase up to 5× with sophisticated adversarial capabilities and limited query budget
Comprehensive benchmark for PII extraction attacks. Reveals notable underestimation of PII leakage in existing single-query attacks.
主要发现
- •PII extraction rates can increase up to 5× with sophisticated attacks
- •Existing single-query attacks notably underestimate PII leakage
- •Taxonomy: Black-box (True-prefix, ICL, PII Compass) and White-box (SPT) attacks
- •Hyperparameters like demonstration selection crucial to attack effectiveness
Evaluating LLM-based Personal Information Extraction and Countermeasures
Yupei Liu, Yuqi Jia, Jinyuan Jia, 等人 (Penn State, Duke University)
GPT-4 achieves 100% accuracy extracting emails and 98% for phone numbers from synthetic profiles
Systematic measurement study benchmarking LLM-based personal information extraction (PIE). Proposes prompt injection as novel defense.
主要发现
- •GPT-4: 100% email extraction, 98% phone number extraction on synthetic data
- •Larger LLMs more successful: vicuna-7b achieves 65%/95% vs GPT-4's 100%/98%
- •LLMs better at: emails, phone numbers, addresses, names
- •LLMs worse at: work experience, education, affiliation, occupation
Preserving Privacy in Large Language Models: A Survey on Current Threats and Solutions
Michele Miranda, Elena Sofia Ruzzetti, Andrea Santilli, 等人 (Various)
Comprehensive taxonomy of privacy attacks: training data extraction, membership inference, model inversion
Survey examining privacy threats from LLM memorization. Proposes solutions from dataset anonymization to differential privacy and machine unlearning.
主要发现
- •Privacy attacks covered: Training data extraction, Membership inference, Model inversion
- •Training data extraction: non-adversarial and adversarial prompting
- •Membership inference: shadow models and threshold-based approaches
- •Model inversion: output inversion and gradient inversion
Beyond Data Privacy: New Privacy Risks for Large Language Models
Various researchers
LLM autonomous capabilities create new vulnerabilities for inadvertent data leakage and malicious exfiltration
Explores privacy vulnerabilities from LLM integration into applications and weaponization of autonomous abilities.
主要发现
- •LLM integration creates new privacy vulnerabilities beyond traditional risks
- •Opportunities for both inadvertent leakage and malicious exfiltration
- •Adversaries can exploit systems for sophisticated large-scale privacy attacks
- •Autonomous LLM abilities can be weaponized for data exfiltration
Simple Prompt Injection Attacks Can Leak Personal Data Observed by LLM Agents
Various researchers
15-50% utility drop under attack with ~20% average attack success rate for personal data leakage
Examines prompt injection causing tool-calling agents to leak personal data during task execution. Uses fictitious banking agent scenario.
主要发现
- •16 user tasks from AgentDojo benchmark evaluated
- •15-50 percentage point drop in LLM utility under attack
- •~20% average attack success rate across LLMs
- •Most LLMs avoid leaking passwords due to safety alignments
Membership Inference Attacks on Large-Scale Models: A Survey
Various researchers
First comprehensive review of MIAs targeting LLMs and LMMs across pre-training, fine-tuning, alignment, and RAG stages
Survey analyzing membership inference attacks by model type, adversarial knowledge, strategy, and pipeline stage.
主要发现
- •Analyzes MIAs across: pre-training, fine-tuning, alignment, RAG stages
- •Strong MIAs require training multiple reference models (computationally expensive)
- •Weaker attacks often perform no better than random guessing
- •Tokenizers identified as new attack vector for membership inference
研究中的防御策略
不起作用的方法
- ✗假名化 — LLM破坏用户名、用户标签、显示名称
- ✗文本转图像转换 — 针对多模态LLM仅略微降低
- ✗模型对齐单独 — 目前对阻止推理无效
- ✗简单文本匿名化 — 针对LLM推理不足
有效的方法
- ✓对抗性匿名化 — 将推理从66.3%降低到45.3%
- ✓差分隐私 — 将PII精度从33.86%降低到9.37%
- ✓提示注入防御 — 对LLM基础PIE最有效
- ✓真正的PII删除/替换 — 删除LLM使用的信号
为什么这项研究很重要
这12篇研究论文演示了隐私威胁的根本转变。假名化、用户名和用户标签变更等传统匿名化方法不再足以保护免受有LLM访问权限的决定的对手的伤害。
关键威胁指标
- Hacker News到LinkedIn匹配在90%精度下的68%匿名破坏准确率
- 位置、收入、年龄、职业的85% top-1属性推断准确率
- 100%电子邮件提取和98%电话号码提取(GPT-4)
- 使用复杂多查询攻击的PII泄漏5倍增加
- 每个档案$1-$4的成本使大规模攻击在经济上可行
谁面临风险
- 举报人和活动人士:匿名帖子可能被链接到真实身份
- 专业人士:Reddit活动链接到LinkedIn档案
- 医疗保健患者:成员推断揭示数据是否在训练中
- 任何有历史帖子的人:多年的数据可能被遡及匿名破坏
anonym.legal如何解决这些威胁
anonym.legal提供删除LLM使用的信号的真正匿名化:
- 285+实体类型:名称、位置、日期、时间标记、标识符
- 写作模式破坏:替换显示笔迹风格的文本
- 可逆加密:对需要授权访问的情况使用AES-256-GCM
- 多个运算符:替换、删除、哈希、加密、掩码、自定义
常见问题
什么是基于LLM的匿名破坏?
基于LLM的匿名破坏使用大型语言模型从匿名或假名在线帖子中识别真实个人。与大规模失败的传统方法不同,LLM可以结合写作风格分析(文体学)、陈述事实、时间模式和上下文推理,将匿名档案与真实身份相匹配。研究显示准确率高达68%,而古典方法接近0%,在90%精度下。
LLM匿名破坏有多准确?
研究显示令人担忧的准确率水平:Hacker News到LinkedIn匹配在90%精度下68%召回,Reddit时间分析67%(同一用户随时间),互联网规模35%(100万+候选者)。对于属性推断,GPT-4在仅从Reddit帖子推断位置、收入、年龄和职业等个人属性时达到85% top-1准确率。
什么是ESRC框架?
ESRC(提取-搜索-推理-校准)是一个四步LLM匿名破坏框架:(1)提取 - LLM使用NLP从匿名帖子中提取识别事实,(2)搜索 - 使用提取的事实和语义嵌入查询LinkedIn等公开数据库,(3)推理 - LLM推理候选匹配分析一致性,(4)校准 - 置信度评分以最小化假阳性同时最大化真实匹配。
LLM匿名破坏的成本是多少?
研究显示基于LLM的匿名破坏成本为每个档案$1-$4,使大规模匿名破坏在经济上可行。对于防御性匿名化,使用GPT-4的成本低于每条评论$0.035。这种低成本使国家行为体、公司、跟踪者和恶意个人能够进行大规模隐私攻击。
LLM可以从文本中提取哪些类型的PII?
LLM擅长提取:电子邮件地址(GPT-4 100%准确率)、电话号码(98%)、邮寄地址和名称。它们还可以推断非显式PII:位置、收入水平、年龄、性别、职业、教育、关系状态和出生地,来自微妙的文本线索和写作模式。
什么是成员推断攻击(MIA)?
成员推断攻击确定特定数据是否用于训练AI模型。对于LLM,这揭示了您的个人信息是否在训练数据集中。研究显示电子邮件地址和电话号码特别容易受到攻击。新的攻击向量包括基于分词器的推断和注意力信号分析(AttenMIA)。
提示注入攻击如何泄漏个人数据?
提示注入操纵LLM代理在任务执行期间泄漏观察到的个人数据。在银行代理场景中,攻击在数据流出方面达到约20%的成功率,效用降低15-50%。虽然安全对齐防止密码泄漏,但其他个人数据仍然容易受到攻击。
anonym.legal如何帮助防护LLM隐私攻击?
anonym.legal通过以下方式提供真正的匿名化:(1)PII检测 - 285+实体类型,包括名称、位置、日期、写作模式,(2)替换 - 用格式有效的替代品替换真实PII,(3)删除 - 完全删除敏感信息,(4)可逆加密 - 对授权访问使用AES-256-GCM。与LLM击败的假名化不同,真正的匿名化删除LLM用于匿名破坏的信号。