LLMs Can Unmask Pseudonymous Users at Scale with Up to 90% Precision

2026-03-22 · 2 min read
Research shows LLMs can deanonymize pseudonymous users across platforms with 90% precision and 68% recall, threatening the fundamental assumption that pseudonymity provides adequate online privacy.


Researchers have demonstrated that large language models can deanonymize pseudonymous users across multiple social media platforms with alarming accuracy — up to 90% precision and 68% recall. The findings, published in a peer-reviewed paper, represent a fundamental threat to online privacy as we know it.
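To make the headline figures concrete: 90% precision means that of the cross-platform matches the model proposes, nine in ten are correct; 68% recall means it finds about two-thirds of all true account pairs. A minimal sketch, with illustrative counts chosen only to reproduce those figures (not taken from the paper's actual results):

```python
# Precision: fraction of proposed matches that are correct.
# Recall: fraction of true account pairs that are found.
def precision(tp: int, fp: int) -> float:
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    return tp / (tp + fn)

# Hypothetical numbers: the model proposes 100 matches, 90 correct,
# out of 132 true cross-platform pairs overall.
tp, fp, fn = 90, 10, 42
print(f"precision = {precision(tp, fp):.2f}")  # 0.90
print(f"recall    = {recall(tp, fn):.2f}")     # 0.68
```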

The Research

The study tested LLMs' ability to match posts across platforms (e.g., linking a Hacker News account to a LinkedIn profile) by analyzing writing style, topic preferences, and behavioral patterns.

Why This Changes Everything

Online pseudonymity has long rested on an implicit threat model: people assumed that while total anonymity is impossible, pseudonymity provided adequate protection, because targeted deanonymization would require extensive manual effort. LLMs invalidate this assumption by making that effort cheap and scalable.

Technical Approach

The researchers' framework works by:

  1. Cross-platform data collection: Gathering posts from multiple platforms
  2. Writing style analysis: LLMs analyze syntax, vocabulary, sentence structure, and topic patterns
  3. Behavioral fingerprinting: Posting times, interaction patterns, topic preferences
  4. Statistical correlation: Matching behavioral and stylistic features across accounts
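The steps above can be sketched with a toy stylometric matcher. This is a minimal illustration using a few hand-picked surface features and cosine similarity; the paper's actual framework relies on LLM-derived analysis, which this does not reproduce.

```python
import math
import re

def style_features(text: str) -> list[float]:
    """Extract a tiny hand-picked stylistic fingerprint (illustrative only)."""
    words = re.findall(r"[A-Za-z']+", text)
    n = max(len(words), 1)
    avg_word_len = sum(len(w) for w in words) / n
    vocab_richness = len({w.lower() for w in words}) / n  # type/token ratio
    punct_rate = sum(text.count(c) for c in ",;:!?") / max(len(text), 1)
    return [avg_word_len, vocab_richness, punct_rate]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_match(query: str, candidates: dict[str, str]) -> str:
    """Return the candidate account whose style most resembles the query."""
    qf = style_features(query)
    return max(candidates, key=lambda name: cosine(qf, style_features(candidates[name])))
```

A real system would add the behavioral signals from steps 3 and 4 (posting times, interaction patterns) as extra feature dimensions before correlating accounts.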

Implications for Platforms

Social media platforms and forums face difficult choices in responding to this capability.

What Users Can Do

While no method is foolproof against determined LLM-based analysis, users can still reduce their stylistic and behavioral fingerprint across accounts.

Source: Ars Technica | Research Paper
