Telling AI It's an Expert Makes It Worse: Persona-Based Prompting Backfires for Technical Tasks
A University of Southern California study finds that telling an AI model to role-play as an expert — like "You are an expert full-stack developer" — actually makes it perform worse on technical tasks like coding and math, despite improving results for writing and safety tasks.
The Research
Key Findings
- Writing and safety tasks: Persona-based prompting improves performance
- Math and coding tasks: Persona-based prompting degrades accuracy
- MMLU benchmark: Expert persona scored 68.0% vs 71.6% for base model
- Root cause: Persona prefixes activate instruction-following mode that competes with factual recall
Why It Fails for Technical Tasks
Telling a model it's an expert adds no expertise; it just triggers role-playing behavior that interferes with the model's ability to recall facts learned during pretraining. The model spends effort on performing the expert role rather than on computing the actual answer.
Where It Helps
- Safety: A "Safety Monitor" persona boosted attack refusal rates from 53.2% to 70.9%
- Alignment: in LLM-as-judge evaluations, persona-guided responses are preferred
- Creative tasks: Writing quality benefits from role-playing context
The PRISM Solution
The researchers propose PRISM (Persona Routing via Intent-based Self-Modeling) to automatically determine whether persona-based prompting will help or hurt for a given task.
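The paper's routing idea can be illustrated with a toy sketch. This is not PRISM itself, whose method is not detailed here; it is a hypothetical router that simply applies the study's headline finding: prepend a persona for writing and safety tasks, and ask directly for math and coding tasks. The task-type labels and the `route_prompt` helper are assumptions made for illustration.

```python
# Illustrative sketch only; not the actual PRISM method.
# Toy rule: personas tend to help writing/safety tasks and
# hurt math/coding tasks, per the study's findings.

PERSONA_HELPS = {"writing", "safety", "alignment"}   # assumed labels
PERSONA_HURTS = {"math", "coding", "factual_qa"}     # assumed labels

def route_prompt(task_type: str, question: str,
                 persona: str = "You are an expert assistant.") -> str:
    """Prepend a persona only for task types where it tends to help."""
    if task_type in PERSONA_HELPS:
        return f"{persona}\n\n{question}"
    # For technical or factual tasks, ask directly with no role-play.
    return question

print(route_prompt("coding", "Implement binary search in Python."))
print(route_prompt("writing", "Draft a product announcement."))
```

A real system would need a classifier to infer the task type from the prompt itself; the hard part PRISM addresses is making that decision automatically.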
Practical Advice
- Skip generic expert personas ("You are an expert programmer")
- Use specific project requirements instead ("This project uses React + TypeScript")
- Persona guidance helps with UI preferences, architecture choices, and tool selection
- For factual tasks, just ask directly without role-playing setup
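The advice above amounts to swapping role-play for concrete context. A minimal sketch, with hypothetical prompt strings as examples:

```python
# Hypothetical prompt snippets illustrating the advice above.

# Avoid: a generic expert persona for a technical task.
bad_prompt = (
    "You are an expert programmer.\n"
    "Write a function that parses ISO 8601 dates."
)

# Prefer: concrete project requirements, no role-play.
good_prompt = (
    "This project uses React + TypeScript with strict mode enabled.\n"
    "Write a function that parses ISO 8601 dates."
)
```

The second prompt gives the model usable constraints (framework, language, compiler settings) instead of asking it to perform a role.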
Source: The Register, USC preprint (Hu, Rostami, Thomason)