Telling AI It's an Expert Makes It Worse: Persona-Based Prompting Backfires for Technical Tasks

2026-03-29 · 1 min read
A University of Southern California study finds that telling an AI model to role-play as an expert — like "You are an expert full-stack developer" — actually makes it perform worse on technical tas...

A University of Southern California study finds that telling an AI model to role-play as an expert — like "You are an expert full-stack developer" — actually makes it perform worse on technical tasks like coding and math, despite improving results for writing and safety tasks.

The Research

Key Findings

Why It Fails for Technical Tasks

Telling a model it's an expert doesn't add any expertise; it just activates role-playing behavior that interferes with the model's ability to retrieve knowledge from its pretraining data. The model spends tokens on "being an expert" rather than on computing the actual answer.
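
For concreteness, here is a minimal sketch of the two prompt framings the study compares, written in the common OpenAI-style chat message format; the sample question and the comment about the endpoint are illustrative assumptions, not details from the paper.

```python
# Illustrative only: two ways to frame the same coding question.
# The study's finding is that the persona framing tends to hurt on
# technical tasks, so the plain framing is the safer default there.

question = "Write a function that returns the nth Fibonacci number."

# Persona-based prompt (the framing the study finds can backfire for code and math)
persona_messages = [
    {"role": "system", "content": "You are an expert full-stack developer."},
    {"role": "user", "content": question},
]

# Direct prompt (no role-play framing)
plain_messages = [
    {"role": "user", "content": question},
]

# Either list can be passed to an OpenAI-style chat completions endpoint,
# e.g. client.chat.completions.create(model=..., messages=plain_messages).
```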

Where It Helps

The PRISM Solution

The researchers propose PRISM (Persona Routing via Intent-based Self-Modeling) to automatically determine whether persona-based prompting will help or hurt for a given task.
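The article does not describe PRISM's internals, so the following is only a toy sketch of the routing idea under stated assumptions: classify a task's intent and prepend a persona only for the open-ended tasks where the study found personas help. The keyword heuristic, function names, and example persona below are invented for illustration and are not the authors' method.

```python
from enum import Enum


class TaskIntent(Enum):
    TECHNICAL = "technical"    # coding, math: skip the persona
    OPEN_ENDED = "open_ended"  # writing, safety: persona may help


# Hypothetical keyword heuristic standing in for PRISM's intent model,
# which the article does not detail.
TECHNICAL_HINTS = ("code", "function", "bug", "compile", "prove", "solve", "equation")


def classify_intent(task: str) -> TaskIntent:
    """Crude stand-in classifier: flag tasks that look like coding or math."""
    lowered = task.lower()
    if any(hint in lowered for hint in TECHNICAL_HINTS):
        return TaskIntent.TECHNICAL
    return TaskIntent.OPEN_ENDED


def build_messages(task: str, persona: str = "You are an expert assistant.") -> list[dict]:
    """Prepend a persona only when the task is not classified as technical."""
    if classify_intent(task) is TaskIntent.TECHNICAL:
        return [{"role": "user", "content": task}]
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": task},
    ]


# Technical task: no persona is added.
print(build_messages("Fix the bug in this function: ..."))
# Open-ended task: the persona system message is prepended.
print(build_messages("Draft a polite rejection email."))
```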

Practical Advice

Source: The Register, USC preprint (Hu, Rostami, Thomason)
