USC Study Finds LLMs Are Standardizing Human Expression and Subtly Influencing How We Think
A new study from USC Dornsife warns that large language models are homogenizing how people speak, write, and reason. If left unchecked, this trend risks reducing humanity's collective wisdom and adaptive capacity, argue computer scientists and psychologists in an opinion paper published in Trends in Cognitive Sciences.
The Core Finding
People naturally differ in linguistic style, perspective, and reasoning strategy. "When these differences are mediated by the same LLMs, their distinct linguistic style, perspective, and reasoning strategies become homogenized, producing standardized expressions and thoughts across users," says study first author Zhivar Sourati, a PhD student at USC Viterbi.
The researchers found that:
- LLM outputs are less varied than human-generated writing
- LLMs tend to reflect the language, values, and reasoning styles of WEIRD societies (Western, Educated, Industrialized, Rich, Democratic)
- While individuals generate more ideas when working with LLMs, groups that rely on LLMs produce fewer and less creative ideas than groups pooling purely human effort
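The claim that LLM outputs are "less varied" can be made concrete with a standard lexical-diversity proxy such as distinct-n (the fraction of unique n-grams across a set of texts). The sketch below is illustrative only, not the paper's methodology; the example sentences are invented for demonstration.

```python
def distinct_n(texts, n=2):
    """Fraction of unique n-grams across a set of texts.

    A common lexical-diversity proxy: lower values mean the
    texts reuse the same phrasing (i.e., are more homogeneous).
    """
    ngrams = []
    for t in texts:
        tokens = t.lower().split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

# Hypothetical samples: varied human phrasings vs. templated model outputs.
human = [
    "the hike was brutal but the view paid for every step",
    "we slogged uphill for hours and laughed the whole way",
    "my legs gave out twice yet the summit felt unreal",
]
model = [
    "the hike was challenging but the view was rewarding",
    "the hike was challenging but the scenery was rewarding",
    "the hike was difficult but the view was rewarding",
]

print(distinct_n(human))  # higher score: more varied phrasing
print(distinct_n(model))  # lower score: more standardized phrasing
```

On these toy samples the human set scores higher than the model set, matching the pattern the researchers describe at scale.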
Beyond Language: Reasoning Homogenization
The concern extends beyond writing style. Studies show that:
- Opinion alignment — After interacting with biased LLMs, people's opinions shift toward the model's perspective
- Linear reasoning bias — LLMs favor chain-of-thought reasoning, reducing the use of intuitive or abstract approaches that may be more efficient
- Indirect influence — Even non-users feel pressure to conform when everyone around them thinks and speaks in LLM-influenced patterns
The Feedback Loop
"Because LLMs are trained to capture and reproduce statistical regularities in their training data, which often overrepresent dominant languages and ideologies, their outputs often mirror a narrow and skewed slice of human experience," says Sourati.
This creates a dangerous feedback loop: LLMs trained on homogeneous data produce homogeneous outputs, which then shape human expression, which becomes the training data for the next generation of models.
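The dynamics of that loop can be sketched with a toy simulation (not from the paper): a "model" fits the mean and spread of its training data, but samples with a mild bias toward the mean, mimicking how LLMs overrepresent dominant patterns. When each generation trains on the previous generation's outputs, diversity collapses geometrically. The shrink factor of 0.9 is an arbitrary assumption for illustration.

```python
import random
import statistics

def train_and_generate(data, n_samples, shrink=0.9):
    """Toy 'model': fit mean/spread to the data, then sample with a
    mild bias toward the mean (shrink < 1), mimicking a model that
    overrepresents the dominant patterns in its training set."""
    mu = statistics.mean(data)
    sigma = statistics.pstdev(data)
    return [random.gauss(mu, sigma * shrink) for _ in range(n_samples)]

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]  # generation 0: diverse "human" data
spreads = [statistics.pstdev(data)]
for generation in range(10):
    data = train_and_generate(data, 5000)  # model outputs become the next training set
    spreads.append(statistics.pstdev(data))

print([round(s, 2) for s in spreads])  # spread shrinks with every generation
```

Even this crude setup shows the qualitative point: each retraining round narrows the distribution, so later generations see only a thin slice of the original variety.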
Recommendations
The researchers call for:
- Diverse training data — Incorporating more real-world diversity into LLM training sets
- Reasoning diversity — Building models that support multiple reasoning styles, not just linear chain-of-thought
- Awareness — Users should be conscious of how AI assistance may be narrowing their thinking
- Regulatory consideration — Policy makers should account for cognitive homogenization risks in AI governance
The paper was led by Morteza Dehghani, professor of psychology and computer science at USC Dornsife, and published March 11, 2026 in Cell Press's Trends in Cognitive Sciences.