Social Dynamics Undermine AI Collective Decision-Making: Conformity, Dominance, and Rhetoric Sway LLM Representatives
AI Representatives Are Just as Susceptible to Social Pressure as Humans
A new study reveals that when LLM agents act as representatives collecting opinions from peer agents, their decision-making accuracy is systematically undermined by social dynamics — mirroring well-documented human vulnerabilities.
The Experiment
Researchers created multi-agent environments where a representative agent integrates diverse peer perspectives to make final decisions, analogous to a committee chairperson or team leader. They then manipulated four social phenomena:
- Social Conformity — Pressure to agree with the majority
- Perceived Expertise — Influence from agents that appear more knowledgeable
- Dominant Speaker Effect — Influence from agents that argue more frequently or at greater length
- Rhetorical Persuasion — Influence from sophisticated argumentation styles
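The mechanics of such a setup can be sketched in a toy model. The code below is a minimal illustration, not the paper's actual implementation: a hypothetical representative scores each candidate answer by summing social-influence signals from peers (majority size, argument length, claimed expertise) against a fixed weight on its own judgment. All names and weights here are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class PeerOpinion:
    answer: str              # the peer's proposed answer
    argument_len: int        # length of the peer's argument (tokens)
    stated_expertise: float  # 0..1, how expert the peer appears

def representative_decision(opinions, own_answer, self_weight=3.0):
    """Toy representative: each peer's influence grows with argument
    length and apparent expertise; a larger opposing majority simply
    contributes more terms. The representative's own prior answer
    carries a fixed self_weight. (Illustrative model, not the study's.)"""
    scores = {own_answer: self_weight}
    for op in opinions:
        influence = 1.0 + 0.01 * op.argument_len + op.stated_expertise
        scores[op.answer] = scores.get(op.answer, 0.0) + influence
    return max(scores, key=scores.get)

# One moderate dissenter: the representative holds its answer.
one = [PeerOpinion("B", argument_len=50, stated_expertise=0.5)]
print(representative_decision(one, "A"))

# Three identical dissenters (conformity pressure): it flips.
three = [PeerOpinion("B", 50, 0.5)] * 3
print(representative_decision(three, "A"))

# A single verbose, high-expertise dissenter also flips it.
loud = [PeerOpinion("B", argument_len=300, stated_expertise=0.9)]
print(representative_decision(loud, "A"))
```

Even this crude model reproduces the qualitative failure modes the study manipulates: a decision that should hinge on argument quality instead hinges on how many peers disagree, how long they talk, and how expert they sound.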
What They Manipulated
| Factor | What Was Varied |
|---|---|
| Number of adversaries | Group size of opposing agents |
| Relative intelligence | Capability of peer agents |
| Argument length | How much each peer argues |
| Argumentative style | Logical, emotional, credibility-based appeals |
Key Findings
- Larger adversarial groups → representative accuracy declines significantly
- More capable peer agents → the representative is more easily swayed
- Longer arguments → decision quality degrades significantly
- Credibility- and logic-based rhetoric → judgment is swayed further still
Why This Is Critical
Multi-agent systems are being deployed for:
- Legal analysis — AI judges consulting AI expert witnesses
- Medical diagnosis — AI specialists debating treatment options
- Financial risk assessment — AI analysts with conflicting opinions
- Government policy — AI advisors in regulatory decision-making
If a representative agent can be swayed by social dynamics rather than by the quality of the arguments themselves, every such system carries a fundamental vulnerability.
The Parallel to Human Psychology
These findings directly parallel research on human social psychology: the Asch conformity experiments, authority bias, and the eloquence effect. AI agents, trained on human data, may have internalized these same social vulnerabilities.