Readable Minds: LLM Poker Agents Spontaneously Develop Theory of Mind Through Extended Play — But Only With Memory


A fascinating new study finds that large language model agents playing Texas Hold'em poker progressively develop Theory of Mind (ToM) — the ability to model others' mental states — but only when equipped with persistent memory.

The Experiment

The researchers used a 2×2 factorial design crossing memory (present/absent) with domain knowledge (present/absent), with five replications per cell (N = 20 experiments, ~6,000 agent-hand observations).
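The design above can be sketched as a simple enumeration of conditions; the variable names here are illustrative, not taken from the paper:

```python
from itertools import product

# Hypothetical sketch of the study's 2x2 factorial design:
# two factors (memory, domain knowledge), each present or absent,
# crossed with five replications per cell.
factors = {
    "memory": [True, False],
    "domain_knowledge": [True, False],
}
replications = 5

experiments = [
    {"memory": m, "domain_knowledge": d, "replication": r}
    for m, d in product(factors["memory"], factors["domain_knowledge"])
    for r in range(1, replications + 1)
]

print(len(experiments))  # 4 cells x 5 replications = 20 experiments
```

Each of the 20 runs would then contribute roughly 300 agent-hand observations to reach the ~6,000 total reported.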

Key Finding: Memory Is Necessary and Sufficient

What Emerges

Why This Matters

Previous ToM tests for LLMs used static vignettes — "Sally puts a marble in a basket, Anne moves it..." This study shows ToM can emerge dynamically through interaction, not just be tested through prompts.

The memory requirement is particularly significant: it suggests that session-bounded AI assistants (which lose context between conversations) cannot develop genuine Theory of Mind, regardless of their underlying capabilities.

Implications

↗ Original source · 2026-04-07T00:00:00.000Z