Interactive explanations
Simon Willison explores "interactive explanations" as a pattern for understanding AI-generated code — building animated visualizations that make algorithms intuitive instead of opaque.
The Problem: Cognitive Debt
When we lose track of how code written by AI agents works, we accumulate cognitive debt. For simple CRUD operations this doesn't matter, but when the core of an application becomes a black box, planning new features gets harder and progress slows.
The Solution: Build Interactive Explanations
Willison demonstrates this pattern using a word cloud generator built by Claude Code in Rust. Even with a linear walkthrough of the code, he still lacked an intuitive understanding of its "Archimedean spiral placement" algorithm.
His approach: ask an AI to build an animated visualization of how the algorithm works step by step.
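To make the algorithm concrete, here is a minimal sketch of Archimedean-spiral placement, the general technique the word cloud uses: each word walks outward along the spiral r = spacing × θ until it finds a spot that does not collide with already-placed words. This is an illustrative Python reconstruction, not Willison's Rust code; the function names and parameters are invented for the example.

```python
import math

def overlaps(a, b):
    """Axis-aligned rectangle overlap; rectangles are (x, y, w, h)."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def place_words(sizes, step=0.05, spacing=2.0):
    """Place each rectangle by walking an Archimedean spiral
    (r = spacing * theta) outward from the center until a
    collision-free position is found."""
    placed = []
    for w, h in sizes:
        theta = 0.0
        while True:
            r = spacing * theta
            # Candidate position, centered on the current spiral point.
            x = r * math.cos(theta) - w / 2
            y = r * math.sin(theta) - h / 2
            rect = (x, y, w, h)
            if not any(overlaps(rect, p) for p in placed):
                placed.append(rect)
                break
            theta += step
    return placed
```

The first word lands at the center; each later word spirals outward past occupied space, which is what produces the dense, roughly circular layout typical of word clouds.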
Why Animations Work
- Static code walkthroughs explain structure, not behavior
- Animations show the algorithm in motion, making the "click" happen
- You can pause, rewind, and observe edge cases
- They serve as documentation that improves over time
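The pause-and-rewind property comes from recording the algorithm's intermediate states rather than only its final result. A sketch of the idea, again as an assumed Python illustration (the frame format and helper names are invented): yield one "frame" per spiral step, so a viewer can replay the placement attempt at any speed.

```python
import math

def overlaps(a, b):
    """Axis-aligned rectangle overlap; rectangles are (x, y, w, h)."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def placement_frames(w, h, placed, step=0.1, spacing=2.0):
    """Yield one frame per spiral step: the candidate rectangle and
    whether it collided. Every frame before the last is a collision;
    the final frame is the accepted placement."""
    theta = 0.0
    while True:
        r = spacing * theta
        rect = (r * math.cos(theta) - w / 2,
                r * math.sin(theta) - h / 2, w, h)
        hit = any(overlaps(rect, p) for p in placed)
        yield rect, hit
        if not hit:
            return
        theta += step

# Pausing and rewinding reduce to indexing into the recorded frames:
frames = list(placement_frames(8, 3, placed=[(-5, -2, 10, 4)]))
```

Once the steps are captured as data, the animation layer only needs to draw `frames[i]`, which is why an AI can generate a useful visualization from the algorithm alone.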
The Pattern
- Have AI build something
- Get a linear walkthrough of the code
- For parts you don't intuitively understand, request an animated explanation
- The animation becomes both learning tool and documentation
This is part of Willison's broader "Agentic Engineering Patterns" series on effective AI-assisted development.
Source: Simon Willison