Undocumented Bug Found in Apollo 11 Guidance Computer Code After 57 Years
Using Claude AI and a behavioral specification language called Allium, researchers have discovered a previously unknown bug in the Apollo Guidance Computer (AGC) code that had eluded detection for 57 years. The flaw — a resource lock leak in the gyroscope control system — could have silently disabled the spacecraft's ability to realign its guidance platform.
The Bug
The AGC manages its Inertial Measurement Unit (IMU) through a shared resource lock called LGYRO. When the computer needs to torque the gyroscopes, it acquires LGYRO at the start and releases it when all three axes have been torqued.
The normal path works correctly: acquire lock → torque → release lock. But there's another exit path — caging — that doesn't release the lock.
Caging is an emergency measure that physically clamps the IMU's gimbals to protect the gyroscopes from damage. When the IMU is caged while a torque is in progress, the code exits via a routine called BADEND, which skips the two instructions that clear the LGYRO lock:
CAF ZERO
TS LGYRO
Just two missing instructions.
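The control-flow flaw can be sketched in modern terms. This is a minimal Python model, not the original assembly; the names `lgyro`, `torque_gyros`, and `imu_caged` are illustrative stand-ins:

```python
import threading

# Hypothetical model of the AGC routine's exit paths. The real LGYRO is a
# flag word in erasable memory; a threading.Lock stands in for it here.
lgyro = threading.Lock()

def torque_gyros(axes, imu_caged):
    """Torque each axis in turn, mirroring the flawed control flow."""
    lgyro.acquire()           # take LGYRO at the start of the job
    for axis in axes:
        if imu_caged():
            return False      # BADEND path: exits WITHOUT releasing LGYRO
        # ...pulse the torquing coils for this axis...
    lgyro.release()           # normal end: the CAF ZERO / TS LGYRO step
    return True
```

If caging occurs mid-torque, the early return leaves `lgyro` held forever, just as the missing `CAF ZERO` / `TS LGYRO` pair leaves the AGC's flag set.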
The Consequences
Once LGYRO is stuck, every subsequent attempt to torque the gyros finds the lock held, sleeps waiting for a wake signal that never comes, and hangs. This would block fine alignment, drift compensation, and manual gyro torque.
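The downstream symptom can be modeled the same way. In this sketch (again with illustrative names) a later job tries to take the leaked lock; on the AGC it would sleep forever on a wake signal that never arrives, so here a timeout stands in for the hang:

```python
import threading

lgyro = threading.Lock()
lgyro.acquire()   # simulate the LGYRO leaked by a caging abort

def try_fine_align():
    # A later job (fine alignment, drift compensation, manual torque)
    # tries to take LGYRO; the timeout models the permanent sleep.
    got = lgyro.acquire(timeout=0.1)
    if got:
        lgyro.release()
    return got
```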
Historical Context
During Apollo 11, Michael Collins orbited alone in the Command Module Columbia. Every two hours he disappeared behind the Moon. During each pass, he ran Program 52 — a star-sighting alignment that kept the guidance platform pointed correctly. Had this bug locked out realignment, any platform drift would have gone uncorrected, and the engine burn to bring him home would have pointed the wrong way.
How It Was Found
The team used Allium, an open-source behavioral specification language, to distill 130,000 lines of AGC assembly into 12,500 lines of specifications derived from the code itself. The resulting specifications surfaced the defect directly — something that decades of manual reading and emulation had missed.
Broader Implications
- Formal methods at scale — AI can derive behavioral specifications from massive codebases impractical to analyze manually
- Resource management bugs — Lock leaks remain one of the most dangerous concurrency bugs in safety-critical code
- Legacy system verification — Many critical infrastructure systems have never been formally verified
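The lock-leak class of bug has a standard structural defense in modern languages: release on every exit path. A minimal Python sketch, with illustrative names, using try/finally so that even an abort path cannot leak the lock:

```python
import threading

lgyro = threading.Lock()

def torque_gyros(axes, imu_caged):
    """Torque each axis; the lock is released on every exit path."""
    lgyro.acquire()
    try:
        for axis in axes:
            if imu_caged():
                return False   # caging abort: finally still runs
            # ...torque this axis...
        return True
    finally:
        lgyro.release()        # executes on normal and abort exits alike
```

The same effect is usually written `with lgyro:` in idiomatic Python; the explicit try/finally makes the contrast with the AGC's skipped release visible.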