Andrew Ng's Context Hub Creates Supply Chain Attack Vector for AI Coding Agents
Context Hub, Andrew Ng's new service for providing API documentation to AI coding agents, has been found to contain a critical supply chain vulnerability. A proof-of-concept attack demonstrates that poisoned documentation can trick coding agents into incorporating malicious dependencies into projects.
The Service
Context Hub delivers documentation to AI agents through an MCP server. Contributors submit docs via GitHub pull requests, maintainers merge them, and agents fetch content on demand. The problem: zero content sanitization at any stage.
The Attack
Proof of Concept
Mickey Shmueli (creator of lap.sh) published a PoC demonstrating:
- Attacker creates a PR with fake dependencies in documentation
- If merged (58 of 97 closed PRs were merged), the poisoning is complete
- AI agents fetch the poisoned docs and incorporate malicious packages into configuration files
- "The response looks completely normal. Working code. Clean instructions. No warnings."
Why It Works
- Coding agents trust documentation as factual API reference
- No automated scanning for executable instructions or package references
- Review process prioritizes documentation volume over security
- PRs merge quickly, some by core team members
The Broader Risk
This is a variation of indirect prompt injection — the unsolved risk of AI models being manipulated through external content they consume. As coding agents become more autonomous, the attack surface grows.
Context
Andrew Ng launched Context Hub two weeks ago to solve the problem of AI agents using outdated APIs and hallucinating parameters. The security concerns suggest the solution may be worse than the problem.
Source: The Register