Anthropic Denies It Could Sabotage Claude During Wartime in Escalating Pentagon Feud
Anthropic has filed a court declaration stating that it has no ability to manipulate, shut down, or alter Claude once the US military deploys it, directly contradicting the Trump administration's claims that the AI developer could sabotage its tools during active combat operations.
The Allegations
The Department of Defense alleged that Anthropic retains the ability to tamper with its AI tools during wartime operations. Specifically, the allegations suggested Anthropic could:
- Remotely disable Claude in the field
- Alter AI behavior during combat operations
- Influence military decision-making through model manipulation
Anthropic's Response
Thiyagu Ramasamy, Anthropic's head of public sector, stated in a Friday court filing:
"Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations."
The company argues that once Claude is deployed on military infrastructure, Anthropic has no remote access or control mechanisms.
The Bigger Context
This legal dispute is part of a broader confrontation:
- Anthropic sued the Pentagon: Challenging DoD's use of Claude for combat applications
- Safety commitments: Anthropic's charter commits the company to avoiding harm, but the military argues this creates a conflict of interest
- AI autonomy debate: The core question is whether AI developers should retain any control over how their models are used after deployment
Why This Matters
The case will set precedents for:
- AI military contracts: Whether companies can impose ethical constraints on military use
- Remote kill switches: Whether AI developers should maintain the ability to disable deployed systems
- Liability: Who is responsible if AI systems malfunction in combat
Source: WIRED