Anthropic Denies It Could Sabotage Claude During Wartime in Escalating Pentagon Feud

March 22, 2026 · 1 min read
Anthropic filed a court declaration denying it can manipulate Claude after military deployment, contradicting Pentagon claims of potential wartime sabotage in an escalating legal feud.

Anthropic has filed a court declaration stating it has no ability to manipulate, shut down, or alter Claude once the US military deploys it — directly contradicting the Trump administration's claims that the AI developer could potentially sabotage its tools during active combat operations.

The Allegations

The Department of Defense accused Anthropic of potentially tampering with its AI tools during wartime operations, suggesting the company could manipulate, shut down, or otherwise alter Claude during active combat.
Anthropic's Response

Thiyagu Ramasamy, Anthropic's head of public sector, stated in a Friday court filing:

"Anthropic has never had the ability to cause Claude to stop working, alter its functionality, shut off access, or otherwise influence or imperil military operations."

The company argues that once Claude is deployed on military infrastructure, Anthropic has no remote access or control mechanisms.

The Bigger Context

This legal dispute is part of a broader confrontation between Anthropic and the Trump administration over the military's use of the company's AI tools.

Why This Matters

The case will set precedents for:

  1. AI military contracts: whether companies can impose ethical constraints on military use of their models
  2. Remote kill switches: whether AI developers should retain the ability to disable systems after deployment
  3. Liability: who bears responsibility if an AI system malfunctions in combat

Source: WIRED | Full Report
