Technology & Science
Pentagon Ultimatum to Anthropic After Signing Musk’s Grok for Classified Use
On 24 Feb 2026, Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon, warning that the firm’s $200 million Claude contract would be cancelled and the company blacklisted unless it drops safety guardrails that block mass U.S. surveillance and fully autonomous weapons.
Focusing Facts
- Hegseth’s 9 Jan 2026 memo demanded all AI vendors allow “all lawful purposes,” triggering renegotiation of Anthropic’s July 2025, $200 million pilot that put Claude on classified DoD networks.
- Hours before the summons, the Pentagon confirmed a deal with Elon Musk’s xAI making Grok the second model approved for classified systems, signaling a replacement path if Claude is pulled.
- Anthropic disclosed on 23 Feb 2026 that three Chinese firms had scraped its data, underscoring its claim that it already polices foreign exploitation more tightly than rivals.
Context
The clash echoes Robert Oppenheimer’s 1945 fight over nuclear weapons governance: scientists sought safeguards while the War Department prized unfettered use. Just as the 1993 Clipper Chip debate pitted cryptographers against federal demands for “lawful access,” today’s collision shows that each leap in information technology revives the centuries-old question of who sets the rules of force. The Pentagon’s push for “all lawful uses” fits a longer U.S. pattern, from the founding of DARPA in 1958 to Project Maven in 2017, of absorbing private innovation as quickly as geopolitical rivals (now China) demand. Should safety-minded labs like Anthropic lose this standoff, the precedent could chill future attempts by inventors to impose ethical brakes; if they prevail, it may mark the first time a major defense supplier hard-codes limits on a core war-fighting technology. Either outcome will echo for decades, because whoever dictates the guardrails on early general-purpose battlefield AI will shape the norms, and the arms races, of the next hundred years.
Perspectives
Left-leaning U.S. newspapers
e.g., The New York Times, The Washington Post — They portray Anthropic as a safety-conscious firm resisting the Trump Pentagon’s push to drop guardrails, warning that the standoff shows how military pressure could erode ethical limits on AI. Coverage is sympathetic to Anthropic and skeptical of the Trump administration, possibly glossing over operational needs the military cites and presenting Anthropic’s motives as purely principled.
Business & defense-industry press
e.g., The Wall Street Journal, Investing.com — They stress that Anthropic’s refusal jeopardizes a lucrative $200 million deal and threatens U.S. readiness, while highlighting rivals like xAI and Google that accept the Pentagon’s “all lawful uses” standard. Stories center on contract risk, investor impact and national-security imperatives, tending to minimize the civil-liberties and autonomous-weapons concerns Anthropic raises.
Russian state-aligned media
RT — Depicts the U.S. Department of War strong-arming Anthropic as it rushes to weaponize chatbots, underscoring American hypocrisy about tech ethics. Framing amplifies U.S. discord to undermine Washington’s moral authority, omits Russia’s own AI militarization and leans on sensational language to paint the U.S. as aggressive.