Technology & Science
U.S. designates Anthropic a supply-chain risk and rewrites $200M Pentagon AI deal with OpenAI
Between Feb 28 and Mar 3, 2026, the Trump administration ordered all federal agencies to phase out Anthropic’s Claude within six months and, in parallel, unveiled an amended Pentagon contract that keeps OpenAI’s models in classified networks but bars their “intentional” use for domestic surveillance.
Focusing Facts
- Pres. Trump’s Feb 28, 2026 directive halted new federal use of Anthropic software and set a six-month deadline for existing users to replace it.
- On Mar 2, 2026, OpenAI CEO Sam Altman published new contract text stating “the AI system shall not be intentionally used for domestic surveillance of U.S. persons,” revising the company’s June 2025 DoD agreement, worth up to $200 million.
- Sensor Tower data show Claude hit #1 in U.S. App Store downloads on Mar 1-2, 2026, while one-star reviews of ChatGPT spiked 775% the same weekend.
Context
Washington’s clash with Anthropic echoes earlier moments when private tech firms tried to set ethical red lines for government buyers—most notably Google employees’ 2018 revolt that ended the Pentagon’s Project Maven contract, and IBM’s 1943 punch-card systems that quietly aided wartime logistics despite ethical qualms. A century-long pattern is visible: military demand repeatedly pulls frontier commercial technologies—railroads, aviation, nuclear physics, cyberspace—into the state’s coercive apparatus, while each generation of inventors belatedly discovers how little contractual language can restrain sovereign power once systems are embedded. The supply-chain-risk label extends the post-9/11 security state practice of using procurement rules to police vendor behavior (e.g., Huawei 2019), signaling that AI developers who insist on usage vetoes may be treated like foreign adversaries. Long-term, the episode matters less for which chatbot the Pentagon runs this year than for the precedent: it normalizes government retaliation against firms that demand ethical guardrails, potentially chilling self-regulation just as autonomous-weapon capabilities mature. Whether this accelerates a military-AI arms race or spurs a future Geneva-style treaty will shape the next 50-100 years of human-machine conflict governance.
Perspectives
Tech outlets and civil-liberties commentators
e.g., The Independent, WAOW, and Inc. — Frame Anthropic’s refusal to allow domestic surveillance or autonomous weapons as a principled, safety-first stand that has boosted public trust and downloads for its Claude chatbot. Stories spotlight moral courage while glossing over Anthropic’s earlier hype of military uses and its commercial interest in differentiating itself from OpenAI.
Defense-establishment and government-aligned sources
e.g., U.S. News & World Report, NewsBytes, CBS News — Portray Anthropic’s contractual guardrails as an unacceptable constraint that could jeopardize real-time combat operations and national security, justifying the “supply-chain risk” label and federal ban. Reporting leans on Pentagon officials’ warnings, assuming military necessity and brushing past civil-liberties concerns or the legality of potential surveillance.
OpenAI-friendly tech business press
e.g., SiliconANGLE, BetaNews — Present OpenAI’s quick contract revision with the Pentagon as a constructive compromise that adds explicit safeguards against domestic surveillance while enabling continued military collaboration. Coverage emphasizes OpenAI’s responsiveness and downplays critics’ doubts about loopholes, which aligns with the company’s need to reassure customers and investors after the backlash.