📰 Full Story
A U.S. federal judge on March 26–27 granted a preliminary injunction that temporarily bars the Trump administration and the Pentagon from enforcing a presidential order and a Defense Department designation that had labelled AI firm Anthropic a "supply chain risk." Judge Rita F. Lin of the Northern District of California found the actions appeared punitive and likely to chill public debate, describing the government's move as an "attempt to cripple Anthropic" and raising First Amendment concerns.
The injunction freezes both a White House directive ordering agencies to stop using Anthropic’s Claude models and a rare Pentagon label historically reserved for firms in adversary states.
The dispute stems from negotiations over a $200m contract and the company’s refusal to allow unrestricted military uses — including mass domestic surveillance and fully autonomous lethal weapons.
The ruling keeps Anthropic's tools in use by federal agencies and contractors while litigation continues; the government has signalled it may seek emergency appellate relief.
Major tech firms, AI researchers and retired military officials have filed amicus briefs supporting Anthropic, saying the designation sets a risky precedent for industry-government relations.
🔗 Based On
- PitchBook (Global Venture Capital News & Trends): "Anthropic's surprise supporters warn of broader implications of DoD move"
- BBC News (Artificial intelligence, 3 days ago): "Judge rejects Pentagon's attempt to 'cripple' Anthropic" — A federal judge told the government it could not immediately enforce a ban on Anthropic's tools.
- Home of AI and Artificial Intelligence News: "Has Anthropic Gained the Upper Hand in Pentagon Lawsuit?"

💬 Commentary