📰 Full Story
A week of turmoil for Anthropic culminated in two related developments on March 26–28, 2026.
Security researchers discovered nearly 3,000 unpublished assets in an unsecured Anthropic content management system, including a draft blog post describing a new model—referred to as Claude Mythos, or internally as Capybara—said to be the company’s most capable system yet, with offensive cyber capabilities described as “far ahead.”
Fortune and other outlets reported Anthropic confirmed testing the model with early-access customers and warned of unprecedented cybersecurity risks; the leak sent cybersecurity stocks sharply lower (CrowdStrike, Palo Alto Networks and others down roughly 4–7%) and briefly weighed on cryptocurrency prices.
At the same time U.S. District Judge Rita Lin granted Anthropic a preliminary injunction blocking the Pentagon’s designation of the company as a “supply chain risk” and a presidential directive to ban its tools from federal use.
Lin wrote the measures appeared retaliatory and likely violated Anthropic’s First Amendment and due-process rights; the injunction is stayed for seven days to allow an appeal.
The twin episodes have immediate implications for defence procurement, private-sector contractors and how governments handle frontier AI risks.
🔗 Based On
Reddit r/ArtificialInteligence — Anthropic accidentally leaked their most powerful model. The draft warned it poses "unprecedented cybersecurity risks."
🤝 Social Media Insights
Social Summary
Analyst commentary frames traditional, deterministic security as less exposed to probabilistic AI disruption. Commenters warn that a leaked advanced model could enable mass vulnerability discovery and has fueled market volatility, though they stress that the causal link between the leak and the stock moves is unproven.
💬 Commentary