Appeals Court Upholds Pentagon Curbs on Anthropic AI Amid Safety Dispute

A federal appeals court denied Anthropic's request to temporarily block the Pentagon's designation of the AI firm as a supply-chain risk, keeping procurement restrictions in place. This decision highlights national security tensions around AI amid ongoing litigation. Separately, Anthropic's new Mythos AI model for detecting cybersecurity vulnerabilities is restricted to a consortium of Big Tech firms due to potential risks if released publicly.

PoliticalOS

Thursday, April 9, 2026 · Tech

The Pentagon's designation of Anthropic as a supply-chain risk has been allowed to stand, illustrating how national security priorities can override an AI developer's safety restrictions even for domestic companies. At the same time, Anthropic's controlled release of its powerful Mythos model demonstrates that the same firms advocating caution also recognize the technology's capacity to democratize sophisticated cyberattacks. Readers should understand this as an emerging regulatory vacuum: without clear rules balancing innovation, defense needs, and risk mitigation, both government action and industry self-policing will remain contested and incomplete.

What outlets missed

Most outlets covered either the court ruling or the Mythos model in isolation, missing their connection: Anthropic faces government penalties for insisting on safety limits on existing models, yet is itself tightly controlling a new model precisely because its vulnerability-finding power poses broad risks if released openly.

Coverage downplayed or omitted the San Francisco federal judge's March 2026 preliminary injunction, which explicitly found likely First Amendment retaliation based on the timing of the designation after Anthropic's public stance on AI ethics. Analyses also ignored that the supply-chain risk label, while rare for domestic firms, is authorized under statute for any entity presenting potential threats and is not reserved exclusively for foreign companies like Huawei. Finally, reports underplayed Anthropic's full financial commitment to Project Glasswing, including $4 million in direct donations to open-source security projects, which extends benefits beyond the select consortium and undercuts narratives of mere favoritism toward Big Tech.

National security needs have again prevailed over a major AI developer's push for limits on how its technology can be deployed. A federal appeals court on April 9, 2026, denied Anthropic's emergency request to pause the Pentagon's designation of the company as a supply-chain risk. The ruling keeps restrictions in place while litigation continues, preventing defense contractors from using Anthropic's Claude models in Pentagon-related work.

The central unresolved question is whether the government can compel unrestricted access to advanced AI for military and intelligence purposes, or whether developers retain authority to enforce safety guardrails against uses such as mass domestic surveillance or fully autonomous lethal weapons. Anthropic had secured a contract with the Defense Department last year, but negotiations over its terms broke down after the company refused to remove preexisting safeguards, according to statements from both sides and court filings reviewed by Reuters and Axios. The Pentagon viewed the restrictions as incompatible with standard contract terms requiring access for all lawful purposes.