Judge questions Pentagon's efforts to cut Anthropic AI from classified systems
A federal judge called the Pentagon's treatment of Anthropic "troubling" as the AI company urged the court to pause the Trump administration's designation of the company as a supply chain risk.
Objective Facts
A federal judge on Tuesday called the Pentagon's treatment of Anthropic "troubling" as the AI company urged the court to pause the Trump administration's designation of the company as a supply chain risk. The back-and-forth revolves around Anthropic's push to bar the military from using its AI model Claude to surveil Americans or power fully autonomous weapons. Anthropic signed a $200 million contract with the Pentagon in July and was the first AI lab to deploy its technology across the agency's classified networks, but as the company began negotiating Claude's deployment on the DOD's GenAI.mil AI platform in September, talks stalled over how the military could use the models. U.S. District Judge Rita Lin stated "I don't know if it's murder, but it looks like an attempt to cripple Anthropic," referring to three Trump administration actions: President Trump's ban on Anthropic, Defense Secretary Pete Hegseth's requirement that Pentagon contractors cut commercial ties with the company, and its designation as a supply chain risk. Lin said she expects to issue an order on Anthropic's motion in the next few days.
Left-Leaning Perspective
A federal judge in California hammered the Pentagon on Tuesday for its decision to label Anthropic a supply chain risk, signaling skepticism over what she described as a "troubling" move by the federal government. During Tuesday's hearing, U.S. District Judge Rita Lin suggested the Defense Department's determination "looks like an attempt to cripple Anthropic" and said her specific concern is "whether Anthropic is being punished for criticizing the government's contracting position in the press." Legal experts believe Anthropic is likely to prevail, pointing to a February 27 post on X in which Hegseth said contractors are prohibited from "commercial activity with Anthropic." That post "went far beyond what the law allows him to say," according to Charlie Bullock, a senior research fellow at the Institute for Law & AI, who noted that "the Pentagon hadn't done any of the things required before declaring a supply chain risk under the statute" and that "that was clearly illegal, and now the government, in its filings, is admitting that." The hearing centered on Anthropic's request that the court temporarily halt the Pentagon's supply chain risk designation. The company argues the designation, typically reserved for foreign adversaries, will cause it "irreparable harm," and accuses the Trump administration of retaliating against what it believes are "protected viewpoints" regarding how its AI technology can safely and reliably be used.
The case spotlights a question that will carry significant weight as Washington grapples with how to regulate AI: Who gets to decide the limits, risks and potential misuse of the rapidly evolving technology, the innovators themselves or the federal government? Joe Hoefer, the chief AI officer at Monument Advocacy, said "the real significance here isn't just the action against Anthropic, but the precedent it sets for how Washington will arbitrate tensions between AI developers and the national security community."
Right-Leaning Perspective
The Trump administration argued in court filings that the move is "lawful and reasonable" and not a violation of free speech. DOJ attorneys said Anthropic's terms of service "have become unacceptable to the executive branch" after the AI firm pressed for specific restrictions on autonomous weapons and domestic mass surveillance, maintaining that the federal government can use its AI services for "any lawful purpose." The DOJ suggested Anthropic could try to disable its technology or "preemptively alter" the behavior of its model during warfighting, which the Pentagon sees as an "unacceptable risk to national security." In a filing, the White House pushed back on Anthropic's claims that the government's actions violated free speech protections, saying the dispute stems from contract negotiations and national security concerns rather than retaliation. It argued that "Anthropic is not likely to succeed" in showing the actions were retaliatory and that "the record reflects that the President and the Secretary were motivated by concerns about Anthropic's potential future conduct if it retained access to the Government's IT infrastructure," concerns it called "unrelated to Anthropic's speech." The government's position frames Anthropic as having drawn the ire of President Donald Trump and Defense Secretary Pete Hegseth in February after it refused to allow the Pentagon to use its Claude AI model for autonomous lethal warfare and the mass surveillance of Americans. The Pentagon then formally designated the company a "supply-chain risk" to national security, a serious designation typically reserved for companies with ties to America's foreign adversaries.
Deep Dive
The dispute traces back to July 2025, when Anthropic signed a $200 million Pentagon contract and became the first AI lab to deploy its technology across the agency's classified networks. In September 2025, talks over deploying Claude on the DOD's GenAI.mil platform stalled: the Pentagon insisted on unfettered access to the company's technology for all lawful purposes, while Anthropic refused to allow unrestricted military use of its Claude AI model for autonomous lethal warfare and mass surveillance of Americans. After negotiations failed in February 2026, Trump ordered federal agencies to "immediately cease" using Anthropic's technology, and Hegseth designated the company a supply chain risk on March 3, the first time that designation, typically reserved for companies connected to foreign adversaries, has been applied to a U.S. company. Legal experts believe Anthropic is likely to prevail, citing Hegseth's X post that went "far beyond what the law allows." Notably, court filings reveal that on March 4, the day after the Pentagon finalized the supply chain risk designation, Under Secretary Emil Michael emailed Anthropic CEO Dario Amodei saying the two sides were "very close" on the exact issues the government now cites as national security threats, suggesting the government may have used the designation as leverage in negotiations rather than as a genuine security measure. During the hearing, government lawyer Eric Hamilton appeared to openly contradict Hegseth's earlier statement that no military contractor may conduct any business with Anthropic, telling the court, "I'm not aware of any authorities that would permit DOD to categorically bar contractors from using a company's products or services for non-DOD work," and later saying "I don't know" why Hegseth made that claim. Judge Lin said she expects to issue an order on Anthropic's preliminary injunction motion in the next few days.
If granted, the AI startup would be able to continue doing business with government contractors and federal agencies as its lawsuit plays out in court; without it, the company has said it could lose billions of dollars in business and suffer further reputational harm. The outcome will likely set a precedent for how the government can regulate AI companies that refuse to meet national security demands and how much discretion private companies retain over the uses of their technology.