Judge questions Pentagon's efforts to cut Anthropic AI from classified systems

A federal judge called the Pentagon's treatment of Anthropic "troubling" as the AI company urged the court to pause the Trump administration's designation of the company as a supply chain risk.

Objective Facts

A federal judge on Tuesday called the Pentagon's treatment of Anthropic "troubling" as the AI company urged the court to pause the Trump administration's designation of the company as a supply chain risk. The back-and-forth revolves around Anthropic's push to bar the military from using its AI model Claude to surveil Americans or power fully autonomous weapons. Anthropic signed a $200 million contract with the Pentagon in July and was the first AI lab to deploy its technology across the agency's classified networks, but as the company began negotiating Claude's deployment on the DOD's GenAI.mil AI platform in September, talks stalled over how the military could use the models. U.S. District Judge Rita Lin stated "I don't know if it's murder, but it looks like an attempt to cripple Anthropic," referring to three Trump administration actions: President Trump's ban on Anthropic, Defense Secretary Pete Hegseth's requirement that Pentagon contractors cut commercial ties with the company, and its designation as a supply chain risk. Lin said she expects to issue an order on Anthropic's motion in the next few days.

Left-Leaning Perspective

A federal judge in California hammered the Pentagon on Tuesday for its decision to label Anthropic a supply chain risk, signaling skepticism over what she described as a "troubling" move by the federal government. U.S. District Judge Rita Lin suggested during the hearing that the Defense Department's determination "looks like an attempt to cripple Anthropic," and said her specific concern is "whether Anthropic is being punished for criticizing the government's contracting position in the press." Legal experts believe Anthropic is likely to prevail, pointing to a February 27 post on X in which Defense Secretary Pete Hegseth said contractors are prohibited from "commercial activity with Anthropic." That post "went far beyond what the law allows him to say," according to Charlie Bullock, a senior research fellow at the Institute for Law & AI, who noted that "the Pentagon hadn't done any of the things required before declaring a supply chain risk under the statute" and that "that was clearly illegal, and now the government, in its filings, is admitting that." The hearing centered on Anthropic's request that the court temporarily halt the Pentagon's supply chain risk designation. The company argues the designation, typically reserved for foreign adversaries, will cause it "irreparable harm," and accuses the Trump administration of retaliating against what it believes are "protected viewpoints" regarding how its AI technology can safely and reliably be used.
The case spotlights a question that will carry significant weight as Washington grapples with how to regulate AI: Who gets to decide the limits, risks and potential misuse of the rapidly evolving technology, the innovators themselves or the federal government? "The real significance here isn't just the action against Anthropic, but the precedent it sets for how Washington will arbitrate tensions between AI developers and the national security community," said Joe Hoefer, the chief AI officer at Monument Advocacy.

Right-Leaning Perspective

The Trump administration, in court filings, argued the move is "lawful and reasonable" and not a violation of free speech. DOJ attorneys said Anthropic's terms of service "have become unacceptable to the executive branch" after the AI firm pressed for specific restrictions on autonomous weapons and domestic mass surveillance, maintaining the federal government must be able to use its AI services for "any lawful purpose." The DOJ suggested Anthropic could try to disable its technology or "preemptively alter" the behavior of its model during warfighting, which the Pentagon sees as an "unacceptable risk to national security." In a filing, the White House pushed back on Anthropic's claims that the government's actions violated free speech protections, saying the dispute stems from contract negotiations and national security concerns rather than retaliation. The filing argues "Anthropic is not likely to succeed" in showing the actions were retaliatory, and that "the record reflects that the President and the Secretary were motivated by concerns about Anthropic's potential future conduct if it retained access to the Government's IT infrastructure," concerns it calls "unrelated to Anthropic's speech." The government's position frames Anthropic as having drawn the ire of President Donald Trump and Defense Secretary Pete Hegseth in February after it refused to allow the Pentagon to use its Claude AI model for autonomous lethal warfare and the mass surveillance of Americans; the Pentagon then formally designated the company a "supply-chain risk" to national security, a serious designation typically reserved for companies with ties to America's foreign adversaries.

Deep Dive

The dispute traces back to July 2025, when Anthropic signed a $200 million Pentagon contract and became the first AI lab to deploy its technology across the agency's classified networks. In September 2025, talks over deployment on the DOD's GenAI.mil platform stalled: the Pentagon insisted on unfettered access to the company's technology for all lawful purposes, while Anthropic refused to allow unrestricted military use of its Claude AI model for autonomous lethal warfare and mass surveillance of Americans. After negotiations failed in February 2026, Trump ordered federal agencies to "immediately cease" using Anthropic's technology, and Hegseth designated it a supply chain risk on March 3, a designation typically reserved for companies connected to foreign adversaries and applied here to a U.S. company for the first time.

Legal experts believe Anthropic is likely to prevail, citing Hegseth's X post that went "far beyond what the law allows." Notably, court filings reveal that on March 4, the day after the Pentagon finalized the supply chain risk designation, Under Secretary Emil Michael emailed Anthropic CEO Dario Amodei saying the two sides were "very close" on the exact issues the government now cites as national security threats, suggesting the designation may have served as negotiating leverage rather than a genuine security measure. During the hearing, government lawyer Eric Hamilton appeared to openly contradict Hegseth's earlier statement that no military contractor may conduct any business with Anthropic, telling the court, "I'm not aware of any authorities that would permit DOD to categorically bar contractors from using a company's products or services for non-DOD work," and later saying "I don't know" why Hegseth made that claim. Judge Lin said she expects to issue an order on Anthropic's preliminary injunction motion in the next few days.

If granted, the AI startup would be able to continue doing business with government contractors and federal agencies as its lawsuit plays out in court; without it, the company has said it could lose billions of dollars in business and suffer further reputational harm. The outcome will likely set a precedent for how the government can regulate AI companies that refuse to meet national security demands and how much discretion private companies retain over the uses of their technology.



Mar 24, 2026 · Updated Mar 25, 2026
What's Going On

Left says: The hearing gave the AI company the chance to argue that the Pentagon blacklisted it as a national security risk in retaliation for its safe-use requirements. A U.S. judge said on Tuesday that the Pentagon's blacklisting of Anthropic looked like an effort to punish the artificial intelligence lab for going public with its concerns about AI safety in the military.
Right says: The Trump administration is arguing that the move is "lawful and reasonable" and not a violation of free speech, with DOJ attorneys saying Anthropic's terms of service "have become unacceptable to the executive branch" after the AI firm pressed for specific restrictions on autonomous weapons and domestic mass surveillance.
✓ Common Ground
Both sides agree the Pentagon can choose not to use Anthropic, as noted when Judge Lin called the underlying dispute a "fascinating public policy debate" but not the focus of the case.
The Pentagon has stated it has no interest in using Anthropic's technology for mass surveillance or fully autonomous weapons, and argues those uses are already illegal and banned under existing military policies.
Both Anthropic CEO Dario Amodei and the government acknowledge that the DOD, not private companies, makes military decisions, with Amodei stating the company understands this principle.

◈ Tone Comparison

Left-leaning outlets use language emphasizing the judge's skepticism—words like "hammered," "troubling," "retaliation," and "punishment"—presenting this as a potential constitutional violation targeting a company for its values. Right-leaning or administration-supportive coverage frames the dispute in technical and contractual terms, emphasizing words like "lawful," "national security concerns," and potential future risks, treating it as a legitimate security decision rather than retaliation. The administration's filings emphasize Anthropic's conduct (refusing contractual terms) rather than its speech.

✕ Key Disagreements
Whether the Pentagon's actions constitute illegal retaliation
Left: Left-leaning coverage and the judge argue the designation appears to be retaliation against Anthropic for expressing protected viewpoints about AI safety, rather than a legitimate national security measure tailored to the stated concerns.
Right: The Trump administration argues the actions were motivated by concerns about Anthropic's potential future conduct if it retained access to government IT infrastructure, which are unrelated to Anthropic's speech, and no one has restricted Anthropic's expressive activity.
Whether Anthropic has the ability to sabotage military systems
Left: Anthropic disputes the government's technical claims about its ability to interfere with military operations, explaining that once Anthropic's technology is deployed, the company has no remote access or control.
Right: The DOJ argues Anthropic could theoretically try to disable its technology or "preemptively alter" the behavior of its model during warfighting, which the Pentagon views as an "unacceptable risk to national security."
Scope and appropriateness of the supply chain risk designation
Left: Anthropic's lawyer argued that applying a supply chain risk designation to Anthropic "is something that has never been done with respect to [an] American company," and that the designation involves "a very narrow authority" that "doesn't apply here, and it's not a normal way to respond to the concerns that have been articulated by the other side."
Right: The government argues that if Anthropic's terms became unacceptable, "an AI provider might gain influence over how DOD conducts operations and which missions it chooses," justifying the supply chain risk designation as an appropriate remedy.
The legal significance of the administration's social media posts
Left: The judge appeared concerned about the discrepancy between Hegseth's social media statement that "no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic" (marked as "final") and the government's later argument that social media posts are merely announcements of pending action.
Right: The Pentagon's lawyer argued that the social media posts are not legally binding, with the judge expressing skepticism about this position.