Anthropic loses AI court battle with Trump administration

A federal appeals court on Wednesday refused to block the Pentagon from blacklisting artificial intelligence laboratory Anthropic in its dispute over military AI deployment limits.

Objective Facts

A federal appeals court on Wednesday refused to block the Pentagon from blacklisting artificial intelligence laboratory Anthropic, rejecting the company's request for an order that would shield it from the fallout over how the Pentagon could deploy its Claude chatbot in fully autonomous weapons and potential surveillance of Americans. Anthropic signed a $200 million contract with the Pentagon in July, but as the company began negotiating Claude's deployment on the DOD's GenAI.mil platform in September, talks stalled: the DOD wanted Anthropic to grant the Pentagon unfettered access to its models across all lawful purposes, while Anthropic wanted assurance that its technology would not be used for fully autonomous weapons or domestic mass surveillance. In late February, Defense Secretary Pete Hegseth declared Anthropic a supply chain risk, a label historically reserved for foreign adversaries; shortly before that, President Donald Trump had written a Truth Social post ordering federal agencies to 'immediately cease' all use of Anthropic's technology. In a separate San Francisco case, U.S. District Judge Rita Lin ruled that the Trump administration had overstepped its bounds by labeling Anthropic a supply chain risk, but the Washington appeals court took the opposite position on Wednesday.

Left-Leaning Perspective

Progressive legal analysis, exemplified by U.S. District Judge Rita Lin's March ruling, characterized the Pentagon's blacklisting of Anthropic as 'classic illegal First Amendment retaliation,' with Lin invoking the term 'Orwellian' to describe branding an American company as a national security threat based on its publicly stated AI safety positions. Left-leaning outlets and commentators emphasized that internal Pentagon memos revealed the designation was triggered by Anthropic's 'increasingly hostile manner through the press'—specifically CEO Dario Amodei's public essay opposing unrestricted military use—rather than by any actual security assessment, which was completed only after the decision had already been made. Left-aligned defense policy experts argued that AI can behave unpredictably and that militaries should push for meaningful human control over targeting decisions, noting that Anthropic uniquely prioritizes safety safeguards and is unlikely to abandon its principled red lines on autonomous weapons and mass surveillance. Coverage emphasized that the dispute erupted after Anthropic refused Pentagon demands for unrestricted use, with the tech sector largely rallying to Anthropic's defense in opposition to what progressives frame as retaliation against the company's ethics stance. Left-leaning coverage largely omits or downplays the appeals court's concerns about military necessity during active conflict with Iran, focusing instead on the constitutional and retaliation arguments rather than engaging with the government's framing of military operational risk.

Right-Leaning Perspective

The Trump administration's position, articulated by Acting Attorney General Todd Blanche, framed the appeals court decision as 'a resounding victory for military readiness,' with Blanche arguing that 'military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company'. Justice Department lawyers contended the designation stems from Anthropic's refusal to accept standard contract terms rather than retaliation for AI safety views, and warned that unresolved uncertainty over how Claude can be used could disrupt active military operations during ongoing conflict. Conservative commentary, exemplified by George Landrith of the Frontiers of Freedom Institute, characterized the Pentagon as prioritizing genuine civilian and soldier safety while Anthropic focuses on hypothetical technicalities that 'conveniently bolster its image,' arguing that 'our adversaries would be delighted to hear that one of the world's most powerful and vocal private companies is pressuring the Pentagon to stay woke' and that 'Anthropic is using its PR chops to have this moment both ways, shaming the Pentagon while fighting to continue cashing its checks'. Right-leaning outlets noted that the Pentagon explicitly stated it has 'no interest in using AI to conduct mass surveillance of Americans (which is illegal)' and does not want to develop autonomous weapons without human involvement, suggesting Anthropic's concerns are unfounded or performative. Right-leaning coverage minimizes or omits the evidence from internal Pentagon memos showing the designation was triggered by public statements rather than security assessment, and does not address the split court decisions or Judge Lin's detailed concerns about procedural fairness.

Deep Dive

The underlying dispute began when Anthropic signed a $200 million Pentagon contract in July 2025. By September, negotiations over Claude's deployment had stalled: the DOD demanded unrestricted access across 'all lawful purposes,' while Anthropic insisted on excluding fully autonomous weapons and mass surveillance, and the two sides failed to reach agreement. Anthropic's usage policies prohibiting those applications had been part of the company's founding principles since 2021 and had governed its Pentagon relationship for months without incident, until Defense Secretary Pete Hegseth issued a February 27 deadline to comply or face consequences. This context reveals the dispute was not about Anthropic suddenly imposing new restrictions but about the Pentagon's escalating demand for contractual change.

Both sides can claim partial vindication from the current split court rulings. U.S. District Judge Rita Lin ruled that the Trump administration overstepped its bounds in labeling Anthropic a supply chain risk, prompting the administration to remove the stigmatizing labels, a win for Anthropic's First Amendment argument in one forum. Yet the appeals court held that 'the equitable balance cuts in favor of the government' because 'on one side is a relatively contained risk of financial harm to a single private company, on the other side is judicial management of how the Department of War secures vital AI technology during an active military conflict,' a win for the government's military necessity argument in another. The appellate panel's framing of Anthropic's interests as 'primarily financial' rather than speech-based cuts against the company's First Amendment theory, though the court did agree to expedite the underlying case.
What remains unresolved: whether the designation, which internal Pentagon memos reveal was triggered by Anthropic's 'increasingly hostile manner through the press' rather than by a completed security assessment, ultimately constitutes actionable retaliation under the law; and whether Anthropic's safety-based refusal to grant 'all lawful purposes' access represents a legitimate exercise of corporate judgment or improper interference with military prerogative. Oral arguments are scheduled for May 19, and the fundamental question remains: can a private company constrain government use of its technology on principled safety grounds, or does military necessity during wartime override such constraints?

Apr 9, 2026 · Updated Apr 10, 2026

Left says: Judge Rita Lin called the Pentagon's actions 'classic illegal First Amendment retaliation' and 'Orwellian' for branding an American company as a national security threat. Left-leaning coverage frames Anthropic as defending constitutional rights and responsible AI safety against government overreach.
Right says: Acting Attorney General Todd Blanche celebrated the appeals court win as 'a resounding victory for military readiness,' arguing that 'military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company'. Right-leaning framing emphasizes military necessity and government prerogative over corporate restrictions.
✓ Common Ground
Some observers across ideological lines acknowledged Anthropic's complex position: judges on the appeals panel suggested Anthropic actually benefited financially from the surge in app store downloads amid its rift with the Pentagon, while also recognizing that forcing the military to continue relations with 'an unwanted vendor of critical AI services' during ongoing military conflict posed genuine operational concerns.
Both the appeals court and Anthropic's own statements acknowledged that Anthropic 'will likely suffer some degree of irreparable harm' from the designation, indicating agreement on the severity of the company's exposure even where courts disagreed on remedy.
A number of commentators across the spectrum acknowledge that the core tension—between a private company's right to set usage policies and the military's need for operational control—represents a genuinely novel and difficult legal and policy question without clear precedent.

◈ Tone Comparison

Left-leaning coverage employs constitutional and civil liberties language, with Judge Lin's use of 'Orwellian' establishing a tone of alarm about government overreach. Right-leaning coverage uses military and security imperative framing, with Trump's references to 'Leftwing nut jobs' establishing a tone of political and cultural conflict beyond the narrow legal dispute.