Trump administration orders agencies to drop Anthropic contracts

A federal judge ruled that Anthropic's supply-chain-risk designation is likely contrary to law and arbitrary, blocking the Trump administration's ban.

Objective Facts

Trump wrote on Truth Social that most agencies must immediately stop using Anthropic's AI, though he gave the Pentagon six months to phase out technology already embedded in military platforms. The Pentagon had insisted that Anthropic give the military unrestricted access to its AI models, and Defense Secretary Pete Hegseth set a deadline of 5:01 p.m. Friday for the company to agree to drop its guardrails. Hegseth also wrote that, effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. In late March, U.S. District Judge Rita Lin ruled that the resulting supply-chain-risk designation is likely both contrary to law and arbitrary and capricious. The ruling temporarily blocks both Trump's presidential directive and Hegseth's designation, though the underlying dispute over AI safety guardrails versus unrestricted military use remains unresolved.

Left-Leaning Perspective

Left-leaning outlets focused heavily on what they characterized as political retaliation and abuse of government power. Sen. Mark Warner, the Virginia Democrat who is vice chair of the Senate Select Committee on Intelligence, condemned Trump's action, saying the efforts by Trump and Hegseth to intimidate and disparage Anthropic pose an enormous risk to U.S. defense readiness and to the willingness of the U.S. private sector and academia to work with the intelligence community and Defense Department consistent with their own values and legal ethics. Sen. Elizabeth Warren opened an investigation into the Pentagon's decision to designate Anthropic a supply chain risk and is also probing OpenAI's contract with the Defense Department, which she says appears to lack necessary safeguards to protect Americans from government surveillance. Sens. Ed Markey and Chris Van Hollen said the Pentagon's threats to punish an American AI company for refusing to surrender basic safeguards represent a chilling abuse of government power.

Progressive commentators argued the administration was weaponizing national security designations for ideological reasons. One legal analysis argued Anthropic was excommunicated from doing business with the government in brazen violation of its First Amendment and due process rights because it declined to allow its AI to be used for hypothetically lawful purposes like mass surveillance of Americans and fully autonomous lethal weapons. Dean Ball, a primary author of the White House AI Action Plan who has since left the administration, called the Pentagon's move attempted corporate murder, arguing it strikes at a core principle of the American republic about private property that has traditionally been especially dear to conservatives.

Left-leaning coverage also emphasized that Anthropic's safeguards were minimal and reasonable. Retired Air Force Gen. Jack Shanahan said Claude is already widely used across the government, including in classified settings, and that Anthropic's red lines are reasonable, noting that frontier AI systems are not ready for prime time in national security settings, particularly not for fully autonomous weapons.

Right-Leaning Perspective

Right-leaning outlets and Trump administration officials framed the dispute as Anthropic refusing to provide necessary military capability and imposing ideological constraints on national defense. Defense Secretary Pete Hegseth claimed Anthropic delivered a master class in arrogance and betrayal, stating the Pentagon must have full, unrestricted access to Anthropic's models for every lawful purpose in defense of the Republic. Pentagon spokesman Sean Parnell echoed that demand, saying Anthropic's unwillingness to go along with the military's terms was jeopardizing critical military operations and potentially putting warfighters at risk.

The administration explicitly framed Anthropic's safety requirements as ideological and anti-American. Hegseth said Department of War AI will not be woke and will work for them, claiming they are building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge, and he characterized Anthropic's demands for AI safety and guarantees of nonmilitary use as seemingly woke. In his memo Accelerating America's Military AI Dominance, Hegseth wrote that the Pentagon must not employ AI models that incorporate ideological tuning and must use models free from usage-policy constraints that may limit lawful military applications.

Right-leaning supporters also emphasized competitive market dynamics. Elon Musk and other Tech Right figures saw the dispute as a competitive battle, with Musk wanting xAI to be first in line for government contracts after Hegseth announced a deal with xAI to use Grok under the Pentagon's preferred all-lawful-use terms.

Deep Dive

Anthropic's Claude was the first AI model to work on the military's classified networks, and the company struck a contract worth up to $200 million with the Pentagon last summer. As the company began negotiating Claude's deployment on the Pentagon's GenAI.mil platform in September, talks stalled: the Pentagon wanted Anthropic to grant unrestricted access for all lawful purposes, while Anthropic wanted assurance that its technology would not be used for fully autonomous weapons or domestic mass surveillance. Trump's comments came just over an hour before the Pentagon's deadline for Anthropic to allow unrestricted military use of its AI technology or face consequences.

The dispute reveals a fundamental disagreement about power and responsibility in AI deployment. The Pentagon argues that the military already operates under its own rules and oversight, and cannot have mission decisions constrained by a vendor's terms of service, particularly in gray areas where definitions of surveillance and autonomy can be contested. Anthropic argues that without guardrails to block AI-powered mass surveillance of Americans or weapons that can strike without human input, there is a risk of Claude making fatal mistakes or operating in a way that clashes with democratic values. Notably, in an apparent about-face, the Pentagon appeared to accept from OpenAI the same safety terms it rejected for Anthropic, with OpenAI CEO Sam Altman writing that his company's most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems, and that the Pentagon agrees with these principles.

What to watch: A final verdict in the case could still be months away, and negotiations have made progress with the White House but not with the Pentagon. Whether the Trump administration appeals the court ruling, and whether political pressure on military contractors and other government agencies continues, remain unresolved, as does the underlying question of whether safety guardrails on frontier AI will be imposed by companies or mandated by government.

Mar 26, 2026 · Updated Apr 17, 2026

Left says: Democratic Sen. Mark Warner called the preliminary injunction barring Anthropic's supply-chain-risk designation a victory for national security, arguing the Trump administration's actions were retaliatory and politically motivated rather than based on genuine security analysis.
Right says: Trump declared the United States will never allow a radical left woke company to dictate how the military fights, stating the Leftwing nut jobs at Anthropic made a disastrous mistake trying to strong-arm the Department of War.
✓ Common Ground
Both sides acknowledged that a growing number of workers at Anthropic's top rivals, OpenAI and Google, voiced support for Anthropic CEO Dario Amodei's stand in open letters and other forums.
Some Trump administration officials, even those who see Anthropic as woke, acknowledge Anthropic's tools as best-in-class AI for national security purposes.
Both the Pentagon and members of Congress maintain that the Department does not intend to conduct mass surveillance or to use autonomous weapons without humans in the loop.

◈ Tone Comparison

Left-leaning outlets emphasized the judicial language about unconstitutionality and First Amendment retaliation, highlighting the ruling's findings that the Trump administration's actions are invalidated by statutory limits and constitutional protections, and that the Pentagon made a pretextual effort to disguise punishment as national security enforcement, replacing even the fig leaf of administrative process with social media broadsides targeting Anthropic's corporate values and perceived ideology. Right-leaning coverage used more direct, confrontational language, with Trump declaring the nation will never allow a radical left woke company to dictate terms and warning that we will decide the fate of our country, not some out-of-control radical left AI company.