Pentagon Raises National Security Concerns About Anthropic's Foreign Workers
Objective Facts
The Pentagon raised national security concerns about Anthropic's use of foreign workers, including from China, in a March 17 court filing by Pentagon undersecretary Emil Michael. Michael stated that Anthropic employs a large number of foreign nationals to build and support its LLM products, including many from the People's Republic of China, and that this "increases the degree of adversarial risk should those employees comply with the PRC's National Intelligence Law." The Pentagon argues its concerns with Anthropic go beyond disagreements over domestic mass surveillance and autonomous weapons and extend to broader national security risks. Anthropic was an early adopter of operational security techniques such as research compartmentalization and audit trails, in part because it was the first AI lab to partner with the Pentagon; last year the company disrupted a first-of-its-kind AI-orchestrated Chinese cyber espionage campaign on its platform and banned PRC-based users from its services. A hearing on whether to grant Anthropic temporary relief is set for March 24.
Left-Leaning Perspective
Left-leaning outlets and critics have contextualized the Pentagon's foreign worker concern as a supplementary argument in what they view as retaliatory action. Foreign-born workers make up a significant share of top AI and tech talent in the U.S., with Chinese-origin researchers constituting roughly 38-40% of top AI talent at U.S. institutions as of 2023, making the Pentagon's sudden focus on foreign workers appear inconsistent with broader industry practices. Critics acknowledge that insider threats are a genuine and difficult problem, but note that, ironically, Anthropic is widely considered the most serious and proactive lab in the industry at policing insider threats, from foreign nationals and otherwise. Left-leaning analysis emphasizes that the foreign worker argument emerged only after Anthropic's refusal to remove ethical guardrails on military AI use, suggesting it is tactical rather than foundational and designed to provide national security cover for what critics frame as ideological retaliation. Outlets like TIME and CNN have reported that dozens of scientists and researchers at OpenAI and Google DeepMind filed amicus briefs supporting Anthropic, arguing that the supply chain risk designation could harm U.S. competitiveness in the industry and hamper public discussion of the risks and benefits of AI, and that Anthropic's red lines raise legitimate concerns. Left-leaning perspectives also stress what they see as the Pentagon's inconsistency: using Claude in active military operations in Iran while simultaneously branding it a security threat, and ignoring that other U.S. AI companies also employ significant numbers of foreign workers without similar designations.
Right-Leaning Perspective
Right-leaning or defense-focused outlets frame the foreign worker concern as a legitimate security risk that merits elevated scrutiny in classified defense work. Michael's declaration emphasized that the use of foreign workers increases adversarial risk should those employees comply with the PRC's National Intelligence Law, invoking concerns about the extraterritorial reach of Chinese law and its compulsory cooperation requirements. Defense-focused outlets and analysts note that Michael explained the concern in stark terms, stating, "We can't have a company that has a different policy preference pollute the supply chain so our war fighters are getting ineffective weapons, ineffective body armor, ineffective protection," and argue that this revealed the deeper concern inside the defense establishment. The Pentagon's argument is that foreign workers with potential obligations to a hostile state represent an inherent operational vulnerability in wartime, separate from the broader governance question of who controls military AI policy. Right-aligned outlets highlight the Pentagon's statement that risks with other major U.S. AI companies that use foreign workers are reduced by those labs' technical and security assurances and by their consistently responsible and trustworthy behavior when working with the Pentagon. On this framing, Anthropic's unique resistance to Pentagon demands creates heightened risk when combined with its foreign workforce composition.
Deep Dive
The Pentagon's March 17 foreign worker concern represents an escalation in its legal strategy against Anthropic, adding a new dimension to a dispute fundamentally rooted in governance and military AI use policy. The broader Pentagon-Anthropic conflict began in early 2026 when contract negotiations broke down over Anthropic's two red lines: refusal to permit fully autonomous weapons without human control, and refusal to enable mass surveillance of Americans. The Pentagon sought unrestricted access for "all lawful purposes." The foreign worker angle, introduced only after Anthropic sued to challenge the supply chain risk designation, suggests the Pentagon is building multiple lines of legal argument to defend its unprecedented designation of an American company as a security risk. Critically, the foreign worker concern highlights a genuine tension in the AI industry. Chinese-origin researchers constitute roughly 38-40% of top AI talent at U.S. institutions as of 2023. If foreign workforce composition is genuinely disqualifying for defense work, the Pentagon's selective application to Anthropic alone invites scrutiny. However, the Pentagon's argument that Anthropic's stated resistance to Pentagon authority creates elevated risk when workers have potential obligations to China is not frivolous—it reflects real counterintelligence concerns. The PRC's National Intelligence Law does grant Beijing legal authority to compel cooperation from its citizens. Yet the Pentagon's own framing acknowledges that risks with other major U.S. AI companies that use foreign workers are reduced by technical and security assurances of those labs' leadership and their consistently responsible behavior when working with the Pentagon. This suggests the Pentagon views the risk as contextual—elevated by Anthropic's governance stance, not by foreign workers per se. 
What remains unclear is whether the foreign worker concern reflects a genuine national security calculus or serves as an additional legal hook to support the designation. Anthropic was an early adopter of operational security techniques such as research compartmentalization and audit trails, in part because it was the first AI lab to partner with the Pentagon, and last year the company disrupted a first-of-its-kind AI-orchestrated Chinese cyber espionage campaign on its platform. This record complicates the Pentagon's assertion that Anthropic poses unusual risk. The March 24 hearing will likely determine whether courts accept the foreign worker argument as an independent basis for the supply chain risk designation or view it as supplementary to the core dispute over military AI governance.