Trump administration plans major expansion of AI model security testing
The Trump administration announced agreements with Google DeepMind, Microsoft, and xAI for pre-deployment testing of AI models, marking a sharp reversal from its hands-off stance after concerns over Anthropic's Mythos cybersecurity model.
Objective Facts
On May 5-6, 2026, the Center for AI Standards and Innovation (CAISI) announced new agreements with Google DeepMind, Microsoft, and xAI to conduct pre-deployment evaluations of frontier AI models before public release. The announcement builds on 2024 partnerships with OpenAI and Anthropic, with those agreements renegotiated to reflect Commerce Secretary Howard Lutnick's directives. The policy shift was driven by national security concerns over Anthropic's Mythos model and its ability to identify and exploit cybersecurity vulnerabilities. National Economic Council Director Kevin Hassett disclosed that the White House is studying an executive order creating an FDA-like approval process for AI models. This marks a dramatic reversal from the Trump administration's initial deregulatory approach, which was championed by former AI czar David Sacks.
Left-Leaning Perspective
Rumman Chowdhury, CEO of Humane Intelligence and former U.S. Science Envoy for AI, characterized the administration's shift as "a 180" for an administration that has "very explicitly been anti-any sort of regulation and also has explicitly tried to block states from enacting any kind of regulation." Chowdhury also warned that Trump's broader AI framework represents a "poison pill for states' rights" by "dictating congressional behavior and again targeting state-level regulation" while "expanding presidential authority further." The renewed push for evaluations is framed less around AI ethics and existential risk, which were strong focuses of the Biden administration, and more around immediate national security risks, including cyberwarfare, infrastructure security, and geopolitical competition. Techdirt's analysis framed the policy reversal as emblematic of a broader Trump II pattern: coming in declaring things "stupid and worth ripping apart, only to later realize how important and structurally necessary those things were, and then rush to recreate them in a much sloppier, worse version." Critics writing in the journal Science contend the Trump administration is engaged in "norm destruction—breaking expectations about transparent governance and public oversight" and has "advanced not the absence of AI regulation but its rearrangement, often by caprice: intensive state intervention operating through industrial policy, trade restrictions, immigration controls, equity stakes in private firms (selected by the state), the redirection of research funding."
Right-Leaning Perspective
National Economic Council Director Kevin Hassett expressed a desire for every AI lab to go through a safety review process before releasing new models, describing this as analogous to FDA drug approval procedures, while Daily Signal sources reported ongoing internal debates about how stringent the vetting should be. White House Chief of Staff Susie Wiles underscored that "President Trump is the most forward leaning president on innovation in American history," framing the policy as security-focused rather than innovation-hampering, and reiterated that "when it comes to AI and cyber security, President Trump and his administration are not in the business of picking winners and losers" and that the goal is to "ensure the best and safest tech is deployed rapidly to defeat any and all threats." Right-leaning commentators acknowledge the legitimacy of questions about whether executive oversight of AI models is necessary and whether it could hurt U.S.-China competition, but note that implementation details matter significantly. The administration continues to emphasize that it sees beating China in the AI race as an existential priority and maintains fundamental skepticism of regulation, positioning security testing as a narrow national security measure rather than a broad regulatory expansion.
Deep Dive
The Trump administration initially positioned itself as the opposite of the Biden White House on AI policy, criticizing what Trump's tech policy advisors saw as overly burdensome AI safety efforts and embracing the anti-regulation approach embodied by former AI czar David Sacks. On Day 1, Trump rescinded President Biden's AI executive order, which had asked developers to perform safety evaluations, and the administration spent its first year systematically dismantling the AI safety apparatus Biden had built.

Driven by concerns about Anthropic's Mythos model and its advanced ability to identify and exploit cybersecurity vulnerabilities, the administration is now considering oversight for advanced AI models. The limits of the deregulation model became clearer as that consideration progressed, exposing the central tension of Trump's AI policy: the same administration that welcomed Silicon Valley's direct hand in shaping policy also has to answer to lawmakers, agencies, and public pressure over safety, market concentration, and national security.

The administration still sees beating China in the AI race as an existential priority and views regulation with deep skepticism, framing the new security testing as a narrowly tailored response to specific national security threats rather than a return to broader regulatory oversight. The White House's consideration of a vetting process represents an attempt to have it both ways: encouraging rapid growth of the AI economy while building a "firewall" against its most dangerous applications, setting up a defining political battle as the 2026 midterm elections approach. Internal differences persist over how strong the vetting process should be, with some officials preferring a light regulatory touch and others wanting aggressive vetting.
Legal analysis suggests the Trump administration may lack clear authority to mandate frontier-model vetting: it remains unclear what legal basis would allow the president to compel labs to undergo a vetting process and share information with government agencies, though existing CAISI and CISA tools could support a voluntary alternative. The next crucial decision will be whether any executive order creates truly voluntary cooperation or establishes de facto mandatory requirements for companies seeking market access.