White House Floats AI Vetting Regime, Sparking Tech Industry Panic
Trump administration sharply split over plan to give U.S. spy agencies more sway in AI regulation.
Objective Facts
The New York Times reported that the White House is considering vetting AI models before release, and Politico reported that the White House floated an order creating a "vetting regime" that would require AI companies to obtain government approval before releasing models. National Economic Council Director Kevin Hassett said on Fox Business that the administration is studying a possible executive order to provide a clear roadmap for how future AIs that could create vulnerabilities would go through a review process, so that they are released into the wild only after being proven safe, much like an FDA-approved drug. The administration is reportedly considering this oversight for advanced AI models, driven by national security concerns about Anthropic's new "Mythos" AI model and its ability to identify and exploit cybersecurity vulnerabilities. NIST announced that three leading AI companies (Google DeepMind, Microsoft, and xAI) agreed to share their models for government testing ahead of release. David Sacks, former White House AI czar, remains active in discussions about how the administration should respond to AI advances, as the administration is sharply split over the regulatory plan.
Left-Leaning Perspective
TechPolicy.Press reported that the Trump administration is weighing an executive order to create a working group that would develop options for federal government access to new AI models before release, with oversight options under discussion including the NSA, the National Cyber Director, and the Director of National Intelligence. The outlet characterized this as a meaningful shift for an administration that spent its first year dismantling Biden-era AI safety frameworks. Its analysis argued that while Anthropic's decision to withhold Mythos over cybersecurity risks is regarded as responsible, relying on the discretion of executives is not a sustainable safety regime, and that replacing it with a federal review staffed by intelligence community representatives and directly influenced by industry tech giants would not improve matters. The publication noted that most AI research capacity sits inside companies selling AI products, with nearly 80 percent of global AI computing power privately owned and nearly 70 percent of new AI PhDs going directly into the private sector. The Daily Signal reported that the potential executive orders could help the Trump administration secure votes in Congress to pass its National Framework on AI, and that since news of possible orders requiring AI vetting, Democrats have shown more interest in negotiating on the framework. Still, left-leaning coverage generally emphasizes that the proposal may be insufficient without independent testing capacity and that Democrats remain skeptical of the administration's commitment to meaningful oversight.
Right-Leaning Perspective
Neil Chilson, head of AI policy at the Abundance Institute, and Adam Thierer, resident senior fellow at the R Street Institute, emphasized that "From day one, the Trump administration rejected the Biden-Harris approach to AI," declaring that "Adopting an FDA-style regulatory regime for AI would represent a shocking policy reversal by the Trump administration, and a major about-face on how America has approached software, online speech, and digital commerce." Juan Londoño and Jennifer Huddleston of the libertarian Cato Institute reported that the White House is considering an executive order establishing a working group to devise a system for the government to "approve" advanced models before launch, warning that such an approach would open the door to regulatory capture, restrictions on expression, and the weaponization of government power against politically disfavored companies. The American Enterprise Institute stated that a mandatory government AI vetting regime would likely do little to enhance security while significantly harming innovation and competition in a sector currently carrying more than its share of stability and growth in the American economy. White House Chief of Staff Susie Wiles wrote that when it comes to AI and cybersecurity, the administration is "not in the business of picking winners and losers," signaling opposition to an FDA-like approval regime, though the statement itself reflects internal uncertainty about the policy direction.
Deep Dive
The Trump administration has largely positioned itself in opposition to the Biden White House on AI, criticizing what Trump's tech policy advisors saw as overly burdensome AI safety efforts, with former "AI and crypto czar" David Sacks best embodying that ethos. Now the administration appears poised for a head-spinning policy pirouette, driven by concerns about the Mythos model's ability to identify and exploit cybersecurity vulnerabilities. Anthropic itself was labeled a national security threat by the administration after refusing to grant the Pentagon unrestricted use of its technology, a designation the company is now challenging in court.

This background explains why the current proposal has sparked such fierce disagreement: it contradicts core messaging from Trump's first 100 days while addressing a real security vulnerability that caught the administration off guard. The challenge is compounded by the fact that much of the government's evaluation effort depends on cooperation from the same companies building the models; in 2024, BIML identified 23 LLM security risks located inside the black box of frontier models managed by the vendors themselves.

Right-leaning critics correctly note the inconsistency with Trump's deregulatory brand, while left-leaning critics worry the proposal lacks the independent capacity to be effective. Both perspectives contain valid concerns: conservatives fear regulatory capture and weaponization, while progressives fear the testing will remain dependent on industry self-reporting. The Washington Post reported on May 11 that David Sacks remains active in discussions about how the administration should respond to AI advances, as the administration remains sharply split over the regulatory plan.
Watch for whether an executive order materializes and, if it does, whether it contains binding approval mechanisms or remains advisory; that distinction will determine whether the shift represents genuine policy change or political theater.