White House considers AI vetting regime requiring pre-deployment government approval
White House considers FDA-style pre-deployment government vetting regime for AI models, reversing Trump's deregulatory stance amid cybersecurity concerns over Anthropic's Mythos model.
Objective Facts
This week, the New York Times reported that the White House is considering vetting AI models before release, and Politico reported a day later that officials had floated an order creating a 'vetting regime' requiring AI companies to obtain government approval before releasing models. National Economic Council Director Kevin Hassett proposed an executive order 'to give a clear roadmap' under which future AIs that create vulnerabilities would go 'through a process so that they're released in the wild after they've been proven safe, just like an FDA drug.' On May 5, NIST announced that Google DeepMind, Microsoft, and xAI had agreed to share models for government testing, building on earlier partnerships with OpenAI and Anthropic. A White House official denied any policy reversal, stating the administration 'continues to balance advancing innovation and ensuring security.' Internal disagreements over the strength of the vetting regime persist, with some officials preferring light-touch regulation and others favoring aggressive vetting.
Left-Leaning Perspective
According to The Daily Signal, since news of the AI vetting proposal emerged, Democrats have shown more interest in negotiating on the Trump administration's National Framework on AI. RealClearPolitics published an op-ed calling the FDA-style vetting proposal 'a good idea that deserves serious consideration,' citing cybersecurity risks from AI-enabled attacks and deepfakes. Axios reported that a growing number of Democrats are putting AI regulation at the heart of their 2026 campaigns, making it a defining issue for the next Congress. Anthropic CEO Dario Amodei has deployed political resources aggressively, with the company donating $20 million to support candidates who favor AI guardrails; Anthropic is following OpenAI's path in deepening its political footprint. Specific left-leaning commentary directly evaluating this particular White House vetting proposal, however, is limited in available reporting. Left-leaning coverage frames the government vetting proposal as evidence that even Trump recognizes AI safety risks, using it as political leverage to encourage Democratic-led regulation. The coverage notably omits analysis of whether Democrats would support an FDA-style regime if Trump implements it, focusing instead on the political opportunity the proposal creates.
Right-Leaning Perspective
Libertarian Cato Institute analysts Jennifer Huddleston and Juan Londoño noted Thursday that a pre-launch approval requirement 'was criticized as heavy-handed and anticompetitive' when it appeared in Biden's executive order. The American Enterprise Institute called the proposal 'a stunning reversal' and 'a mistake,' arguing that 'a mandatory government AI vetting regime would likely do little to enhance security, while creating significant harm to innovation and competition in a sector that is currently carrying more than its share of stability and growth.' AEI emphasized that compliance costs would entrench incumbents: 'Established incumbents such as Anthropic or OpenAI can absorb those costs much more easily than startups, for whom licensing can be a barrier to entry.' AEI argues that bottom-up innovation processes are preferable and that America should not 'pivot to this model for AI' as Europe has done. Helen Toner of Georgetown's Center for Security and Emerging Technology called the vetting regime 'not a well-considered whole of government, really thoroughly endorsed position,' saying Hassett's proposal was driven by one individual's strong views rather than coordinated policy. Right-leaning criticism frames the vetting regime as Biden-style regulation in Trump clothing, predicting it will stifle startups and entrench incumbents. The right omits serious engagement with Mythos-specific security concerns, dismissing them as a pretext for broader regulatory power.
Deep Dive
The discussions represent a reversal for an administration that revoked Biden's AI safety executive order within hours of taking office in January 2025 and spent most of last year positioning itself as the industry's deregulatory champion. Anthropic's Mythos model, which can identify and exploit security vulnerabilities, appears to have been the catalyst for the policy shift: the model created national security concerns that overrode the administration's ideological preference for deregulation. Both the left and right present partial pictures. Conservatives correctly identify that pre-release vetting creates compliance burdens that incumbent firms can absorb more easily than startups, potentially reducing competition. However, they underestimate the genuine dual-use risk posed by models like Mythos: a system that can autonomously identify thousands of zero-day vulnerabilities represents a qualitatively different risk profile from previous AI systems, creating legitimate national security and financial sector concerns that transcend typical regulatory ideology. The left's support for vetting correctly identifies real security gaps but omits discussion of how licensing could entrench Anthropic and OpenAI against emerging competitors, which may explain why Anthropic itself has invested heavily in political advocacy for guardrails. The administration's internal divisions remain unresolved, with some officials preferring light-touch regulation while others want aggressive vetting, and Helen Toner's observation that the proposal lacks whole-of-government endorsement suggests the policy outcome remains genuinely uncertain. The questions to watch next are whether Trump himself endorses an executive order, whether its scope covers only frontier models or AI more broadly, and whether it sets a regulatory precedent that survives future administration changes.