Trump Administration Shifts to AI Model Vetting

The Trump administration announces agreements with Google, Microsoft, and xAI to evaluate AI models before public release and weighs an executive order creating a working group to establish formal AI model vetting procedures.

May 6, 2026 · Updated May 8, 2026

Objective Facts

The Center for AI Standards and Innovation announced agreements with Google DeepMind, Microsoft, and xAI on May 5, 2026, allowing the U.S. government to evaluate artificial intelligence models before they are publicly available. Beyond this announcement, the White House has been weighing the creation of a new AI working group that would explore oversight procedures, including plans to vet models before they are released to the public. The policy shift is driven by concerns about the national security implications of Anthropic's Mythos AI model and its ability to identify and exploit cybersecurity vulnerabilities. The Trump administration spent its first year systematically dismantling every meaningful AI safety effort the Biden administration had built, rescinding on Day 1 President Biden's AI executive order, which had asked developers to perform safety evaluations and report on models with potential military applications. Big Tech companies including OpenAI, Anthropic, and Google have been briefed on the plans and are voicing concerns, with executives worried that excessive regulation could undermine America's innovation race against China.

Left-Leaning Perspective

Progressive outlets and commentators, including writers at Progressive.org, have criticized the Trump administration's March 2026 AI policy framework for urging Congress to preempt state AI safety laws and for proposing to cut the NIST budget by more than 40 percent, calling it a dangerous regression from Biden-era oversight efforts. AI security experts such as Gary McGraw of the Berryville Institute of Machine Learning, cited in Fortune, argue that any regulatory guidance should systematically address 23 identified LLM security risks by opening the 'black box' of frontier models to scrutiny; McGraw expressed deep concern that 'the foxes might be asked to guard the chicken house even though they already designed and constructed it in secret.' Cornell professor Gregory Falco acknowledged in a Cornell Chronicle interview that Mythos and similar models raise real national security concerns, but warned that once government vetting processes are built, whoever holds power gets to shape how the vetting works, politicizing the process. Progressive critics argue that most Americans don't know their government is dismantling AI safety protections even as the people building AI warn of extinction-level risks. The left's framing emphasizes that the Trump administration's sudden embrace of vetting, while superficially positive, comes only after its aggressive dismantling of Biden-era safeguards and leaves fundamental vulnerabilities unaddressed.

Right-Leaning Perspective

The American Enterprise Institute framed the reported executive order as 'a stunning reversal' that would be 'a mistake,' arguing that a mandatory government AI vetting regime would create 'significant harm to innovation and competition in a sector that is currently carrying more than its share of stability and growth in the American economy.' White House Chief of Staff Susie Wiles, the administration's leading voice on the issue, stated on X that 'President Trump is the most forward leaning president on innovation in American history' and that the administration is 'not in the business of picking winners and losers' when it comes to AI and cybersecurity, framing the approach as empowering innovators rather than imposing bureaucratic constraints. Dean Ball, who served as a senior AI adviser in the Trump administration, told news outlets that executives worry excessive regulation could undermine America's innovation race against China, and said striking a regulatory balance amid rapid AI advances is 'not easy.' Saif Khan of the Institute for Progress told analysts that 'the pure, Silicon Valley venture-capital type of approach to AI policy just might be over in the Trump administration.' The right frames the shift as a necessary security measure narrowly focused on specific cyberweapon risks such as Mythos rather than broader AI regulation, emphasizing that national security concerns have forced recognition that some models pose unique dangers requiring government access without blocking commercial release.

✓ Common Ground
Both sides of the policy debate acknowledge that Mythos and similar models pose real national security concerns, and that recent demonstrations of AI-enabled cyberattack capability have made security risks concrete in ways that abstract debates never did.
Outlets on both sides report that top AI companies are cooperating with the White House's oversight effort, with the administration recognizing fast-growing model capabilities and the labs recognizing the need to partner with government to avoid more draconian steps; an agreement could come within weeks.
White House National Economic Council Director Kevin Hassett said he was 'highly confident' in National Cyber Director Sean Cairncross's work to coordinate the response, describing efforts to test Mythos 'left and right' before release to ensure it doesn't harm American businesses or the government.

Objective Deep Dive

The Trump administration's AI policy represents a sharp reversal from its initial posture: on Day 1 in 2025, Trump rescinded Biden's November 2023 AI executive order, which required developers of high-risk systems to share safety test results with the government, and his administration spent its first year systematically dismantling AI safety efforts. Vice President JD Vance famously told the AI Action Summit in Paris that the future would be won 'by building,' not 'by hand-wringing about safety.' This position reflected the administration's belief that regulation threatens U.S. competitiveness with China.

The catalyst for the reversal was Anthropic's Mythos model, an AI system with an unprecedented ability to identify and exploit cybersecurity vulnerabilities. Its emergence triggered a national security panic in Washington and coincided with the Pentagon's earlier designation of Anthropic as a supply-chain risk after the company refused to grant the Pentagon unrestricted use of its technology. AI has crossed a threshold that no administration, not even one ideologically committed to staying out of its way, can afford to ignore; the sea change has been accelerated by a new class of models that can hunt down cybersecurity flaws with extraordinary speed and precision. The White House push, which includes the West Wing and the National Security Council, could result in an agreement within weeks at most.

What each side gets right and what they miss: the right correctly identifies that frontier AI cybersecurity risks are novel, that they warrant some government access to assess capabilities, and that vetting need not mean blocking product releases. However, conservative commentators underestimate institutional capacity risks and downplay how vetting regimes historically become politicized. The left correctly points to the government's technical capacity limitations and to politicization dangers, but some progressive critics fail to acknowledge that models like Mythos genuinely present dual-use risks that pure self-regulation may not address. A key omission on both sides is the emerging question of whether government vetting creates perverse incentives by giving the Trump administration leverage over companies, like Anthropic, that it has already designated as security threats for political reasons.

◈ Tone Comparison

Left-leaning outlets use cautionary language emphasizing contradiction and institutional weakness ('dismantling,' 'foxes guarding the chicken house,' politicization risks), while right-leaning voices frame the shift as pragmatic, security-first policy, using terms like 'stunning reversal' while emphasizing a narrow focus on specific cyberweapon risks rather than broad regulation. Neither side fully celebrates the policy change.