Trump Administration Reshaping AI Security Policy Ahead of Xi Summit

Trump administration abandons deregulatory AI stance, pivots toward pre-deployment security vetting ahead of Xi summit amid Mythos cybersecurity concerns.

Objective Facts

The Trump Administration, which had positioned itself as the opposite of the Biden White House by criticizing AI safety efforts and embracing deregulation, is now reportedly reversing course and considering oversight of advanced AI models, driven by national security concerns about Anthropic's Mythos model and its ability to identify and exploit cybersecurity vulnerabilities. Kevin Hassett, director of the National Economic Council, said the administration is studying a possible executive order that would create "a clear road map" for how advanced AI systems should be evaluated before release, comparing the process to FDA drug approval, in which systems are "released to the wild after they've been proven safe." The Commerce Department's Center for AI Standards and Innovation (CAISI) announced new agreements with Google DeepMind, Microsoft and xAI to conduct "pre-deployment" evaluations of frontier AI models, building on earlier agreements with Anthropic and OpenAI; CAISI has already conducted 40 evaluations, including some of models not yet released. Trump and Xi are slated to meet in Beijing on May 14-15, 2026, where the two leaders are expected to discuss AI for the first time amid mounting alarm over cyber risks posed by frontier models like Mythos; a senior U.S. official said they will explore whether to open formal lines of communication on AI safety and security risks. Stanford researchers concluded in this year's annual AI report that "The U.S.-China AI model performance gap has effectively closed," a significant shift that shapes the strategic context for these discussions.

Left-Leaning Perspective

Representatives Beyer, Matsui, Lieu, Jacobs and Delaney introduced the Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards (GUARDRAILS) Act in March 2026, which would repeal Trump's AI National Policy Framework executive order and block efforts to impose a moratorium on state-level AI regulation. According to Axios, a growing number of Democrats are putting AI regulation at the heart of their 2026 campaigns, making it a defining issue for the next Congress and seeing an opening created by the Trump administration's hands-off approach. The Center for American Progress characterized Trump's AI National Policy Framework executive order as "a legally unjustifiable incursion into the rights of states," arguing the administration is wrongfully directing the federal government to act against states with "onerous" AI laws by challenging them in court and withholding federal funding, primarily through the BEAD program. Meanwhile, state legislators from both parties want to retain their ability to craft their own AI legislation: 280 state lawmakers across the country signed a letter opposing legislation that would curtail state AI laws. A Science magazine analysis noted that federal preemption by executive action, rather than through congressional action, "replaces diffuse but locally accountable policy with concentrated but unaccountable governance," and argued the approach represents "an aggressive assertion of federal authority that forecloses democratic experimentation at the state level." Progressive critics emphasize that state-level AI regulation addresses concerns Biden's approach prioritized, such as consumer protection, worker displacement, and algorithmic discrimination, which the Trump administration downplays.

Right-Leaning Perspective

Fortune reported that while former Trump "AI and crypto czar" David Sacks embodied the administration's initial anti-regulation stance, the Trump Administration is now "about to engage in a head-spinning policy pirouette." Neil Chilson, head of AI policy at the Abundance Institute, and Adam Thierer, resident senior fellow at the R Street Institute, responded to Hassett's FDA-like approval concept by writing that "From day one, the Trump administration rejected the Biden-Harris approach to AI," implying the recent shift contradicts established administration principles. Hassett addressed concerns about mandatory vetting on CNBC, stating that "The White House — nobody has an idea that we should do something like bring in a giant new bureaucracy to approve AIs," seeking to reassure conservatives that the oversight approach remains limited. White House Chief of Staff Susie Wiles attempted to quell industry fears, writing on X that the administration is "not in the business of picking winners and losers" and has "one goal; ensure the best and safest tech is deployed rapidly to defeat any and all threats," framing security concerns rather than ideological regulation as the driver. Vice President JD Vance's February 2025 statement at an AI gathering in Paris, that "excessive regulation of the AI sector could kill a transformative industry just as it's taking off" and "The AI future is not going to be won by hand-wringing about safety," illustrates the tension between the administration's original deregulatory stance and its current security-focused pivot. The Register noted that "the Trump yes-men are framing this shift as a response to escalating cybersecurity and national-security risks rather than as a broader embrace of EU-style AI regulation," showing conservatives are characterizing the policy change as narrowly security-focused rather than a fundamental regulatory shift.

Deep Dive

The Trump administration's shift toward AI security vetting represents a collision between its foundational deregulatory philosophy and concrete national security concerns that transcend ideology. When Trump took office in January 2025, his first act was rescinding Biden's AI Executive Order 14110, positioning deregulation as the path to American AI dominance. David Sacks served as the administration's AI czar embodying this ethos, while Vice President JD Vance stated in February 2025 that "excessive regulation of the AI sector could kill a transformative industry" and "The AI future is not going to be won by hand-wringing about safety." Yet within 15 months, the administration is "now reportedly considering oversight for advanced AI models" driven by "concerns about the national security implications of Anthropic's new Mythos AI model, with its ability to identify and exploit cyber security vulnerabilities."

The policy reversal reflects competing interests within the Trump administration. National security officials want more sway in AI regulation as cybersecurity threats from advanced models become concrete. Even the Trump-aligned America First Policy Institute called CAISI "chronically underfunded," acknowledging that current evaluation capacity is insufficient for the scale of models emerging. The White House also does not want to bear the political repercussions if a devastating AI-enabled cyberattack were to occur, creating pressure for visible security measures. However, the administration is careful to frame this not as regulation but as necessary security vetting. Hassett emphasized the approach doesn't envision "a giant new bureaucracy to approve AIs," and Susie Wiles stated the administration is "not in the business of picking winners and losers." This positioning attempts to reconcile security imperatives with the administration's pro-innovation identity.

What remains unresolved is whether this represents genuine policy evolution or tactical maneuvering ahead of the Xi summit. Trump and Xi are expected to discuss AI for the first time, with a senior U.S. official stating they will explore opening formal lines of communication on AI safety and security risks, suggesting the administration wants a unified front and clear security frameworks for negotiation. The domestic consensus is narrower: conservative AI policy experts question whether the shift aligns with the administration's original rejection of Biden's cautious approach, while Democrats see it as inadequate without stronger consumer protections and preservation of state authority. With Stanford researchers concluding the U.S.-China AI performance gap has "effectively closed," the administration faces acute pressure to maintain dominance while managing risks, a tension that may prove difficult to resolve through voluntary industry cooperation alone.

Regional Perspective

Trump and Xi are expected to discuss AI at their summit this week, against the backdrop of a striking difference in how eager the Chinese and American publics are for AI adoption. Stanford researchers concluded in their annual AI report that "The U.S.-China AI model performance gap has effectively closed," a significant shift from previous years, when the U.S. maintained clear technological superiority. This technological convergence fundamentally changes the negotiating dynamic heading into the summit. The Chinese government's actual willingness to make robust AI safety commitments is low; Beijing views AI safety dialogues primarily as an opportunity to expand China's access to U.S. technology and close the AI gap, a dynamic on full display in 2024, when the U.S. sent technical experts to outline shared risks while China sent diplomats to complain about export controls on AI chips. CFR expert James M. Lindsay argues that "Beijing will not negotiate in good faith on AI safety" and that "A narrowly scoped dialogue paired with maximum pressure on export controls is the only way to shift Beijing's calculus and secure long-term AI safety." This structural asymmetry, in which China seeks technology access while the U.S. seeks safety commitments, shapes expectations for the upcoming talks. The Diplomat suggests that "a future Trump-Xi meeting could propel a joint commitment to publish and periodically update national safety frameworks that cover a few shared elements – pre-deployment testing, incident response, and basic transparency about high-risk uses," indicating some possibility for limited cooperation. However, while there is "objectively an imperative for collaboration on guardrails between the two countries," the Trump administration's new security vetting approach may be aimed as much at establishing domestic U.S. AI control before negotiating with Beijing as at reaching genuine bilateral agreements.
The administration's pivot toward pre-deployment evaluation gives it concrete leverage in framing itself as the responsible AI actor ahead of the summit.

May 11, 2026 · Updated May 12, 2026

Left says: Democratic critics argue the Trump administration's AI policies constitute a legally unjustifiable federal overreach that attacks state regulatory authority through litigation and funding restrictions. Democrats are positioning AI regulation as a defining campaign issue for 2026, seeing an opportunity to contrast with the Trump administration's hands-off approach.
Right says: The Trump administration maintains it doesn't envision a "giant new bureaucracy" for AI oversight, with Hassett emphasizing the approach isn't about establishing new approval agencies. Conservative commentators argue the administration has fundamentally rejected Biden's cautious AI approach from day one.
Region says: As Stanford researchers note the U.S.-China AI performance gap has effectively closed, the Trump-Xi summit becomes critical for determining whether the two powers will cooperate on AI safety or escalate competition. Beijing views AI safety dialogues as opportunities to access U.S. technology, having used previous 2024 talks to complain about export controls rather than engage substantively on safety issues.
✓ Common Ground
Both governments are considering formal AI discussions as part of the summit, an indication that AI competition has emerged as a diplomatic priority alongside trade and security concerns and a sign of bilateral agreement that AI safety dialogue is necessary.
A 2025 Pew Research Center poll found that 50% of Republicans and 51% of Democrats are more concerned than excited about AI's development and growing use, indicating rare cross-party consensus on AI risk concerns.
Chatham House experts note that "both powers have an interest in opening the Strait of Hormuz and making progress on AI safety," suggesting shared recognition among analysts that AI safety discussions serve both nations' interests.
Several commentators across ideological lines acknowledge that state lawmakers continue to prefile AI-related legislation despite federal executive orders, with both Democratic and Republican state legislators seeking to retain regulatory authority, indicating shared concern about federal overreach regardless of partisan alignment.
Experts note "there is objectively an imperative for collaboration on guardrails between the two countries" on AI, with one of the few Biden-Xi agreements being a November 2023 accord to keep AI out of nuclear weapons systems, showing recognition that U.S.-China AI coordination on specific safety issues is mutually beneficial.

◈ Tone Comparison

Fortune characterized the policy shift as a "head-spinning policy pirouette," signaling dramatic reversal. The Register noted that Trump's team is "framing this shift as a response to escalating cybersecurity and national-security risks rather than as a broader embrace of EU-style AI regulation," reflecting how the right contextualizes the change as narrow and security-focused while critics view it as broader ideological retreat.