White House Urges States to Block AI Regulations
White House releases National Policy Framework calling for federal preemption of state AI laws, prioritizing innovation over regulation.
Objective Facts
The White House said on Friday that Congress should "preempt state AI laws" that it views as too burdensome, laying out a broad framework for how it wants Congress to address concerns about artificial intelligence without curbing growth or innovation in the sector. The legislative blueprint outlines a half-dozen guiding principles for lawmakers, focusing on protecting children, preventing electricity costs from surging, respecting intellectual property rights, preventing censorship and educating Americans on using the technology. The four-page document fulfilled a request from President Donald Trump's December executive order on state AI laws, which directed White House science and technology adviser Michael Kratsios, along with Special Adviser for AI and Crypto David Sacks, to develop a national policy to preempt state laws. The framework asks for broad preemption of state laws on AI, a long-standing priority of the AI industry and the Trump administration. That priority has fallen short of legislative backing twice this Congress: it was stripped from the GOP budget reconciliation bill last summer and was never included in the annual defense policy bill.
Left-Leaning Perspective
Despite growing alignment among Republicans, Democrats remain more skeptical of the framework and represent a critical bloc for any bipartisan legislative pathway. Members such as Reps. Yvette Clarke (D-N.Y.) and Don Beyer (D-Va.), along with Sen. Brian Schatz (D-Hawaii), have raised concerns regarding federal preemption, accountability and oversight, and Senate Commerce Committee Ranking Member Maria Cantwell (D-Wash.) continues to advocate for a more structured approach grounded in standards, testing and public infrastructure investment. On March 20, 2026, Rep. Beyer, alongside Reps. Doris Matsui (D-Calif.), Ted Lieu (D-Calif.), Sara Jacobs (D-Calif.) and April McClain Delaney (D-Md.), introduced the Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards (GUARDRAILS) Act, which would repeal the Trump administration's executive order establishing a national AI policy framework and effectively block efforts to impose a moratorium on state-level AI regulation.
Members of the House Democratic Commission on AI and the Innovation Economy have raised concerns that the framework emphasizes preemption without sufficiently pairing it with enforceable national safeguards. Critiques have focused on the absence of robust guardrails related to safety, labor impacts and consumer protection. Five days after President Trump released the AI framework, Sen. Bernie Sanders (I-Vt.) and Rep. Alexandria Ocasio-Cortez (D-N.Y.) introduced the AI Data Center Moratorium Act, which would impose a nationwide pause on the construction and expansion of AI data centers until Congress enacts comprehensive federal safeguards. Robert Weissman, co-president of Public Citizen, described it as "a national framework to protect Big Tech at the expense of everyday Americans" that "will be dead on arrival in Congress."
Brad Carson, president of Americans for Responsible Innovation, warned that combining state preemption with opposition to open-ended industry liability amounts to "open season on the American public." Democratic opposition reflects skepticism that the framework will deliver real safeguards or accountability for AI harms without binding enforcement mechanisms.
Right-Leaning Perspective
House leadership immediately offered their support for the proposal. "Over the last few months, I have worked diligently with the White House, conservative leaders, child safety advocates, members of the creative community, and AI innovators to develop legislation that can garner bipartisan support and accomplish the President's goals," said Sen. Marsha Blackburn, R-Tenn. Rep. Kat Cammack, R-Fla., cited concerns about state laws stifling innovation – another major administration concern – as the biggest rationale for federal preemption. "The idea that we're not going to have any sort of framework or guardrails, that's just not realistic," Cammack said. "But there's also the real concern that I certainly have, that we're going to push innovators out of the space, and so you can't get to the point where it's just become impossible to do anything, and that's going to require some preemption." Those priorities won praise from the AI industry, which has pushed Congress and the administration to move toward preemption. Daniel Castro, director of the Center for Data Innovation, a group whose supporters include several major tech firms, said in a statement that the framework avoids the "worst instincts in today's AI debate," including "alarmism" about unemployment and worries that AI training infringes on copyright. The right frames preemption as essential for enabling innovation and preventing a regulatory patchwork that would cripple startups and competitiveness against China. Not all Republicans agree: more than 50 Republican state lawmakers sent a letter earlier this month urging the White House to stop its efforts to block state-level AI regulations, arguing that "state-led efforts are fully consistent with conservative principles" and with the administration's "stated goals of promoting human flourishing while accelerating innovation." However, this intra-Republican split reflects a distinct federalist position rather than wholesale rejection of the framework.
Deep Dive
The White House, as well as several prominent figures in AI, say navigating a patchwork of state regulations could slow innovation and undermine America's competitiveness in the global AI race with China, with implications for the economy and national security. State-by-state regulation by definition creates 50 different regulatory regimes, making compliance more challenging, particularly for start-ups. However, Congress has repeatedly declined to enact comprehensive federal preemption of state AI laws, rejecting such an approach in both the One Big Beautiful Bill Act and the National Defense Authorization Act. This legislative history suggests preemption faces real political obstacles even within Republican ranks. The framework reflects a genuine tension between two legitimate policy concerns: the right correctly identifies that fragmented state rules impose real compliance burdens on developers and could slow innovation, while critics on the left identify a real gap, in that the framework provides few binding safeguards against AI harms before preemption takes effect. The framework's silence in some areas is notable. It makes no reference to the risks of bias in AI systems, nor does it seek to mitigate that harm through quality or testing requirements. It does not discuss civil rights, except for the prioritization of some free speech rights. And it makes no mention of the need to monitor the performance of AI models or their deployment after they are created. Additionally, last summer the Trump administration urged Congress to adopt a temporary federal "moratorium" preempting certain state AI laws, but Congress ultimately declined to pursue that approach. Despite increased legislative activity, the path forward for federal AI regulation remains unclear amid competing proposals, jurisdictional complexity, and ongoing divisions over preemption across and within both parties.
The framework has energized both pro-preemption Republicans and anti-preemption Democrats to introduce competing bills, but the Senate's composition and the 2026 elections create uncertainty about what Congress will actually pass. The most likely outcome in the near term appears to be narrow bills on child safety or deepfakes rather than comprehensive preemption.