White House Urges Congress to Block State AI Regulations

White House releases National Policy Framework calling for federal preemption of state AI laws, prioritizing innovation over regulation.

Objective Facts

The White House said on Friday that Congress should "preempt state AI laws" that it views as too burdensome, laying out a broad framework for how it wants Congress to address concerns about artificial intelligence without curbing growth or innovation in the sector. The legislative blueprint outlines a half-dozen guiding principles for lawmakers, focusing on protecting children, preventing electricity costs from surging, respecting intellectual property rights, preventing censorship and educating Americans on using the technology. The four-page document fulfills a directive from President Donald Trump's December executive order on state AI laws, which tasked White House science and technology adviser Michael Kratsios, along with Special Adviser for AI and Crypto David Sacks, with developing a national policy to preempt state laws. The framework calls for broad preemption of state laws on AI, a long-standing priority of the AI industry and the Trump administration. That priority has fallen short of legislative backing twice this Congress; it was removed from the GOP budget reconciliation bill last summer and never made it into the annual defense policy bill.

Left-Leaning Perspective

Despite growing alignment among Republicans, Democrats remain more skeptical of the framework and represent a critical bloc for any bipartisan legislative pathway. Members such as Reps. Yvette Clarke (D-N.Y.) and Don Beyer (D-Va.), along with Sen. Brian Schatz (D-Hawaii), have raised concerns regarding federal preemption, accountability and oversight, and Senate Commerce Committee Ranking Member Maria Cantwell (D-Wash.) continues to advocate for a more structured approach grounded in standards, testing and public infrastructure investment.

On March 20, 2026, Rep. Beyer, alongside Reps. Doris Matsui (D-Calif.), Ted Lieu (D-Calif.), Sara Jacobs (D-Calif.) and April McClain Delaney (D-Md.), introduced the Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards (GUARDRAILS) Act, which would repeal the Trump administration's executive order establishing a national AI policy framework and effectively block efforts to impose a moratorium on state-level AI regulation. Members of the House Democratic Commission on AI and the Innovation Economy have raised concerns that the framework emphasizes preemption without sufficiently pairing it with enforceable national safeguards; critiques have focused on the absence of robust guardrails related to safety, labor impacts and consumer protection.

Five days after President Trump released the AI framework, Sen. Bernie Sanders (I-Vt.) and Rep. Alexandria Ocasio-Cortez (D-N.Y.) introduced the AI Data Center Moratorium Act, which would impose a nationwide pause on the construction and expansion of AI data centers until Congress enacts comprehensive federal safeguards. Robert Weissman, co-president of Public Citizen, described the framework as "a national framework to protect Big Tech at the expense of everyday Americans" that "will be dead on arrival in Congress." Brad Carson, president of Americans for Responsible Innovation, warned that combining state preemption with opposition to open-ended industry liability amounts to "open season on the American public." Democratic opposition reflects skepticism that the framework will deliver real safeguards or accountability for AI harms without binding enforcement mechanisms.

Right-Leaning Perspective

House leadership immediately offered their support for the proposal. "Over the last few months, I have worked diligently with the White House, conservative leaders, child safety advocates, members of the creative community, and AI innovators to develop legislation that can garner bipartisan support and accomplish the President's goals," Blackburn said. Rep. Kat Cammack (R-Fla.) cited concerns about state laws stifling innovation, another major administration concern, as the biggest rationale for federal preemption. "The idea that we're not going to have any sort of framework or guardrails, that's just not realistic," Cammack said. "But there's also the real concern that I certainly have, that we're going to push innovators out of the space, and so you can't get to the point where it's just become impossible to do anything, and that's going to require some preemption."

Those priorities won praise from the AI industry, which has pushed Congress and the administration to move toward preemption. Daniel Castro, director of the Center for Data Innovation, a group whose supporters include several major tech firms, said in a statement that the framework avoids the "worst instincts in today's AI debate," including "alarmism" about unemployment and worries that AI training infringes on copyright. The right frames preemption as essential for enabling innovation and preventing a regulatory patchwork that would cripple startups and competitiveness against China.

More than 50 Republican state lawmakers sent a letter earlier this month urging the White House to stop its efforts to block state-level AI regulations, arguing that "state-led efforts are fully consistent with conservative principles" and with the administration's "stated goals of promoting human flourishing while accelerating innovation." However, this intra-Republican split reflects a distinct federalist position rather than wholesale rejection of the framework.

Deep Dive

The White House, as well as several prominent figures in AI, say navigating a patchwork of state regulations could slow down innovation and affect America's competitiveness in the global AI race with China, which they say will have implications for the economy and national security. State-by-state regulation by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups. However, Congress has repeatedly declined to enact comprehensive federal preemption of state AI laws, including rejecting such an approach in the One Big Beautiful Bill Act and the National Defense Authorization Act. This legislative history suggests preemption faces real political obstacles even within Republican ranks.

The framework reflects a genuine tension between two legitimate policy concerns. The right correctly identifies that complying with 50 different state regimes creates real compliance burdens for developers and that fragmented rules could slow innovation. Yet critics on the left identify a real gap: the framework provides few binding safeguards against AI harms before preemption takes effect. The framework's silence in some areas is notable. It makes no reference to the risks of bias in AI systems, nor does it seek to mitigate that harm through quality or testing requirements. It does not discuss civil rights, except for the prioritization of some free speech rights. And it makes no mention of the need to monitor the performance of AI models or their deployment after they are created. Additionally, last summer the Trump administration urged Congress to adopt a temporary federal "moratorium" preempting certain state AI laws, but Congress ultimately declined to pursue that approach.

Despite increased legislative activity, the path forward for federal AI regulation remains unclear due to competing proposals, jurisdictional complexity, and ongoing divisions, particularly over preemption, across and within both parties. The framework has energized both pro-preemption Republicans and anti-preemption Democrats to introduce competing bills, but the Senate's composition and the 2026 elections create uncertainty about what Congress will actually pass. The most likely outcome in the near term appears to be narrow bills on child safety or deepfakes rather than comprehensive preemption.


Mar 20, 2026 · Updated Apr 5, 2026

Left says: Democratic lawmakers have continued to raise concerns that the current framework from the Trump administration prioritizes preemption without establishing sufficiently robust federal safeguards. On March 20, 2026, House Democrats led by Rep. Beyer introduced the GUARDRAILS Act, which would repeal the executive order establishing a national AI policy framework and effectively block efforts to impose a moratorium on state-level AI regulation.
Right says: House leadership immediately offered support for the proposal, and the AI industry praised its priorities. Patrick Hedger, director of policy for industry group NetChoice, said in a statement that the framework shows that the White House knows "what is at stake and what it will take to win the future," going on to add that "a light-touch regulatory environment" is required for AI innovation.
✓ Common Ground
Several voices on both sides acknowledge the importance of protecting children from AI harms and preventing electricity costs from surging.
Disagreements over AI policy go well beyond Republican vs. Democrat, and they overlap with broader tech policy debates that Congress has never been able to resolve. Some Republicans at the state level and some Democrats in Congress share federalism-grounded concerns about federal overreach.
Many of the framework's recommendations find support in other laws, proposed bills, or longstanding bipartisan policies. The framework's broad calls for federal preemption are also softened by its articulated carveouts, including for laws of general applicability that protect children, prevent fraud, and protect consumers; most existing state AI laws address exactly those issues.
Both sides recognize the need for some federal action on AI, though they differ sharply on scope and whether preemption is the vehicle for it.

◈ Tone Comparison

The right employs optimistic language about "innovation," "breakthroughs," and "winning the AI race," treating regulation skeptically as a threat to growth. The left uses language emphasizing "accountability," "safeguards," and "harms," treating deregulation skeptically as capitulation to industry. Right-leaning coverage highlights economic competitiveness concerns; left-leaning coverage emphasizes consumer protection and worker impacts.

✕ Key Disagreements
Scope of Federal Preemption
Left: Democratic lawmakers believe the framework prioritizes preemption without establishing sufficiently robust federal safeguards.
Right: The framework urges Congress to adopt a federally unified, innovation-oriented regime centered on preemption of state AI laws and a "light-touch" regulatory approach.
Accountability and Liability for Developers
Left: Critics like Brendan Steinhauser, CEO of The Alliance for Secure AI, said the proposed regulations provide "no path to accountability" for harms caused by the technology.
Right: The White House proposed broad preemption of state AI laws and argued against "open-ended liability" for AI firms.
Role of States in AI Regulation
Left: "Until federal action ensures safe and responsible AI development, deployment, and use, states must retain the ability to implement policies to protect the American public," Beyer said.
Right: The framework says "states should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications."
Data Center Expansion and Energy Costs
Left: Sen. Bernie Sanders and Rep. Alexandria Ocasio-Cortez introduced the AI Data Center Moratorium Act, which would impose a nationwide pause on the construction and expansion of AI data centers until Congress enacts comprehensive federal safeguards.
Right: The Trump administration has pushed for the development of AI data centers, including by furthering a Biden-era policy of identifying federal land on which to build AI infrastructure. The framework attempts to walk this line, calling for AI infrastructure built to "strengthen American communities" through economic growth while protecting against "harmful impacts."