Trump Administration Pushes AI Preemption Framework to States
Trump administration releases national AI policy framework pushing for broad federal preemption of state AI laws while proposing limited carve-outs for child safety and other areas.
Objective Facts
On March 20, 2026, the Trump administration released its National Policy Framework for Artificial Intelligence, outlining recommendations intended to establish a nationally uniform approach to AI regulation. The four-page Framework reflects a policy preference for a sector-specific, federally led regulatory model with significant preemption of state AI laws, setting up a renewed clash with states and Congress over the future of AI regulation. It calls on Congress to preempt state AI laws that "impose undue burdens," while preserving state authority in certain areas, including traditional police powers to protect children and prevent fraud, state zoning authority over AI infrastructure, and requirements governing a state's own use of AI in procurement and public services. In response, House Democrats introduced the Guaranteeing and Upholding Americans' Right to Decide Responsible AI Laws and Standards (GUARDRAILS) Act, which would repeal the Trump administration's executive order establishing a national AI policy framework and effectively block efforts to impose a moratorium on state-level AI regulation; Sen. Schatz is expected to introduce companion legislation in the Senate.
Left-Leaning Perspective
Progressive outlets and Democratic lawmakers condemned the Trump framework as capitulation to Big Tech. Common Dreams reported that critics said the framework gave Silicon Valley a massive gift by coming out in favor of barring state regulation of the technology, while Rep. Yvette Clarke (D-NY) described it as "written by Big Tech, for Big Tech." Matt Stoller, an antitrust researcher and author of the BIG newsletter, argued that the framework should be one of the first things a future Democratic president throws in the garbage after taking office. Robert Weissman, co-president of Public Citizen, stated that "Trump's AI framework is a hollow document with only one tough and meaningfully binding provision, delivering Big Tech's top policy priority: It aims to preempt all state laws and rules dealing with AI," noting that "Preemption would effectively mean no US regulation of AI at all" and that while states' actions to regulate AI are inadequate, they are at least "trying to meet the novel and enormous challenges of the moment." Democratic lawmakers focused their opposition on the absence of federal safeguards. Members of the House Democratic Commission on AI and the Innovation Economy raised concerns that the framework emphasizes preemption without sufficiently pairing it with enforceable national safeguards, with critiques focused on the absence of robust guardrails related to safety, labor impacts, and consumer protection. Rep. Don Beyer said, "Until federal action ensures safe and responsible AI development, deployment, and use, states must retain the ability to implement policies to protect the American public." Public Knowledge criticized as backwards the administration's position that regulation impedes innovation, warning that broad deregulatory preemption could further polarize an already-skeptical public.
Left-leaning coverage largely omitted or downplayed the administration's carve-outs for child safety and other traditional state powers, instead emphasizing the sweeping nature of preemption language and portraying it as a complete dismantling of state regulatory authority.
Right-Leaning Perspective
Conservative outlets and Republican lawmakers praised the framework as necessary for American competitiveness and innovation. Sen. Marsha Blackburn stated, "Instead of pushing AI amnesty, President Trump rightfully called on Congress to pass federal standards and protections to solve the patchwork of state laws that has hindered AI innovation," adding that Congress must "establish one federal rulebook for AI to protect children, creators, conservatives, and communities across the country and ensure America triumphs over foreign adversaries in the global race for AI dominance." The framework's priorities won praise from the AI industry: Patrick Hedger, director of policy for the industry group NetChoice, said the framework shows the White House knows "what is at stake and what it will take to win the future" and that a "light-touch regulatory environment" is required for AI innovation. White House Office of Science and Technology Policy Director Michael Kratsios reiterated the administration's stance as outlined in the National Policy Framework for AI, arguing that it allows states to regulate some aspects of the technology — like child safety and state government procurement — while ensuring a national standard. Republican arguments centered on the practical necessity of uniform standards: supporters of federal preemption argue that companies need a single rulebook rather than fifty conflicting ones. Rep. Kat Cammack, R-Fla., cited state laws stifling innovation as the biggest rationale for federal preemption, warning that "we're going to push innovators out of the space" if preemption is not implemented, and that "particularly startups — but the companies are not going to be able to have a framework for 50 different states and really survive."
Right-leaning coverage emphasizes the "patchwork" problem and national security/competitiveness angles while downplaying concerns about consumer protection or the loss of state authority.
Deep Dive
The Trump Administration's March 20, 2026 National Policy Framework for Artificial Intelligence represents the culmination of a sustained push begun in December 2025 to centralize AI regulation at the federal level. The Framework builds on the December 2025 Executive Order on AI, which directed the administration to develop legislative recommendations for Congress while deploying executive tools, including an AI Litigation Task Force, to identify and challenge state AI laws viewed as inconsistent with federal policy. The framework asks for broad preemption of state laws on AI, a long-standing priority of the AI industry and the Trump administration, though that priority has fallen short of legislative backing twice this Congress — it was removed from the GOP budget reconciliation bill last summer and never officially made it into the annual defense policy bill. The framework reflects a deliberate political calculation balancing competing interests. On one hand, it preserves explicit carve-outs for child safety, data center zoning, and state procurement—recognizing strong bipartisan and public support for these areas. Trump's executive order contained a specific carve-out preserving state child safety laws, a clear nod to the political reality that protecting children was a non-negotiable priority for key lawmakers. On the other hand, it seeks to eliminate state authority over AI development, bias mitigation requirements, and third-party liability—areas where the administration views state laws as ideologically driven or economically harmful. 
The new effort may also have to contend with opposition from Republican lawmakers at the state level. More than 50 Republican state lawmakers sent a letter to Trump saying they are "deeply concerned" by the administration's efforts to interfere with state AI legislation and warning that federal preemption strips states of their sovereignty and leaves them unable to respond to emerging local harms; the letter followed one the Trump administration sent in February to the Republican leader of the Utah Senate opposing a state bill requiring AI developers to publish public safety and child protection plans. What remains genuinely unresolved is whether the framework's proposed preemption can survive legal challenge and achieve congressional passage. The Executive Order itself acknowledges the absence of a federal regulatory framework, which complicates the administration's contention that state AI laws conflict with federal law, and Congress's twice rejecting preemption provisions could suggest that Congress does not intend to foreclose state regulation of AI, making the viability of the administration's legal theories uncertain. The prospects for near-term passage of comprehensive federal AI legislation face significant headwinds: a narrow window before the midterm elections, bipartisan opposition to state preemption, differing views between the House and Senate, and the sheer scope and complexity of such an endeavor. Multiple states have indicated willingness to defend their AI laws in court, and Democratic control on key House committees means that any legislation would require genuine compromise on the scope and substance of preemption — something the framework's current language does not clearly provide.