New York enacts nation's strongest AI transparency law
Governor Hochul signed legislation requiring safety frameworks for frontier AI models, setting a nation-leading standard for AI transparency and safety.
Objective Facts
On December 19, 2025, Governor Kathy Hochul signed the Responsible AI Safety and Education (RAISE) Act, requiring large AI developers to create and publish information about their safety protocols and to report incidents to the State within 72 hours of determining that an incident occurred. The law also creates an oversight office within the Department of Financial Services that will assess large frontier developers and enable greater transparency. The final version of the act defines 'large developers' as persons with more than $500 million in revenue. Some AI safety advocates have criticized the amendments as considerably watering down the RAISE Act compared to the version the New York State legislature had initially passed. Hochul signed the RAISE Act eight days after President Trump issued an executive order on December 11, 2025, directing the Department of Justice to challenge state AI laws deemed to conflict with a 'minimally burdensome' national AI policy; on January 9, 2026, the Department of Justice announced the establishment of an AI Litigation Task Force.
Left-Leaning Perspective
Assemblymember Alex Bores declared victory, stating 'Today is a major victory in what will soon be a national fight to harness the best of AI's potential and protect Americans from the worst of its harms. New York now has the strongest AI transparency law in the country. This bill moves beyond California's SB53 in significant ways, and sets the stage for greater disclosure, learning, and legislative action in years to come.' State Senator Andrew Gounardes similarly praised the law as 'an enormous win for the safety of our communities, the growth of our economy and the future of our society,' declaring 'Big tech oligarchs think it's fine to put their profits ahead of our safety—we disagree.' Left-aligned sponsors presented the legislation as 'a focused, forward-looking bill that requires safety plans and incident reporting for the most powerful AI models' and noted it 'passed the legislature with overwhelming bipartisan support and is supported by 84% of New Yorkers.' However, progressive critics alleged that Governor Hochul 'completely rewrote' the bill during negotiations, with 'the entirety of the bill...crossed out, with replacement text added that is substantially similar to a separate AI safety law passed in California,' and that the bill 'was weakened significantly, with input from OpenAI and other Big Tech firms and lobbyists.' Left-leaning critics from The American Prospect characterized stakeholders as 'apoplectic' about the changes and argued that 'Hochul's gutting of the RAISE Act may make efforts like Trump's to preempt state AI rules obsolete,' noting that 'Big Tech is playing a federal-state game, lobbying for Trump administration support to create obstacles to state AI rules while fighting those rules in state capitols' and that 'in New York, these companies appear to have won one round.'
Right-Leaning Perspective
The bipartisan super PAC Leading the Future announced it would target Democratic congressional candidate Alex Bores over his championing of the RAISE Act, which 'would require large AI companies to publish safety and risk protocols and disclose serious safety incidents.' Leading the Future's coordinators characterized the bill as 'a clear example of the patchwork, uninformed, and bureaucratic state laws that would slow American progress and open the door for China to win the global race for AI leadership,' with the PAC representing the view that 'federal AI laws should preempt regulations implemented by specific states.' Right-leaning tech executives and lawmakers argued that 'a "patchwork" of state AI policies will hinder innovation and put the U.S. at risk of falling behind its adversaries like China,' though others, including Assemblymember Bores, countered that 'the federal government moves too slowly to keep up with the rapid pace of AI development.' Republican Senator Ted Cruz and House leadership praised Trump's executive order addressing the issue as 'a necessary interim step,' with the administration framing it 'as a response to what it views as an urgent crisis: a rapidly fracturing AI regulatory landscape driven by state action that risks undermining economic growth, job creation, national security, and U.S. competitiveness vis-à-vis China.' The Trump administration characterized the RAISE Act, alongside Colorado's AI law, as 'woke' blue state laws that pose a threat to technological innovation and national security due to the 'patchwork' of state policies, even though the RAISE Act focuses specifically on catastrophic risks from frontier models rather than broader bias and discrimination concerns.
Deep Dive
The RAISE Act represents the first major state AI legislation signed into law following President Trump's December 11, 2025, executive order directing federal agencies to challenge state AI laws. After the legislature passed the bill in June 2025, Governor Hochul did not immediately sign it, and the tech industry lobbied against the bill during this period. Hochul initially proposed a near-complete rewrite modeled on California's TFAIA, but legislators resisted the extent of the changes, and the two sides ultimately agreed on a version that used the California law as a base but preserved several provisions that went beyond it, including the 72-hour incident reporting timeline and the creation of a dedicated enforcement office.
What each perspective gets right: Supporters correctly identify that many major AI companies have voluntarily committed to create safety and security plans, but there is currently no legal requirement that such plans exist, that they be reasonable, or that they be followed in practice; writing these common-sense protections into law ensures no company is incentivized to cut corners or otherwise put short-term profits over safety. Critics correctly note that the RAISE Act and California's law point to an emerging national standard for developers training large and advanced AI models, and that even a single state law requiring public transparency has national significance: information made public in New York or California is also available in other states, and public knowledge about AI-related safety and harm may provide more of an evidentiary foundation for future regulation. However, supporters understate the law's modifications from the original legislative version, while critics may overstate the extent to which these modifications undermine the law's utility as a transparency mechanism.
What each perspective omits or downplays: Progressive critics focus heavily on Hochul's late-stage amendments without acknowledging that New York's alignment with California on AI safety may lift some perceived patchwork burdens off major AI developers; OpenAI and Anthropic expressed support for the RAISE Act and indicated that having similar legislation in two large state economies is good for the policy landscape overall. Right-leaning opponents downplay that only 12% of all AI-related bills introduced during the 2025 state legislative sessions became law, and that 81% of those enacted laws contained no mandates for private AI companies, suggesting the current state of regulation is not the dire 'patchwork' the administration claims.
What to watch: On January 9, 2026, the Department of Justice announced the establishment of an AI Litigation Task Force, and the RAISE Act's long-term viability is uncertain as New York asserts its right to protect New Yorkers from 'frontier' risks while the federal government attempts to preempt all state authority over AI regulation. On March 20, 2026, the Trump Administration released its 'National Policy Framework for Artificial Intelligence,' but prospects for near-term passage of comprehensive federal AI legislation face significant headwinds, including bipartisan opposition to state preemption.
