New York RAISE Act Sets Strongest AI Transparency Standard
New York Governor Kathy Hochul signed into law a chapter amendment finalizing the Responsible AI Safety and Education Act (RAISE Act), New York's new law regulating frontier AI models, establishing the strongest AI transparency standard in the country.
Objective Facts
On March 27, 2026, New York Governor Kathy Hochul signed into law a chapter amendment constituting the final version of the Responsible AI Safety and Education Act (RAISE Act), New York's new law regulating frontier AI models. Hochul originally signed the RAISE Act into law on December 19, 2025, after the state legislature passed the bill in June 2025. She then negotiated with lawmakers to secure a chapter amendment, which was introduced on January 6, 2026, passed the second chamber of the legislature on March 11, 2026, and was signed into law on March 27, 2026. The agreed-upon chapter amendments require large AI developers to create and publish information about their safety protocols and to report incidents to the state within 72 hours of determining that an incident occurred. The law also creates an oversight office within the Department of Financial Services (DFS) that will assess large frontier developers, enable greater transparency, and issue annual reports. While the chapter amendment brings the final version of the RAISE Act more in line with California's Transparency in Frontier Artificial Intelligence Act (TFAIA), including by aligning some key definitions, other provisions of the RAISE Act depart markedly from TFAIA, including the requirement to disclose incidents within 72 hours (TFAIA allows 15 days).
Left-Leaning Perspective
Assemblymember Alex Bores, who championed the RAISE Act alongside State Senator Andrew Gounardes, framed the finalized law as a major achievement for AI safety governance. Bores stated: "Today is a major victory in what will soon be a national fight to harness the best of AI's potential and protect Americans from the worst of its harms. New York now has the strongest AI transparency law in the country. This bill moves beyond California's SB53 in significant ways, and sets the stage for greater disclosure, learning, and legislative action in years to come". He added that "In New York, we defeated last-ditch attempts from AI oligarchs to wipe out this bill... And we defeated Trump's — and his donors' — attempt to stop RAISE through executive action greenlighting a Wild West for AI". Progressive AI safety advocates took a more measured view, treating the final law as an important but incomplete step. The Center for Democracy and Technology's Travis Hall said transparency is "a baseline for any form of oversight and accountability for the development and deployment of AI tools" and applauded California and New York, but cautioned that "they should be seen as the starting point, not the finish line for legislation". Other AI safety advocates, however, criticized the amendments as considerably watering down the RAISE Act from the version the New York State legislature passed in June 2025, reflecting frustration that industry lobbying had weakened the bill. Left-leaning coverage emphasized the political defiance of the Trump administration's anti-regulation executive order and celebrated the new protections against catastrophic AI harms, while acknowledging that the law had been diluted from stronger versions under tech industry pressure.
Right-Leaning Perspective
Tech industry and conservative-aligned figures strongly opposed the RAISE Act's transparency and reporting requirements as a threat to American AI competitiveness. The super PAC "Leading the Future" (LTF), led by Zac Moffatt and Josh Vlasto, accused Bores of pushing "ideological and politically motivated legislation" that would "handcuff" the U.S., stating the bill was "a clear example of the patchwork, uninformed, and bureaucratic state laws that would slow American progress and open the door for China to win the global race for AI leadership". Some lawmakers and tech executives argued that a "patchwork" of state AI policies would hinder innovation and put the U.S. at risk of falling behind adversaries, while others, including Bores, countered that the federal government moves too slowly to keep up with the rapid pace of AI development. The Trump administration explicitly challenged the law through federal action, releasing its National Policy Framework for Artificial Intelligence on March 20, 2026; the framework recommends a nationally uniform approach to AI regulation, including preemption of state AI laws across seven pillars. The Commerce Department's evaluation flagged laws in Colorado, California, and New York for particular scrutiny, and that evaluation feeds into the Department of Justice's AI Litigation Task Force, which Attorney General Pam Bondi established on January 9, 2026 to challenge state AI laws in federal court on grounds including unconstitutional burdens on interstate commerce. The task force is expected to begin filing federal legal challenges by summer 2026. Right-aligned commentary prioritized global competitiveness and innovation speed, portraying transparency mandates as burdensome regulations that would slow U.S. AI development relative to international competitors.
Deep Dive
The RAISE Act's finalization on March 27, 2026, caps a compressed cycle of AI policy development: a bill passed by the legislature in June 2025, initially signed by Governor Hochul in December 2025 with promised amendments, then finalized with those amendments on March 27, 2026. The process reflects the collision of three competing forces: state-level demand for AI transparency, industry opposition to perceived regulatory overreach, and federal efforts to preempt state authority. The original June 2025 legislative version included stricter penalties ($10-30 million) and required safety protocols to be published before model release; the final version reduces penalties to $1-3 million and decouples publication timing from release deadlines, concessions that addressed industry concerns about regulatory burden while preserving the core transparency framework and 72-hour incident reporting. Both sides gained and lost ground. Progressives and AI safety advocates secured codified 72-hour incident reporting and mandatory transparency frameworks, stricter than California's 15-day window, plus a dedicated state oversight office within DFS. However, they lost the mandatory pre-deployment publication requirement and the higher penalty structure, reducing the law's teeth. Industry secured more workable timelines and lower penalties, accepting transparency obligations it had initially opposed while reframing them as promoting market efficiency and a level playing field. Notably, both OpenAI and Anthropic publicly supported the final RAISE Act alongside California's similar law, signaling that a bicoastal two-state standard is preferable, in their view, to a chaotic patchwork of 50 different regimes. The immediate legal question is whether the Trump administration's DOJ will sue to preempt the RAISE Act under the Dormant Commerce Clause or on other constitutional grounds. The Commerce Department's March 2026 evaluation flagged New York specifically for scrutiny, setting the stage for federal litigation likely to span two to three years. This creates operational uncertainty for frontier AI companies: they must comply with New York law as written while litigation is pending, even as they prepare arguments against its constitutionality. The final RAISE Act language includes compliance pathways (allowing developers to satisfy federal standards in lieu of state ones where DFS so designates), which may partially insulate it from preemption claims by demonstrating flexibility. What neither side fully confronts is the underlying tension between rapid growth in technological capability and the slower pace of regulatory science and policy adjustment.