AI Frontier Model Regulation in New York

Governor Hochul signed into law a chapter amendment finalizing the Responsible AI Safety and Education Act (RAISE Act), New York's law regulating frontier AI models.

Objective Facts

On March 27, 2026, Governor Hochul signed a chapter amendment finalizing the Responsible AI Safety and Education Act (RAISE Act), New York's law regulating frontier AI models. Hochul had originally signed the RAISE Act in December 2025 with a memorandum flagging concerns about broad compliance obligations, then reached an agreement with the legislature on clarifying amendments, which she signed on March 27, 2026; the law takes effect on January 1, 2027. The amended Act requires large AI developers to create and publish information about their safety protocols and to report incidents to the State within 72 hours, with penalties of up to $1 million for a first violation and up to $3 million for each subsequent violation. The amended RAISE Act could also run afoul of federal efforts to limit state regulation of AI: President Trump's Executive Order 'Ensuring a National Policy Framework for Artificial Intelligence' directs federal agencies to challenge state AI laws deemed to impede a 'minimally burdensome national standard' for AI regulation, and the New York and California laws could be prime targets for such a challenge.

Left-Leaning Perspective

State Senator Andrew Gounardes, the RAISE Act's Senate sponsor, said during amendment negotiations that he felt pressure from the venture capital community, with Andreessen Horowitz and industry groups taking a tag-team approach and parachuting into New York to finance online ads disparaging the regulation. Assemblymember Alex Bores, the Assembly co-sponsor, framed the amendments as a victory, saying the deal moved the law beyond California's SB 53 and proved that SB 53 is not the ceiling on AI safety, as some in industry had claimed. The Center for Democracy and Technology, a civil society organization, applauded California and New York for passing bills that take initial steps toward mitigating some of the harms AI systems can cause, while noting the laws should be seen as a starting point, not the finish line, for legislation.

Right-Leaning Perspective

The Chamber of Progress, an industry group, stated that the amended version significantly scales back the regulatory burden on upstart competitors while strengthening consumer protections, and characterized Governor Hochul as striking the right balance. OpenAI Chief Global Affairs Officer Chris Lehane expressed support for the law, stating that while they continue to believe a single national safety standard for frontier AI models established by federal legislation remains the best way to protect people and support innovation, the combination of New York and California is a big step in the right direction. Some lawmakers and tech executives have argued that a patchwork of state AI policies will hinder innovation and put the U.S. at risk of falling behind adversaries like China, suggesting federal preemption would be preferable.

Deep Dive

New York's RAISE Act is the second major state AI safety law enacted in the U.S., following California's SB 53. AI safety advocates such as Anthropic have argued that governments must regulate frontier models by April 2026 at the latest, and that while they would prefer federal action, the federal legislative process will not move fast enough to address the risks they are concerned about, making state regulation necessary. The law targets frontier AI developer transparency and safety, focusing on models trained with extraordinary computational power that could pose catastrophic risks such as bioweapon creation or large-scale cyberattacks.

The March 27, 2026 amendments marked a critical turning point: the version the legislature originally passed was substantially rewritten. Hochul's version removed the requirement that companies not release unsafe AI models and changed the applicability threshold from a computational training cost of $100 million to annual revenue of $500 million, which excludes companies with lower revenue that still develop highly compute-intensive AI. Industry-friendly provisions transplanted into New York's law matched California's text by about 95 percent, and a subtle wording change narrowed the reporting requirement from 'safety incidents' to 'critical safety incidents,' changing when reporting is triggered. However, after Hochul proposed rewriting the bill wholesale with exact wording from the weaker California law, legislators negotiated back in measures that go beyond the West Coast version.

The regulatory landscape shifted dramatically in late March. Within a single week, two regulatory forces collided: on March 20, the Trump Administration published its National Policy Framework calling for preemption of state AI laws, and on March 27 Hochul signed the RAISE Act amendments, asserting precisely the kind of state governance the framework wants Congress to preclude. The Commerce Department's evaluation of problematic state laws will not itself invalidate them; that requires litigation, and any meaningful relief depends on the DOJ filing suit and a court granting an injunction, a process that could take months or years. State laws that have already taken effect remain fully enforceable absent court action. What happens next depends on federal litigation that could take years to resolve, during which time New York's law takes effect on January 1, 2027.



Apr 27, 2026

Left says: Progressive sponsors claim the bill moves beyond California's SB 53 in significant ways and sets the stage for greater disclosure and legislative action, while civil society advocates see it as an important but preliminary step.
Right says: Tech industry members have lobbied for New York to adopt the California law rather than the RAISE Act, which would create a national standard and make it harder to enact stronger measures.
Common Ground
New York's alignment with California on AI safety may lift some perceived patchwork burdens off major AI developers, and both OpenAI and Anthropic expressed support for the RAISE Act, with both indicating that having similar legislation in two large state economies is good for the policy landscape.
Governor Hochul and lawmakers succeeded in creating a regulatory framework using California law as the base with selected changes to strengthen it, suggesting some agreement that California's model provides an acceptable starting point.
Multiple observers across the spectrum acknowledged the law covers only companies with more than $500 million in revenue and takes effect January 1, 2027, suggesting acceptance of narrow scope and delayed implementation.

Tone Comparison

Progressive sponsors like Bores used combative language, stating that New York defeated last-ditch attempts from AI oligarchs to wipe out the bill and defeated Trump's attempt to stop RAISE through executive action. Trump used emphatic capitalization and crisis language, warning that overregulation by the states threatens the economy and that China will catch the U.S. if separate regulatory regimes are not prevented.