New York Governor Hochul signs RAISE Act requiring AI safety frameworks

Governor Hochul signed the RAISE Act into law December 19, 2025, requiring large AI developers to create and publish safety protocols and report incidents within 72 hours.

Objective Facts

Governor Kathy Hochul signed the Responsible AI Safety and Education Act (RAISE Act) into law on December 19, 2025. The law requires large AI developers to create and publish information about their safety protocols and report incidents to the State within 72 hours of determining that an incident occurred. It also creates an oversight office within the Department of Financial Services that will assess large frontier developers and enable greater transparency. The state legislature originally passed the RAISE Act in June, but Hochul used nearly all the time available to her before signing it. The final text approved by Hochul dials back developer requirements compared to the text passed by the New York State Legislature this summer, after pressure from the technology industry.

Left-Leaning Perspective

Progressive lawmakers and AI safety advocates celebrated the RAISE Act's signing but criticized Governor Hochul for scaling back the original legislation. State Senator Andrew Gounardes called the law "an enormous win for the safety of our communities, the growth of our economy and the future of our society," characterizing it as laying "groundwork for a world where AI innovation makes life better instead of putting it at risk." Assemblymember Alex Bores framed it as "a major victory in what will soon be a national fight to harness the best of AI's potential and protect Americans from the worst of its harms."

The American Prospect, however, criticized Hochul's approach harshly, reporting that she "completely rewrote a bill passed by the state legislature intended to regulate artificial intelligence models," substituting language favored by Big Tech interests. AI safety advocates characterized the amendments as "considerably watering down the RAISE Act compared to what the New York State legislature had initially passed."

Left-leaning outlets and advocates emphasized both the victory and its limitations. Bores stated they "defeated last-ditch attempts from AI oligarchs to wipe out this bill" and "defeated Trump's – and his donors – attempt to stop RAISE through executive action," framing the final law as a floor for AI safety regulation rather than a ceiling. City & State reported that Bores claimed they "moved it beyond SB 53 and proved that SB 53 is not the ceiling on AI safety," signaling continued commitment to stronger measures.

Left-leaning coverage largely omitted concerns that the final law relies primarily on transparency obligations rather than substantive safety mandates, and downplayed how far the signed version departed from the original legislative intent on specifics such as reporting timelines and penalty amounts.

Right-Leaning Perspective

Conservative and tech industry voices opposed the RAISE Act as an example of burdensome state-level regulation that threatens innovation and competitiveness. The tech industry super PAC Leading the Future, backed by AI leaders, accused Assemblymember Bores of pushing "ideological and politically motivated legislation" that would "slow American progress and open the door for China to win the global race for AI leadership." The Computer and Communications Industry Association opposed the bill, arguing that it holds developers liable for actions they cannot control and limits safe research, and recommending that the state focus on "clear, workable rules that build public trust" rather than measures that would "stifle innovation and place a significant compliance burden."

The Trump administration has pursued a clear pattern of seeking to limit state-level AI regulation, first through the proposed One Big Beautiful Bill Act, which included a 10-year moratorium on state AI regulations. Its position, articulated in a December 2025 executive order, emphasizes a unified federal approach: "State-by-State regulation by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups." The administration contends that state laws increasingly require entities to "embed ideological bias within models," citing Colorado's ban on algorithmic discrimination as an example that "may even force AI models to produce false results." The order declares that "My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones."

Right-leaning voices and tech advocates noted that while the signed RAISE Act is weaker than originally passed, it still creates compliance burdens and regulatory fragmentation that, in their view, merit federal preemption.

Deep Dive

The RAISE Act's signing reflects a fundamental tension in AI governance between state-level safety regulation and federal-level innovation incentives. The legislature passed the original bill in June 2025, but Governor Hochul delayed signing for six months, highlighting the delicate balance between safety and innovation that state legislatures are grappling with. After industry pressure collided with bill sponsors' desire for guardrails, the final law dials back developer requirements compared to the legislative version but tracks closely with California's approach. The signed version represents a compromise: it establishes transparency and reporting obligations stronger than California's (a 72-hour versus a 15-day reporting window) but removes the original bill's ban on unsafe model deployment and reduces penalties significantly.

What each perspective gets right and overlooks: Progressives correctly identify that federal AI regulation has stalled, that states face real pressure to act, and that some form of AI safety oversight is necessary. The American Prospect and bill sponsors accurately documented how industry lobbying weakened the bill. However, they underemphasize that transparency-based regulation is itself a legitimate approach to AI safety, and that the signed law still establishes mandatory incident reporting requirements stricter than California's. Conservatives and industry advocates correctly note that fragmented state regulations create compliance challenges, particularly for smaller companies, and that uniform standards would reduce business uncertainty. Yet they downplay that the signed law itself incorporates meaningful transparency obligations, and that federal preemption without federal regulation would leave a vacuum. The Trump administration's claims about "ideological bias" in state AI laws are debatable: Colorado's algorithmic discrimination law aims at preventing unlawful discrimination, not embedding "bias."

What comes next: The law takes effect January 1, 2027. The more pressing question is whether Trump's executive order and threatened AI Litigation Task Force will actually challenge the RAISE Act in court. The final executive order text is narrower than the leaked draft, expressly exempting state laws relating to child safety from preemption and including other carve-outs that temper its scope, suggesting some legal caution. The ultimate outcome depends on congressional action: a previous attempt at federal preemption through the One Big Beautiful Bill Act passed the House but was rejected by the Senate over bipartisan concerns about erosion of state authority, indicating this remains contested terrain.

Apr 14, 2026
Left says: Gounardes framed the law as "an enormous win for the safety of our communities" laying "groundwork for a world where AI innovation makes life better instead of putting it at risk", though progressive outlets criticized Hochul's watering down of the original bill.
Right says: The Trump administration insists the federal government must "ensure that there is a minimally burdensome national standard — not 50 discordant State ones," though the final executive order uses softer language about the economic inefficiencies of a regulatory patchwork rather than directly attacking state laws.
✓ Common Ground
OpenAI and Anthropic expressed support for the RAISE Act while also calling for federal legislation, indicating that some voices on both sides see value in strong state frameworks even as they advocate for a single federal standard.
Anthropic's head of external affairs, Sarah Heck, told the New York Times that the fact that "two of the largest states in the country have now enacted AI transparency legislation signals the critical importance of safety and should inspire Congress to build on them," suggesting consensus that federal action is ultimately necessary.
A Fox News opinion piece noted that both states are recognizing "that a fragmented, state-by-state patchwork isn't sustainable" and that their moves "create a clear path forward for federal action", indicating even some conservative voices acknowledge the patchwork problem.
Multiple sources note the delicate balance between safety and innovation that state legislatures are grappling with, with Hochul's office acknowledging both AI's benefits in "groundbreaking scientific advances" and "the potential for serious risks", showing shared concern about balancing competing interests.

◈ Tone Comparison

Left-leaning outlets used moral language framing regulation as protection against corporate greed, with Gounardes describing "Big tech oligarchs" putting profits over safety. Right-leaning and industry voices used economic language emphasizing compliance burdens, innovation stifling, and competitive disadvantage, treating regulation as economically damaging rather than a safety issue. Both sides employed absolute language about whose position represents the "right" approach.