New York Governor Signs RAISE Act to Require AI Transparency and Safety Protocols

Governor Kathy Hochul signed legislation requiring safety frameworks for frontier AI models, setting a nation-leading standard for AI transparency and safety.

Objective Facts

Governor Kathy Hochul signed legislation on December 19, 2025 requiring safety frameworks for frontier AI models, with provisions requiring large AI developers to create and publish information about their safety protocols and to report incidents to the State within 72 hours of determining that an incident occurred. The law creates an oversight office within the Department of Financial Services that will assess large frontier developers and enable greater transparency, with annual reports. The Attorney General can bring civil actions against large frontier developers for failing to submit required reports or for making false statements, with penalties of up to $1 million for a first violation and up to $3 million for subsequent violations. The law takes effect January 1, 2027. The final text dials back developer requirements compared to the version passed by the New York State Legislature this summer, but tracks closely with California's Transparency in Frontier Artificial Intelligence Act after pressure from the technology industry collided with bill sponsors' desire for stronger guardrails on AI.

Left-Leaning Perspective

The American Prospect reported that Governor Hochul completely rewrote the bill passed by the state legislature, substituting it with language favored by Big Tech interests that have held fundraisers for her in recent weeks. The original bill would have put the onus on frontier model developers to create plans to make their models safer, proactively report "critical safety incidents," and ban the release of models deemed unsafe through testing. Venture capitalist Ron Conway and a Tech:NYC representative have opposed the bill for months, and Tech:NYC told The New York Times that the organization wanted the RAISE Act to more closely resemble the California AI safety law. Some AI safety advocates criticized the amendments as considerably watering down the RAISE Act compared to what the New York State legislature had initially passed. While the RAISE Act required any "safety incident" involving an AI model to be disclosed within 72 hours, Hochul's substitute contains a time limit of 15 days unless there's an imminent risk. Critics believe that big tech companies have won one round in New York, a result that could reverberate across the country and make it harder for states to combat the Trump-assisted effort to insulate AI from scrutiny. The new bill turns the safety and security "protocol" that model developers must write, publish on their website, and comply with into a "framework" that describes the developer's general approach; the RAISE Act requirement that models with an "unreasonable risk of critical harm" be prohibited from release is absent in Hochul's changes.

Right-Leaning Perspective

The super PAC Leading the Future is backed by high-profile names in tech, including OpenAI President Greg Brockman, Palantir co-founder Joe Lonsdale, venture firm Andreessen Horowitz and AI startup Perplexity. The PAC called the prior version of the bill a "clear example of the patchwork, uninformed, and bureaucratic state laws that would slow American progress and open the door for China to win the global race for AI leadership." The group largely shares the Trump administration's view that federal AI law should preempt state regulations, an effort mostly aimed at big blue states like California and New York. President Donald Trump publicly supported unified federal regulations for AI to avoid a state-by-state patchwork, arguing that "China will easily catch us in the AI race" otherwise, and stating that state overregulation "is threatening to undermine investment in AI and the U.S. economy." The debate around the RAISE Act comes as President Trump has signed an executive order meant to prevent states from regulating AI. The Department of Commerce is expected to consider the RAISE Act "burdensome" and in conflict with the executive order's stated goals, asserting that the New York and California laws represent first steps in the "patchwork" of state laws that could stifle AI innovation. The Computer and Communications Industry Association recommended the state focus on "clear, workable rules that build public trust and support research rather than measures that would push innovation elsewhere," warning that overly broad or complex laws "stifle innovation and place a significant compliance burden on both AI developers and users."

Deep Dive

The state legislature originally passed the RAISE Act in June, but Hochul used nearly all the time available to her before signing it. The signing delay highlights the delicate balance between safety and innovation that many state legislatures are grappling with as they consider AI proposals. The RAISE Act was signed soon after President Trump issued an executive order authorizing federal lawsuits against states that pass AI laws viewed as hindering innovation. This timing created pressure on both sides: Democratic lawmakers pushed for maximum safety guardrails before signing, while tech interests and the Trump administration lobbied to weaken requirements.

In the final text, the 72-hour reporting period remains, and developers are still required to publish safety plans, although no longer before releasing models. After Governor Hochul proposed completely rewriting the bill with exact wording from a weaker California law, legislators negotiated back in measures that went beyond the West Coast version. The outcome represents a compromise: bill sponsors preserved the 72-hour reporting requirement and gained a dedicated oversight office with rulemaking authority (broader than California's approach), but lost the prohibition on releasing unsafe models and agreed to a revenue-based applicability threshold rather than compute-cost metrics. Some AI safety advocates criticized these amendments as considerably watering down the RAISE Act compared to what the New York State Legislature had initially passed.

What remains unresolved is federal preemption. The RAISE Act may face federal opposition following a December 11, 2025 executive order seeking unified national AI regulation; the Department of Commerce is expected to consider the RAISE Act "burdensome" and in conflict with the executive order's goals, and whether the U.S. attorney general brings a lawsuit challenging the RAISE Act remains to be seen but appears likely. Although Governor Hochul signed the original version of the bill passed in June, lawmakers have agreed to approve her final changes after returning to session in Albany after the first of the new year. The law's ultimate durability depends on whether federal courts rule that states retain authority to regulate AI independently or defer to the Trump administration's assertion of federal preemption.


Apr 25, 2026

Left says: The American Prospect argues that Governor Hochul rewrote a bill passed by the state legislature, substituting it with language favored by Big Tech interests. Despite these concerns, bill sponsor Alex Bores claimed, "we moved it beyond SB 53 and proved that SB 53 is not the ceiling on AI safety, as some in industry were trying to claim it was, but merely a first step."
Right says: Tech-backed opponents representing the Trump administration argue that federal AI laws should preempt state regulations, an effort largely targeting big blue states like California and New York. Lawmakers and tech executives have argued that a "patchwork" of state AI policies will hinder innovation and put the U.S. at risk of falling behind adversaries like China.
Common Ground
Both OpenAI and Anthropic expressed support for the RAISE Act, with OpenAI Chief Global Affairs Officer Chris Lehane telling the New York Times that "the combination of the Empire State with the Golden State is a big step in the right direction," even while stating they continue to believe a single national safety standard remains the best approach.
A number of commentators across perspectives acknowledge that the federal government moves too slowly to keep up with the rapid pace of AI development.
Both bill sponsors and Governor Hochul acknowledged in their statements that AI is driving "groundbreaking scientific advances leading to life-changing medicines, unlocking new creative potential, and automating mundane tasks" while recognizing that "experts and practitioners in the field readily acknowledge the potential for serious risks."

Tone Comparison

Left-leaning outlets use language of defeat and capitulation, with The American Prospect's headline saying Hochul "Caves to Big Tech," and focus on what was lost from the original bill. Right-leaning and tech industry language emphasizes competitive risk and innovation burden, describing the bill as "patchwork, uninformed, and bureaucratic." Bill sponsors adopt a triumphalist tone, claiming they "defeated last-ditch attempts from AI oligarchs to wipe out this bill."