New York RAISE Act Sets Strongest AI Transparency Standard

New York finalized the RAISE Act on March 27, 2026, creating the nation's strongest AI transparency standard, which requires frontier developers to disclose safety protocols and report safety incidents within 72 hours.

Objective Facts

On March 27, 2026, New York Governor Kathy Hochul signed the final chapter amendment to the RAISE Act, bringing it into closer alignment with California's Transparency in Frontier Artificial Intelligence Act while requiring frontier developers to publish transparency reports before deploying new frontier models. The final version of the RAISE Act nonetheless departs markedly from California's law on incident reporting: New York requires disclosure of incidents within 72 hours, while California provides a 15-day reporting window. The amendment replaces the prior compute-based definition of a "large frontier developer" with a revenue-based threshold identical to the one in California's law (developers that exceeded $500 million in annual gross revenue in the preceding calendar year). The New York Attorney General will enforce the law and may impose civil penalties of up to $1 million for a first violation and up to $3 million for subsequent violations, including when a large frontier developer fails to file a required document, makes false statements, fails to report an incident, or fails to comply with its own frontier AI framework.
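
Because the paragraph above turns on a handful of quantitative rules (the $500 million revenue test, the 72-hour versus 15-day reporting windows, and the $1 million/$3 million penalty caps), a minimal sketch may help make them concrete. The Python below is purely illustrative: every name and function is hypothetical, and it encodes only the figures reported in this article, not the statute's actual legal definitions or any official compliance tooling.

```python
from datetime import datetime, timedelta

# Illustrative sketch only. All names and logic are hypothetical: they encode
# the figures reported in this article, not the statute's legal definitions.

REVENUE_THRESHOLD_USD = 500_000_000        # "large frontier developer" revenue test
NY_REPORTING_WINDOW = timedelta(hours=72)  # RAISE Act incident-disclosure window
CA_REPORTING_WINDOW = timedelta(days=15)   # California's reporting window

FIRST_VIOLATION_CAP_USD = 1_000_000       # civil penalty cap, first violation
SUBSEQUENT_VIOLATION_CAP_USD = 3_000_000  # cap for each subsequent violation


def is_large_frontier_developer(prior_year_gross_revenue_usd: float) -> bool:
    """Revenue-based applicability test (replacing the earlier compute-based one)."""
    return prior_year_gross_revenue_usd > REVENUE_THRESHOLD_USD


def reporting_deadlines(incident_time: datetime) -> dict[str, datetime]:
    """Latest disclosure times for one incident under the NY and CA windows."""
    return {
        "new_york_raise_act": incident_time + NY_REPORTING_WINDOW,
        "california_tfaia": incident_time + CA_REPORTING_WINDOW,
    }


def max_civil_penalty_usd(prior_violations: int) -> int:
    """Penalty cap the NY Attorney General may seek for a given violation."""
    return FIRST_VIOLATION_CAP_USD if prior_violations == 0 else SUBSEQUENT_VIOLATION_CAP_USD


if __name__ == "__main__":
    incident = datetime(2026, 4, 1, 9, 0)
    print(is_large_frontier_developer(750_000_000))   # True: above the $500M test
    print(reporting_deadlines(incident))              # NY deadline is ~12 days earlier
    print(max_civil_penalty_usd(prior_violations=0))  # 1000000
    print(max_civil_penalty_usd(prior_violations=2))  # 3000000
```

Note how the same incident yields disclosure deadlines nearly twelve days apart under the two regimes; that gap is the "marked departure" from California's law described above.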

Left-Leaning Perspective

Democratic lawmakers and progressive AI safety advocates frame the RAISE Act as a victory for public protection. State Senator Andrew Gounardes, the bill's sponsor in the upper chamber, called it "an enormous win for the safety of our communities" and argued that "Big tech oligarchs think it's fine to put their profits ahead of our safety — we disagree," emphasizing that "tech innovation and safety don't have to be at odds". Governor Hochul positioned the law as "building on California's recently adopted framework, creating a unified benchmark among the country's leading tech states as the federal government lags behind, failing to implement common-sense regulations that protect the public". Progressives argue that transparency is the essential foundation for future AI oversight. For AI safety advocates, the measure represents "the floor for AI regulation," with supporters noting that "Transparency is a baseline for any form of oversight and accountability for the development and deployment of AI tools". The 72-hour incident reporting requirement is highlighted as stricter than California's 15-day window, signaling New York's commitment to rapid public accountability. However, progressive voices are divided on whether the law went far enough. Some AI safety advocates have criticized the amendments as considerably watering down the RAISE Act compared to the version the New York State legislature initially passed: the original carried higher penalties ($10–30 million versus the final $1–3 million) and would have required safety protocols before model deployment, rather than allowing publication afterward.

Right-Leaning Perspective

Industry actors treat the RAISE Act as a pragmatic interim step toward uniform rules, while the Trump administration and conservative voices push for federal preemption. OpenAI and Anthropic expressed support for the RAISE Act, with OpenAI's Chief Global Affairs Officer Chris Lehane stating, "While we continue to believe a single national safety standard for frontier A.I. models established by federal legislation remains the best way to protect people and support innovation, the combination of the Empire State with the Golden State is a big step in the right direction". This framing positions state laws as an interim measure until federal action establishes uniform rules. The Trump administration and conservative voices oppose the regulatory patchwork created by state-level mandates: President Trump's executive order warned that "excessive State regulation thwarts" innovation and that "State-by-State regulation by definition creates a patchwork of 50 different regulatory regimes that makes compliance more challenging, particularly for start-ups," calling for "a minimally burdensome national standard". The administration asserted that it is "the policy of the United States to sustain and enhance the United States' global AI dominance through a minimally burdensome national policy framework for AI". Right-leaning critics point to the industry's successful push to moderate the original bill as evidence that heavy-handed regulation can be resisted: the final text dials back developer requirements compared with the text passed by the legislature, after pressure from the technology industry collided with bill sponsors' desire for stronger guardrails on AI. The shift from a compute-based to a revenue-based threshold and the reduction in penalties both reflect that industry influence.

Deep Dive

The Governor negotiated a chapter amendment with lawmakers; it was introduced on January 6, 2026, passed the second chamber of the legislature on March 11, 2026, and was signed into law on March 27, 2026. The final law establishes a range of detailed obligations for developers of frontier models and grants rulemaking authority to a new office within the New York Department of Financial Services. The measure's path from passage to final form reveals a fundamental tension in AI governance: the need for public accountability versus competitive pressures from the tech industry.

The original legislature-passed text from June 2025 contained more stringent requirements, but the final text approved by Hochul dials back developer requirements after pressure from the technology industry collided with bill sponsors' desire for stronger guardrails on AI. When Governor Hochul signed the initial version in December 2025, she issued an accompanying memorandum stating that "the bill, as drafted, would impose broad compliance obligations on large-scale models without adequate specificity," and that she had reached an agreement with the legislature to make certain clarifications, effective January 1, 2027, that addressed her concerns by aligning the RAISE Act with California's Transparency in Frontier Artificial Intelligence Act. The final law thus represents a negotiated settlement between safety advocates and industry.

What remains unresolved is whether state-level transparency frameworks will survive federal challenge. The New York law establishes another distinct state standard for AI model transparency and safety, in tension with the Trump administration's efforts to promote a comprehensive federal policy. Although the regulatory landscape is uncertain, companies should continue to comply with applicable state AI laws: the Executive Order itself does not overturn existing state law, which only an act of Congress or the courts can do, and until the relevant legal challenges are resolved, state laws remain enforceable and noncompliance carries potential penalties. The next critical juncture will be whether the DOJ's AI Litigation Task Force successfully challenges the law in court, or whether Congress passes federal preemption legislation.

Left says: State Senator Gounardes called the RAISE Act an "enormous win" for community safety while criticizing big tech companies for prioritizing profits over public safety, asserting that innovation and safety are compatible. Progressives view the transparency standard as a necessary foundation for accountability, though some advocates believe the final amendments weakened the original legislation.
Right says: Tech companies and the Trump administration prefer federal preemption over state-by-state regulation, with major AI developers supporting a national standard while accepting state laws as an interim necessity. Industry pressure successfully scaled back the original legislative proposal.
✓ Common Ground
Both progressive and business-aligned voices acknowledge that alignment between New York and California on AI safety eases the burden of a perceived regulatory patchwork, with OpenAI and Anthropic expressing support for having similar legislation in two large state economies.
Tech companies and progressive policymakers agree that frontier model developers (OpenAI, Anthropic, Google, Meta) have become substantial enough to warrant some form of oversight and public accountability.
Both industry participants like Anthropic and regulatory bodies recognize the need for "rigorous transparency frameworks," with Anthropic noting that governments "start to require frontier AI developers to create and publish frameworks for assessing and managing catastrophic risks".
There is broad recognition across the political spectrum that the absence of federal AI standards has created urgency for state action, even among those who prefer federal preemption.

◈ Tone Comparison

Left-leaning framing uses moral and populist language (Gounardes labeled tech leaders "oligarchs" who put "profits ahead of our safety"), positioning the debate as protection of the public interest. Right-leaning framing emphasizes economic competitiveness and freedom to innovate, with the Trump administration warning that "excessive" regulation creates a regulatory "patchwork" that threatens US AI dominance. Both sides claim to value innovation, but the left presents it as compatible with safety mandates while the right sees mandates as impediments.