Colorado Introduces Compromise AI Law to Replace Original Regulation
Colorado introduced SB 189, a compromise bill that drops requirements for AI companies to disclose how their systems work, while maintaining consumer notification and appeal rights, and delays implementation to January 2027.
Objective Facts
Colorado introduced Senate Bill 189 on Friday, a compromise measure that would drop the requirement that companies developing and deploying AI disclose how their systems make decisions on hiring, loans, and housing, while still requiring companies to notify consumers when AI is used in such consequential decisions. The bill also delays the law's implementation to January 2027 from June 2026. After months of closed-door negotiations, top Democrats are backing the proposal, which closely resembles a draft released by an AI policy group convened by Governor Jared Polis on the recommendation of the Colorado Chamber of Commerce. Dennis Dougherty, who leads the Colorado AFL-CIO and the People for Responsible Technology coalition, said the bill provides a path to hold developers and businesses accountable. Bryan Leach, CEO of Ibotta, called it a marked improvement over the original 2024 bill and a much more practical approach.
Left-Leaning Perspective
Senate Majority Leader Robert Rodriguez, a Denver Democrat who authored the original 2024 AI law, characterized SB 189 as 'more of a notice bill' than the more comprehensive approach he originally crafted. The People for Responsible Technology, a coalition led by Dennis Dougherty of the Colorado AFL-CIO that includes AARP Colorado, the ACLU of Colorado, and the Colorado Education Association, said it is 'cautiously optimistic' about the bill, stating, 'It provides a path to hold developers and businesses using AI accountable when the technology makes consequential decisions for everyday Coloradans.' Dougherty emphasized that the coalition would keep watch on disclosures to workers, patients, and consumers, asserting that 'Coloradans deserve transparency and accountability when Big Tech affects our lives.' The left-leaning critique centers on what was eliminated: the affirmative duties requiring developers and deployers to actively prevent discrimination, and the upfront disclosure requirements covering an AI system's purpose, data sources, and the extent of personal information being processed. The original 2024 law sought to make developers and deployers undertake comprehensive assessments of discrimination risks and disclose potential algorithmic problems, whereas SB 189 focuses on ensuring people know AI is being used and allowing them to correct information in adverse decisions. The original SB-205 required AI companies and businesses to conduct risk assessments, take reasonable steps to protect users from discrimination, and publish detailed information about how AI is used in decision-making. Left-leaning coverage notes these provisions are significantly weakened under SB 189, though consumer advocates accepted the compromise given industry pressure and the threat of federal preemption.
Right-Leaning Perspective
Bryan Leach, CEO of Ibotta, the Denver-based shopping app, said SB 189 'is a marked improvement over the original bill that was passed' and called it 'a much more practical approach,' though he remains unhappy with some measures. The bill is described as the culmination of two years of effort to fix a 2024 law viewed by technology leaders and others as too burdensome, with both a task force and a governor-appointed working group developing solutions that were largely incorporated into the new bill. Industry's core objection concerns how much responsibility the original law places on companies developing and deploying AI, following months of pushback from the AI industry over Colorado's 2024 requirement that companies check for and reduce bias. Leach specifically objects to the bill's expiration of the 'right-to-cure' provision after three years, arguing, 'It's reasonable for AI deployers to have a chance to fix a violation before facing fines and other penalties for a deficiency they might not be aware of.' The compromise approach focuses on transparency and consumer rights rather than requiring companies to proactively prevent algorithmic discrimination through risk management programs and impact assessments. During earlier failed negotiations, Senate Majority Leader Rodriguez acknowledged that while business, consumer protection advocates, labor, and educators came together, 'big tech didn't like the bill because they don't like the liability.'
Deep Dive
When the Colorado Artificial Intelligence Act passed in May 2024, it made national headlines as the first of its kind in the U.S., representing a comprehensive attempt to govern 'high-risk' artificial intelligence systems across various industries before they could cause real-world harm. Governor Jared Polis signed it reluctantly, but less than a year later, the governor was supporting a federal pause on state-level AI laws. SB 189 is the culmination of two years of efforts to fix the 2024 law, which remains the most comprehensive AI regulation in the nation but is viewed by technology leaders and others as too burdensome. The core dispute centers on liability: prior negotiations fell apart mainly over how liable AI developers and deployers should be when their technology leads to discrimination. In April, Elon Musk's AI company xAI sued Colorado to block the original law, and the U.S. Department of Justice sought to intervene in support of xAI. Musk's lawyers argued the law is 'unconstitutionally vague' and 'invites arbitrary enforcement,' while the DOJ argued the law 'constrains the information that AI systems convey, obligates AI developers and deployers to discriminate, and enforces the state-mandated discrimination with onerous policy, assessment, and disclosure requirements that will disproportionately burden small businesses and start-ups.' The compromise in SB 189 substantially addresses industry concerns while maintaining Rodriguez's core requirement that consumers be notified when AI affects consequential decisions. Rodriguez acknowledged that he originally wanted a more stringent framework, including developer testing of algorithms for discrimination, but believes the liability requirements that remain will serve as incentives for companies to ensure their products don't cause harm.
The legislature has nine days remaining in its 120-day session (ending May 13) to pass SB 189, with the first committee hearing set for May 6 before the Senate Business, Labor, and Technology Committee. The critical question going forward is whether SB 189's shift from proactive risk management to reactive transparency adequately protects Colorado consumers from algorithmic discrimination, or whether it represents an industry victory that largely guts the original law's protective intent.