
Colorado Just Delayed Its AI Law Because Nobody Could Figure Out How to Comply

Don Ho

Last updated: February 7, 2026

Colorado SB 24-205, the Colorado AI Act, was supposed to be the first comprehensive state-level AI anti-discrimination law in the country. Signed by Governor Jared Polis in May 2024, it was scheduled to take effect on February 1, 2026. Then Colorado pushed back the enforcement date because the companies it applied to couldn’t figure out how to comply: businesses could not determine which systems qualified as “high-risk,” no impact assessment methodology was specified, and developer disclosure requirements conflicted with trade secret protections.

Governor Polis had signaled this might happen when he signed the bill. His signing statement was unusual: he said the law was “well-intentioned” but expressed concern that it might be “too burdensome” for businesses and could stifle innovation. He asked the legislature to revisit it before it took effect. They did. They delayed it.

What the Law Requires

The Colorado AI Act targets “high-risk AI systems” that make or substantially contribute to “consequential decisions” about consumers. Consequential decisions include determinations about employment, education, financial services, essential services, housing, insurance, and legal matters.

Deployers (companies using AI systems) must implement a risk management policy, conduct impact assessments before deploying high-risk AI, provide consumers with notice that AI is being used in consequential decisions, allow consumers to appeal AI-driven decisions, and report discovered instances of algorithmic discrimination to the Colorado Attorney General.

Developers (companies that build AI systems) must provide deployers with documentation about the system’s intended uses, known limitations, and the data used for training, as well as a summary of the types of discriminatory outcomes the system has been tested against.

Why Compliance Was “Impossible”

The business community’s objections fell into three categories.

First, the definitions were too vague. What counts as a “high-risk AI system”? The law defines it by the domain of application (employment, insurance, etc.) rather than by the technical characteristics of the system. A simple rules-based eligibility checker might qualify as a “high-risk AI system” if it operates in a covered domain, while a complex machine learning model making low-stakes recommendations might not. Companies couldn’t determine which of their systems were covered. (This is the same definitional problem that plagues every AI law on the books.)

Second, the impact assessment requirements were undefined. The law requires deployers to conduct impact assessments, but it didn’t specify what an adequate impact assessment looks like. No template, no minimum standards, no safe harbor for following a particular framework. Companies were told to assess the risk of algorithmic discrimination without guidance on what methodology to use.
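
For a sense of what that leaves open, here is one hypothetical shape an assessment record could take. None of these fields come from the statute or any Colorado guidance; they are simply the kinds of things an assessment would plausibly have to capture.

```python
from dataclasses import dataclass, field

# Hypothetical impact-assessment record. The Colorado AI Act requires an
# assessment but does not prescribe its contents, so every field here is an
# assumption, not a statutory requirement.
@dataclass
class ImpactAssessment:
    system_name: str
    consequential_decision: str          # e.g. "employment screening"
    intended_use: str
    data_categories: list[str]           # categories of input data, not raw records
    known_limitations: list[str]
    discrimination_tests_run: list[str]  # e.g. "selection-rate comparison by sex"
    mitigations: list[str] = field(default_factory=list)
    reviewed_on: str = ""                # ISO date of the last review
```

Whether a record like this would satisfy the Attorney General is exactly the question the law leaves open.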

Third, the developer disclosure obligations were commercially impractical. AI vendors were expected to provide detailed information about training data, known limitations, and discrimination testing. Many AI systems (including large language models from OpenAI, Anthropic, and Google) are built on proprietary training data that the vendors consider trade secrets. Requiring disclosure of training data composition creates a conflict between compliance and intellectual property protection that neither the law nor its implementing regulations resolved.

The xAI Lawsuit

While the legislature was debating the delay, Elon Musk’s xAI filed a federal lawsuit challenging the Colorado AI Act on First Amendment grounds. The complaint argues that requiring AI developers to disclose information about their systems’ outputs is compelled speech, and that the law’s restrictions on AI-driven decisions burden protected expression.

The lawsuit (filed in the U.S. District Court for the District of Colorado) adds constitutional uncertainty to an already confused regulatory picture. Even if the law’s enforcement date is delayed, the constitutional questions remain. A federal court ruling that AI output constitutes protected speech would have implications far beyond Colorado.

What the Delay Tells Us About AI Regulation

Colorado’s experience is a preview of what happens when AI legislation moves faster than AI compliance infrastructure. It’s one chapter in a growing state-by-state regulatory patchwork that’s creating compliance chaos nationally. Regulators know AI discrimination is a problem. The EEOC, FTC, CFPB, and HUD have all flagged it. But translating that concern into workable compliance requirements is harder than passing a law.

The core challenge: you can’t audit what you can’t define, and you can’t define what you don’t understand. Most legislators voting on AI bills don’t understand how machine learning works, what training data looks like, or how algorithmic bias manifests in production systems. They write laws based on policy goals (prevent discrimination) without the technical specificity needed for compliance (test for disparate impact using these methodologies on these protected characteristics at this threshold).
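
To make that gap concrete, here is a minimal sketch of the kind of specificity the law omits. It borrows the four-fifths (80%) rule that the EEOC uses as a rough screen for adverse impact in hiring; Colorado’s statute names no such methodology or threshold, so everything below is illustrative rather than anything the law requires.

```python
# Illustrative only: the Colorado AI Act specifies no methodology or threshold.
# The four-fifths rule is a common screening heuristic, not a statutory test.

def selection_rate(selected: int, total: int) -> float:
    """Share of a group that received the favorable decision."""
    return selected / total if total else 0.0

def disparate_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    """Protected group's selection rate relative to the reference group's."""
    return protected_rate / reference_rate if reference_rate else 0.0

# Hypothetical outcomes from one cycle of an AI-assisted hiring screen.
reference_rate = selection_rate(selected=120, total=400)  # 0.30
protected_rate = selection_rate(selected=45, total=200)   # 0.225

ratio = disparate_impact_ratio(protected_rate, reference_rate)  # 0.75
flagged = ratio < 0.8  # below the four-fifths screen, so investigate further

print(f"impact ratio = {ratio:.2f}, flagged = {flagged}")
```

Even this toy example forces choices the statute never makes: which groups to compare, over what time window, at what threshold, and what a deployer must do when a system is flagged.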

Illinois learned this with BIPA. The biometric privacy law created massive litigation because the statutory damages provision ($1,000 per negligent violation, $5,000 per intentional violation) combined with a private right of action and vague requirements. Companies that were technically compliant still got sued because “compliance” wasn’t clearly defined. Colorado risked the same trajectory.

What to Do Now

Don’t stop preparing. The delay is not a repeal. The Colorado AI Act is still law. When enforcement begins (the new date hasn’t been finalized, but 2027 is likely), companies that started preparing early will be ready. Companies that treated the delay as permission to do nothing will scramble.

Use the NIST AI Risk Management Framework as your baseline. Since Colorado didn’t specify a compliance methodology, the NIST AI RMF is the closest thing to a recognized standard. Adopt it voluntarily now, and you’ll be ahead of whatever Colorado eventually specifies. A 5-layer AI compliance stack built on the RMF gives you a structured starting point.
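
For a concrete starting point, the sketch below maps the RMF’s four core functions (Govern, Map, Measure, Manage) onto the deployer duties the Colorado AI Act actually lists. The grouping is our reading, not something NIST or Colorado has published, so treat it as an assumption you can rearrange.

```python
# Unofficial mapping of Colorado AI Act deployer duties onto the four
# NIST AI RMF core functions. The groupings are assumptions, not guidance
# from NIST or the Colorado Attorney General.
NIST_RMF_TO_COLORADO_DUTIES = {
    "GOVERN": [
        "Adopt and maintain a written AI risk management policy",
        "Assign ownership for oversight of high-risk systems",
    ],
    "MAP": [
        "Inventory AI systems used in consequential decisions",
        "Record each system's intended use, inputs, and affected consumers",
    ],
    "MEASURE": [
        "Complete an impact assessment before deploying a high-risk system",
        "Test for discriminatory outcomes on a recurring schedule",
    ],
    "MANAGE": [
        "Notify consumers when AI is used in a consequential decision",
        "Provide an appeal path for AI-driven decisions",
        "Report discovered algorithmic discrimination to the Colorado AG",
    ],
}

for function, duties in NIST_RMF_TO_COLORADO_DUTIES.items():
    print(function)
    for duty in duties:
        print(f"  - {duty}")
```

Even if Colorado eventually prescribes something different, work organized this way is easy to re-map.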

Colorado delayed because companies couldn’t figure out compliance. The framework exists. Take the ACRA to start.

Inventory your AI systems by domain. Even if the exact definition of “high-risk” gets revised, the covered domains (employment, insurance, lending, housing, education) are unlikely to change. If you use AI in those areas, you’re probably covered. Start the impact assessment process.
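
Below is a sketch of what that inventory could look like. The field names and the domain list are our assumptions, drawn from the covered decision areas above, not a format Colorado has endorsed.

```python
from dataclasses import dataclass

# Working assumption: the decision areas the Act treats as "consequential."
# The statutory wording differs slightly, so adjust this list with counsel.
COVERED_DOMAINS = {
    "employment", "education", "financial_services", "essential_services",
    "housing", "insurance", "legal_services",
}

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    domain: str           # business area the system operates in
    decision_role: str    # "makes", "substantially_contributes", or "advisory"
    impact_assessment_done: bool = False

    @property
    def likely_high_risk(self) -> bool:
        # Coverage turns on the domain and the system's role in the decision,
        # not on how sophisticated the underlying model is.
        return (self.domain in COVERED_DOMAINS
                and self.decision_role in {"makes", "substantially_contributes"})

screener = AISystemRecord(
    name="resume-screener",
    vendor="ExampleVendor",
    domain="employment",
    decision_role="substantially_contributes",
)
print(screener.likely_high_risk)  # True: covered domain, consequential role
```

Note what the record makes obvious: a plain rules-based checker in a covered domain gets flagged, while a sophisticated model making low-stakes recommendations does not. That is the definitional quirk described earlier, captured as a field rather than a debate.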

Watch the xAI lawsuit. If a federal court rules that AI output is protected speech, it reshapes every AI regulation in the country, not just Colorado’s. Track the case and brief your leadership on the possible outcomes.

Colorado tried to be first. Being first is hard. But the issues the law addresses (algorithmic discrimination in high-stakes decisions) aren’t going away. Other states are watching, and they’ll learn from Colorado’s mistakes. States are already using AI in their own enforcement operations, which means the next version won’t just be clearer — it’ll be enforced by the same technology it regulates. The next version will be more specific and harder to dodge.


“The law got delayed” is not a compliance strategy. Kaizen AI Lab builds AI governance programs that survive the next version of the law, not just the current one. Talk to us.

