Musk's xAI Just Filed a Constitutional Challenge Against Colorado's AI Law. Every GC Should Be Watching.
By Don Ho, Esq. | April 12, 2026
In April 2026, xAI filed the first major constitutional challenge to a state AI anti-discrimination law, arguing that Colorado's SB 24-205 violates the First Amendment by compelling AI developers to embed the state's preferred views in model outputs. If that theory succeeds, it could invalidate AI bias regulations nationwide. The complaint seeks a declaratory judgment that the law is unconstitutional and an injunction barring Attorney General Phil Weiser from enforcing it before the June 30 effective date.
The law requires developers of "high-risk" AI systems (those used in employment, housing, education, health care, and financial services) to build safeguards against algorithmic discrimination. It imposes disclosure requirements and risk-mitigation obligations and gives the AG enforcement authority. The law has had a troubled path to enforcement: the state already delayed the original effective date after companies struggled to determine what compliance actually required. xAI contends the statute forces developers to embed the state's preferred viewpoints in AI outputs and penalizes speech the state disfavors.
xAI is the first major tech company to challenge a state AI anti-discrimination law on constitutional grounds. It will not be the last.
What xAI Is Actually Arguing
The complaint makes three core claims.
First, xAI argues SB 24-205 compels speech. The company says the law would require it to alter Grok, its flagship AI model, to produce outputs that conform to Colorado’s views on fairness and equity rather than being “maximally truth seeking.” In xAI’s framing, the law dictates what an AI model can and cannot say, which is a form of government-compelled speech.
Second, xAI claims the law is unconstitutionally vague. The statute uses terms like “algorithmic discrimination” and “high-risk” without definitions precise enough for developers to know what compliance actually requires. When the penalty for getting it wrong is enforcement by the AG, vagueness becomes a due process problem.
Third, xAI argues the law impermissibly burdens interstate commerce. Because AI models are developed, trained, and deployed nationally, a single state’s requirements effectively regulate development activity outside its borders. Colorado can’t force a company headquartered elsewhere to redesign a model that serves 50 states just to satisfy one state’s regulatory preferences.
Why This Matters Beyond Colorado
Colorado is not operating in isolation. At last count, 25 state AI laws have been enacted in 2026. Illinois and Texas have employment AI laws that took effect January 1. California’s regulatory agencies have issued similar requirements for employers deploying AI systems, and the California AG is already pursuing xAI on separate enforcement grounds. The Transparency Coalition’s April 10 legislative update shows 19 new AI laws passed in the last two weeks alone.
The xAI lawsuit is testing the constitutional floor for all of them.
If the First Amendment argument gains traction, it could invalidate disclosure and anti-discrimination requirements across multiple states. If the Commerce Clause argument succeeds, it would support the White House position that AI regulation belongs at the federal level, not in a patchwork of state legislatures.
The practical problem is timing. SB 24-205 takes effect June 30. The court will likely need to decide the injunction question within weeks. Whatever the ruling, it will send a signal to every other state with pending AI legislation.
The Patchwork Problem Is Real
xAI’s filing leans heavily on the federal preemption angle, citing White House executive orders and statements from AI advisor David Sacks criticizing state-by-state regulation. The argument: a Colorado-specific compliance requirement for an AI model that operates nationally is inherently unworkable.
There is something to this. An AI developer building a model for national deployment now needs to track requirements in Colorado, Illinois, Texas, California, Oregon, and potentially a dozen more states by year-end. Each state defines “high-risk” differently. Each state has different disclosure triggers. Each state has different enforcement mechanisms. The state-by-state regulatory patchwork is exactly the problem that both xAI and the White House are pointing to.
For a large company like xAI (which recently merged with SpaceX), the cost of compliance is manageable. For a legal tech startup with 15 employees building an AI contract review tool, the patchwork is a genuine barrier to market entry.
But the counterargument is just as strong. California Attorney General Rob Bonta has pointed out that Congress has spent years failing to pass comprehensive AI legislation. State AGs are filling a vacuum that Washington created. If states can’t regulate AI, and Congress won’t, the result is no regulation at all. Meanwhile, states are already using AI in their own compliance enforcement, which makes the irony even sharper.
The First Amendment Angle Is the One to Watch
The Commerce Clause and vagueness arguments are standard regulatory challenge fare. The First Amendment claim is the one that could reshape the field.
xAI is arguing that an AI model’s outputs are a form of speech, and that requiring those outputs to conform to anti-discrimination standards is compelled speech. If a court agrees, it would mean the government cannot require AI systems to produce non-discriminatory outputs without meeting strict scrutiny, the highest constitutional standard.
That’s a radical position. It would effectively constitutionalize AI model design decisions and make algorithmic bias much harder to regulate through any legislative mechanism.
Courts haven’t settled whether AI outputs qualify as protected speech. The Supreme Court has extended First Amendment protection to algorithmic curation in some contexts but hasn’t addressed the specific question of whether training an AI model constitutes expressive activity protected by the Constitution.
If xAI wins on this theory, the implications go well beyond Colorado. Every state AI law with anti-discrimination provisions would face the same challenge.
What to Do Now
If you are a GC or compliance officer at a company that deploys AI in hiring, lending, insurance underwriting, or any other “high-risk” category, do not wait for this lawsuit to resolve before acting.
First, map your state-by-state exposure. Know which laws apply to your AI deployments today and which take effect in the next six months. Colorado's SB 24-205 takes effect June 30. Other states are moving just as fast.
Second, build your compliance documentation now. Even if the xAI challenge succeeds in Colorado, other state laws will survive. The companies that get caught flat-footed are the ones that treated compliance as optional until the enforcement action arrived.
Third, watch the injunction ruling. If the court grants a preliminary injunction blocking SB 24-205, expect a wave of similar challenges in other states. If the court denies the injunction, expect accelerated compliance deadlines as the law goes live and the AG begins enforcement.
Fourth, do not assume federal preemption will save you. Congress is not close to passing a comprehensive AI law. The White House AI framework proposes preemption but has no enforcement mechanism. State regulation is the operating reality for 2026 and likely 2027. Build your compliance infrastructure for the world that exists, not the one the White House wishes existed.
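As a rough illustration of the first step, mapping state-by-state exposure, a minimal tracker might look like the sketch below. The statutes and effective dates mirror those cited in this article; the category coverage and everything else is a simplified placeholder, not legal data, and a real assessment would be built with counsel.

```python
from datetime import date

# Illustrative exposure map. Statute names and effective dates are the ones
# cited in this article; the category lists are simplified placeholders.
STATE_AI_LAWS = [
    {"state": "Colorado", "law": "SB 24-205", "effective": date(2026, 6, 30),
     "categories": {"employment", "housing", "education",
                    "health care", "financial services"}},
    {"state": "Illinois", "law": "employment AI law",
     "effective": date(2026, 1, 1), "categories": {"employment"}},
    {"state": "Texas", "law": "employment AI law",
     "effective": date(2026, 1, 1), "categories": {"employment"}},
]

def upcoming_obligations(deployment_categories, as_of):
    """Return laws that cover any of our AI use cases, each flagged by
    whether it is already in effect on the given date."""
    hits = []
    for law in STATE_AI_LAWS:
        if law["categories"] & set(deployment_categories):
            hits.append({**law, "in_effect": law["effective"] <= as_of})
    return hits

# Example: a company using AI in hiring, assessed in mid-April 2026.
for hit in upcoming_obligations({"employment"}, date(2026, 4, 12)):
    status = "in effect" if hit["in_effect"] else f"effective {hit['effective']}"
    print(f"{hit['state']} {hit['law']}: {status}")
```

The point of even a toy structure like this is that each state becomes a row with its own trigger date and covered categories, which is exactly the patchwork problem described above.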
Whether Colorado’s law survives or not, AI governance obligations are multiplying.
The xAI lawsuit is the opening shot in what will be a multi-year constitutional battle over who gets to regulate AI. The answer will determine whether AI companies operate under 50 different regulatory regimes or one. For every company deploying AI in high-stakes decisions, the outcome changes everything.