The White House Wants to Kill State AI Laws. Here's What That Actually Means.

· Don Ho

Last updated: April 11, 2026



The White House National Policy Framework for AI, released March 20, 2026, recommends that Congress broadly preempt state AI laws across three categories: regulation of AI development, developer liability for third-party misuse of their models, and requirements on AI-assisted activities that wouldn't apply to the same activities performed without AI. The 30-page document reads like a wishlist for the tech industry, and the preemption recommendation is the one that should alarm every state legislator, general counsel, and compliance officer in the country. Colorado's AI Act, California's SB 53, New York's proposed chatbot liability bill: all of them are in the crosshairs. The patchwork of state AI laws that companies have spent the last two years navigating? The White House wants Congress to flatten it.

What the Framework Actually Says

The document lays out three specific categories of state regulation that the White House wants Congress to block.

First, states should not “regulate AI development.” That’s a direct shot at California’s SB 53, which requires large AI companies to publish frontier AI frameworks describing their risk management processes and report critical safety incidents to California’s Office of Emergency Services. The framework takes the position that regulating how AI models are built is a federal matter, not a state one.

Second, states should not “penalize AI developers for a third party’s unlawful conduct involving their models.” Colorado’s AI Act creates a duty of care for AI developers whose systems make consequential decisions, requiring them to protect consumers from discrimination risk even in downstream use cases. (xAI is already suing Colorado over that law on First Amendment grounds.) California passed a law last year saying that in civil cases alleging harm from AI, a defendant cannot argue that “the artificial intelligence autonomously caused the harm.” Both of these state-level liability expansions would be neutralized.

Third, states should not “unduly burden Americans’ use of AI for activity that would be lawful if performed without AI.” This echoes the “Right to Compute” bills advancing through several state legislatures, framing AI usage as an extension of existing freedoms. Colorado’s AI Act, which requires annual impact assessments and risk management programs for companies using AI in consequential decisions like hiring, lending, and healthcare, is the obvious target. The argument: if a human can make a lending decision without filing an impact assessment, why should an AI-assisted version of the same decision require one?

The Bipartisan Support That Makes This Dangerous

This isn’t a partisan proposal sitting in a drawer. House Speaker Mike Johnson endorsed it. Sen. Ted Cruz, who chairs the Senate Commerce Committee, backed it. Sen. Maria Cantwell, the ranking Democrat on that committee, said the framework “identifies key areas to address.” When the Speaker, the chair, and the ranking member all express support for the same legislative direction, the probability of action before the 2026 midterms goes up significantly. Meanwhile, at the federal enforcement level, the FTC is already building AI regulation through lawsuits, not rulemaking.

Michael Kratsios, the White House’s science and technology policy adviser, went further in a recent interview. He suggested the administration would also target state laws “banning particular verticals,” specifically referencing New York’s SB 7263, which would create liability for chatbot operators engaging in the unauthorized practice of law, medicine, or other licensed professions. Nevada and Illinois have already enacted laws extending unlicensed-practice prohibitions to AI chatbots. OpenAI is already getting sued under exactly these theories. Under the framework, those laws could be preempted too. This is the same tension playing out in California, where industry groups are fighting state AI bills on cost and competitiveness grounds.

The Carve-Outs Are Narrower Than They Look

The framework does include concessions. States would retain authority to enforce “laws of general applicability” against AI developers and users, including laws protecting children, preventing fraud, and protecting consumers. They’d keep zoning authority and the ability to regulate government use of AI in law enforcement and public services.

The phrase “general applicability” is the escape hatch, and it’s smaller than it appears. In legal usage, it typically refers to laws that apply broadly across conduct and industries rather than singling out a particular subject or technology. A general consumer protection statute would qualify. A law specifically written to regulate AI chatbots probably would not. That means states could still enforce their existing fraud and consumer protection frameworks against AI companies, but legislatures couldn’t write new laws specifically designed to address AI-specific harms without running into preemption.

The child safety carve-out has similar problems. The framework preserves state enforcement of “generally applicable laws protecting children, such as prohibitions on child sexual abuse material, even where such material is generated by AI.” But AI-focused child safety laws, like state bills regulating chatbot interactions with minors, could be preempted. This contradicts the executive order that created the framework, which explicitly said the federal framework must not propose preempting “otherwise lawful State AI laws” related to protecting children.

Why This Matters for Every GC and Compliance Officer

If you’re a general counsel at a company deploying AI in hiring, lending, insurance, or healthcare, here’s the practical problem. Colorado’s AI Act takes effect later this year. California’s automated decision-making regulations are live. You’ve spent the last 12 months building compliance programs for these state requirements. The White House just told Congress to preempt them.

That does not mean you should stop compliance work. Legislation takes time. The framework is a recommendation, not a law. But you should be tracking three things.

Watch the legislative calendar. If Cruz and Cantwell move a bill through the Commerce Committee before the midterms, preemption could become law faster than most people expect. The framework gives them a detailed blueprint.

Assess your state-level exposure. Map every state AI law that applies to your operations. Colorado, California, Illinois, Nevada, and New York are the current leaders. If federal preemption passes, some of those obligations disappear. If it doesn’t, enforcement is coming.

Don’t bet on preemption as a compliance strategy. Companies that slow-walk state compliance because they expect federal preemption are taking a calculated risk. Colorado’s AI Act has an enforcement date later this year. If preemption legislation doesn’t pass until late 2026, you’ve accrued months of potential non-compliance exposure. Build for the strictest standard. If preemption comes, you’ll be over-compliant. That’s a much better problem than the alternative.

The White House framework is the clearest signal yet that the federal government wants to consolidate AI regulation at the national level. Whether that consolidation protects consumers or protects the industry depends entirely on what Congress writes. Based on what’s in this framework, the industry is getting most of what it asked for.


Don Ho, Esq. is Founder & CEO of Kaizen AI Lab, advising companies on operational growth strategies and the legal aspects of AI integration in their businesses.