
78 State Chatbot Bills. 58 Lawsuits. And a Federal Deadline Eight Days Away.

· Don Ho · 5 min read

Last updated: March 3, 2026


As of early 2026, 78 chatbot-related bills are active across 27 U.S. states, chatbot wiretap lawsuits grew 1,400% from 2021 to 2025, and federal preemption efforts have failed to slow the momentum. The result is a multi-state compliance crisis for any company operating a customer-facing AI chatbot.

On March 11, 2026, two federal deadlines converge. The Secretary of Commerce must identify which state AI laws the Trump administration considers “burdensome” to national AI leadership. The FTC must issue a policy statement on when state AI laws may be preempted by federal action. Those two documents could reshape the compliance landscape that in-house counsel have been building toward for two years.

Meanwhile, the current state of play is this: 78 chatbot-related bills across 27 states in the first weeks of 2026 alone, adding to an already chaotic AI regulatory patchwork. An analysis of 284 deployer-facing AI lawsuits shows chatbot wiretap claims grew from 2 matters in 2021 to 30 in 2025. California’s SB 243 took effect January 1, 2026. Tennessee just passed a standalone law making it a Class A felony (15 to 60 years) to knowingly train AI to encourage suicide. Washington’s SB 5984 passed the Senate with treble damages of up to $25,000 per chatbot disclosure violation.

If your company runs a customer-facing AI chatbot, the exposure is real and it is accelerating.

Three Regulatory Models, One Product

State chatbot legislation is not uniform, and the differences matter operationally. Three distinct frameworks have emerged.

Disclosure-first. California’s SB 243 requires operators to disclose to users that they are interacting with AI, provide periodic break reminders for minor users, and implement “reasonable measures” to prevent harmful content. The private right of action is $1,000 per violation. Washington’s SB 5984 follows a similar framework but with hourly disclosure intervals for minors and treble damages up to $25,000.

Use-restriction. New York’s S9051 goes further. For minor users specifically, it prohibits chatbots from using personal pronouns, expressing personal opinions, simulating emotional relationships, or prioritizing flattery over safety. The compliance challenge here is not procedural. It requires modifying what the system outputs for a user segment that is often difficult to reliably identify.

Criminal prohibition. Tennessee’s SB 1493 creates felony liability for knowingly training AI to encourage suicide or simulate human emotional relationships. SB 1580, which the Tennessee Senate passed unanimously, prohibits AI systems from representing themselves as qualified mental health professionals. Tennessee is now the first state with a standalone AI mental health prohibition.

Oregon’s SB 1546, which advanced to the Senate floor on February 12, goes further than any law currently on the books. The specifics are still developing, but the trajectory is clear: states are moving from soft disclosure requirements to hard behavioral prohibitions with criminal exposure.

The Litigation Wave Is Already Here

The legislative activity gets the headlines. The lawsuits are the actual threat.

Chatbot wiretap claims are filed under the Electronic Communications Privacy Act and state wiretap statutes. The theory: when a company’s chatbot transmits conversation data to a third-party AI vendor (the underlying model provider), that transmission may constitute interception without user consent. Plaintiffs’ lawyers do not need to prove harm. They need to prove interception.

From 2 matters in 2021 to 30 in 2025 is a 1,400% increase in four years. That is not a trend that reverses itself.

The lawsuits are not all coming from fringe plaintiffs’ firms. Consumer protection practices at major litigation shops have built teams around AI chatbot claims. The case law is developing fast, and the early defendants are companies that never thought their customer service chatbot was a litigation risk.

The common fact pattern: a company deploys a chatbot powered by an external LLM (often OpenAI, Anthropic, or a vertical model). Conversation data gets transmitted to that vendor for inference. The terms of service did not clearly disclose this. A wiretap claim follows. The same platform dependency risk that hits developers also creates legal exposure when vendor data flows aren’t disclosed to end users.

What March 11 Changes (Or Doesn’t)

The Trump administration’s executive order on AI (January 2025) directed the Secretary of Commerce to identify state AI laws that burden national AI development. It also directed the FTC to clarify when federal standards preempt state regulation. Those deliverables are due March 11.

The administration has been broadly skeptical of state AI regulation. The White House AI framework has already signaled its posture. The Commerce report will almost certainly flag aggressive state laws like Oregon’s SB 1546 and New York’s S9051 as targets for federal preemption arguments.

But here is the complication: the executive order explicitly carved out “child safety protections” from federal preemption. Nearly every active chatbot bill is framed around child safety. California’s SB 243 is a chatbot safety bill with youth-specific protections. Washington’s SB 5984 is built around minors. Tennessee’s laws focus on AI interactions with vulnerable users. The carveout for child safety creates a category of state law that may survive federal preemption pressure entirely.

What this means practically: do not assume March 11 produces a clean federal preemption of state chatbot laws. Even if Commerce flags certain laws, preemption requires either express statutory language or an irreconcilable conflict with federal law. Neither exists right now. State law enforcement can continue while federal preemption gets litigated for years.

Why a Disclosure Banner Is Not a Compliance Program

Most companies approaching AI chatbot compliance treat it as a terms-of-service and disclosure problem. Add a banner. Update the privacy policy. Done.

That approach covers California’s SB 243 imperfectly and leaves New York and Tennessee exposure entirely unaddressed.

New York’s use restrictions require the system to behave differently for minor users. That means age verification (or a conservative assumption that all users could be minors in certain contexts) and a modified system prompt or output filter that changes what the chatbot says. That is an engineering requirement, not just a legal one.
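A minimal sketch of what that engineering requirement might look like in practice. This is illustrative only: the function names (`build_system_prompt`, `is_verified_adult`) and prompt text are hypothetical, and the restriction list paraphrases the S9051-style prohibitions described above, not the statute’s exact terms.

```python
# Hypothetical sketch: conservative minor-user handling for a chatbot
# system prompt. All names and prompt text are illustrative, not a
# statement of what any statute actually requires.

BASE_PROMPT = "You are a customer support assistant."

# Paraphrased S9051-style restrictions for minor users: no personal
# pronouns for the bot, no personal opinions, no simulated emotional
# relationships, no flattery over safety.
MINOR_RESTRICTIONS = (
    " Do not refer to yourself with personal pronouns."
    " Do not express personal opinions."
    " Do not simulate friendship or an emotional relationship with the user."
    " Prioritize user safety over agreeableness."
)

def build_system_prompt(is_verified_adult: bool) -> str:
    """Return the system prompt for a session.

    Conservative posture: unless the user is affirmatively verified as
    an adult, apply the restricted prompt, since minors are often
    difficult to reliably identify.
    """
    if is_verified_adult:
        return BASE_PROMPT
    return BASE_PROMPT + MINOR_RESTRICTIONS
```

The design choice worth noting is the default: unverified users get the restricted behavior, because the cost of over-restricting an adult session is far lower than the cost of an unrestricted session with a minor.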

Tennessee’s felony standard for knowingly training AI to encourage suicide requires companies to think about their fine-tuning and RLHF processes, not just their deployed product. If your team contributed to training data or custom fine-tuning of a model that surfaces harmful content to vulnerable users, the word “knowingly” is doing significant legal work.

And Oregon’s SB 1546 is still advancing. GCs whose companies operate at scale in Oregon need to be tracking it specifically.

What to Do Now

Map your chatbot data flows. For every customer-facing AI chatbot, document where conversation data goes. If it transmits to an external LLM vendor, that is your wiretap exposure. Review whether your disclosure language covers that transmission clearly and conspicuously.
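One way to make that mapping exercise concrete is a simple structured inventory that flags the fact pattern described above: conversation data leaving your infrastructure without a matching user-facing disclosure. This is a hypothetical sketch, not legal advice; the field names and the check are assumptions for illustration.

```python
# Hypothetical data-flow inventory for a customer-facing chatbot.
# Field names and the exposure check are illustrative only.
from dataclasses import dataclass

@dataclass
class DataFlow:
    source: str               # where conversation data originates
    destination: str          # e.g. an external LLM vendor, analytics
    external: bool            # does the data leave your infrastructure?
    disclosed_to_users: bool  # is this transmission clearly disclosed?

def wiretap_exposure(flows: list[DataFlow]) -> list[DataFlow]:
    """Flag flows matching the common wiretap fact pattern:
    conversation data transmitted externally with no clear,
    conspicuous user-facing disclosure."""
    return [f for f in flows if f.external and not f.disclosed_to_users]

flows = [
    DataFlow("web chat widget", "external LLM vendor", True, False),
    DataFlow("web chat widget", "internal analytics", False, False),
]
```

Here only the first flow would be flagged: it leaves your infrastructure and is not disclosed. The second stays internal, so it falls outside the interception theory even though it is also undisclosed.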

Run a state law inventory. If your company does business in California, Washington, Tennessee, New York, or Oregon, each of those states has active chatbot legislation that requires specific compliance actions now or in the near term. Assign each law to a named owner with a compliance deadline.

Assess your minor-user exposure. If your chatbot is accessible to users under 18 in any context, California’s $1,000-per-violation private right of action is already live. Washington’s treble damages provision is moving. Build your minor-user detection and behavioral modification plan before those laws become enforcement reality.

Watch March 11 but don’t bet on it. The Commerce report and FTC statement may shift the regulatory landscape. They may not. Plan as if state laws stand. If federal preemption materializes, you can ease off. The reverse is not true.

Audit your chatbot vendor contracts. If your LLM vendor is transmitting or retaining conversation data in ways you haven’t disclosed to users, you have a wiretap exposure that starts in the contract. Get clarity on data flow, retention, and training data practices from every vendor in your stack.

The companies that are going to get hit hardest in 2026 are the ones treating chatbot compliance as a one-time disclosure exercise. It is a live, multi-state, multi-theory liability problem that is getting bigger every week.


78 bills across 27 states, and your chatbot compliance plan is a banner that says “I’m an AI.” Kaizen AI Lab builds multi-state chatbot compliance frameworks that actually hold up under enforcement. Talk to us.
