
Oregon's New AI Chatbot Law Lets Consumers Sue for $1,000 Per Violation. Your Customer Service Bot Might Qualify.

By Don Ho, Esq.

Last updated: April 12, 2026

Oregon enacted SB 1546, which gives consumers a private right of action with $1,000 in statutory damages per violation against operators of AI chatbots that simulate sustained human-like relationships — and the definitions are broad enough to sweep in customer service bots that retain user data across sessions. The law takes effect January 1, 2027.

The bill passed both chambers with only two no votes. Nobody testified against it. And the definitions are broad enough that your company’s customer service chatbot could fall within scope even though the bill was written with AI companions in mind.

What the Law Actually Requires

SB 1546 targets “operators” of “AI companions.” An operator is any person that controls or makes available an AI companion or AI companion platform in Oregon. The law defines an AI companion as a system using artificial intelligence or generative AI that is designed to simulate a sustained, human-like relationship with a user by meeting three criteria.

The system must retain information from prior interactions to personalize engagement. It must ask unprompted questions that suggest or concern emotional topics. And it must sustain an ongoing dialogue concerning matters that are personal to the user.

If your system meets all three, the law imposes disclosure requirements (users must know they’re talking to AI), safety protocols for detecting self-harm and suicidal ideation, and specific protections for minors. Operators must implement reasonable measures to prevent the chatbot from producing content depicting sexually explicit conduct involving minors.

The enforcement mechanism is what separates Oregon from California and New York, which have their own chatbot laws. Oregon gives individual consumers the right to sue with $1,000 in statutory damages for each violation. No need to prove actual harm. No need to exhaust administrative remedies. File suit, prove the violation, collect.

The Definitional Trap for Business Chatbots

Oregon’s legislature wrote SB 1546 to regulate AI companions, the Character.AI and Replika-style products that simulate romantic or emotional relationships. The bill’s sponsor, Senator Lisa Reynolds, presented it as a child safety measure, citing statistics that 28% of U.S. teens use AI chatbots daily and connecting that usage to mental health risks.

The problem is the definition. It is broad enough to sweep in systems that were never intended as emotional companions.

Consider a customer service chatbot deployed by an e-commerce company. Modern chatbots are designed to recognize returning users across sessions, satisfying the first element. Many are programmed to ask proactive questions (“How was your recent purchase?” or “Can I help you find something?”), which could satisfy the second element depending on how a court interprets “suggest or concern emotional topics.” And if the chatbot maintains an ongoing thread about the customer’s preferences, orders, or account issues, that could satisfy the third element of “matters that are personal to the user.”
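To make the triage concrete, here is a minimal sketch of how a compliance team might screen a chatbot against the three-element definition. The `ChatbotProfile` fields and the scoring logic are my own illustrative assumptions, not language from the statute, and this is a rough first-pass heuristic, not legal advice:

```python
from dataclasses import dataclass

@dataclass
class ChatbotProfile:
    """Hypothetical feature flags for a deployed chatbot (illustrative only)."""
    retains_cross_session_data: bool           # element 1: personalizes from prior interactions
    asks_unprompted_emotional_questions: bool  # element 2: unprompted questions on emotional topics
    sustains_personal_dialogue: bool           # element 3: ongoing dialogue on personal matters
    solely_customer_service: bool              # possible statutory exemption

def within_sb1546_scope(bot: ChatbotProfile) -> bool:
    """Rough screen against SB 1546's three-element definition.

    All three elements must be met, and the "solely for customer
    service" exemption may remove an otherwise-covered system.
    A triage heuristic, not a legal determination.
    """
    meets_definition = (
        bot.retains_cross_session_data
        and bot.asks_unprompted_emotional_questions
        and bot.sustains_personal_dialogue
    )
    return meets_definition and not bot.solely_customer_service

# An e-commerce support bot that remembers customers, asks proactive
# follow-ups, and also makes recommendations may screen as in scope:
support_bot = ChatbotProfile(True, True, True, solely_customer_service=False)
print(within_sb1546_scope(support_bot))  # True
```

The point of the sketch is how easily ordinary product features flip each flag to true; the hard, court-contested questions ("emotional topics," "personal to the user," "solely") hide inside what each boolean means.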

The bill does exempt systems that operate “solely” for customer service. But that word “solely” is doing a lot of work. If your customer service bot also makes personalized product recommendations, remembers past conversations, or asks follow-up questions that go beyond the immediate support request, a plaintiff’s attorney could argue it is not “solely” providing customer service.

The bill does not define “emotional topics.” It does not define “personal to the user.” Those ambiguities create litigation risk, and the statutory damages structure guarantees that plaintiffs’ attorneys will test those boundaries. The real-world stories of chatbot harm that motivated this legislation are serious enough that courts may interpret the definitions broadly.

The Private Right of Action Changes the Calculus

Most state AI laws are enforced by attorneys general. That creates a natural bottleneck. AGs have limited resources, competing priorities, and political considerations that filter which cases get pursued.

A private right of action removes the bottleneck entirely. Any consumer in Oregon who interacts with an AI system that arguably meets the definition can file suit. The $1,000 per-violation statutory damages provision means class action attorneys will look at this law and see a volume play. If a chatbot serves 100,000 Oregon users and fails to disclose it’s an AI, the theoretical exposure is $100 million before you count attorney’s fees.
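The exposure math above is simple multiplication, which is exactly why it scales so badly. A quick sketch (the per-violation count is the contested variable in real litigation; one violation per user is the conservative assumption used here):

```python
def statutory_exposure(users: int, violations_per_user: int = 1,
                       damages_per_violation: int = 1_000) -> int:
    """Theoretical exposure under a $1,000-per-violation statutory damages scheme.

    Illustrative only: courts may count "violations" per user, per
    session, or per message, which changes the result by orders of magnitude.
    """
    return users * violations_per_user * damages_per_violation

# 100,000 Oregon users, one disclosure violation each:
print(statutory_exposure(100_000))  # 100000000, i.e. $100 million before attorney's fees
```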

Idaho enacted a similar chatbot bill the same week (S 1297), but without a private right of action. That distinction matters enormously. Idaho’s law will be enforced only when the attorney general decides to act. Oregon’s will be enforced whenever a plaintiff’s attorney decides the math works.

Nebraska is advancing its own chatbot bill through the unicameral legislature. Hawaii, Missouri, and Tennessee have bills moving through committees. Louisiana and South Carolina introduced new chatbot bills this month. The question for every GC tracking this space: will other states follow Oregon’s private right of action model or Idaho’s AG enforcement model?

What 25 New AI Laws in Four Months Tells You

Oregon’s chatbot law is one data point in a much larger pattern. Since January, states have enacted 25 new AI laws. Another 27 bills have passed both chambers and are awaiting gubernatorial signatures. The pace is accelerating, not slowing.

The laws fall into distinct categories. High-risk system regulations (Colorado, Illinois, Texas). Chatbot disclosure and safety laws (Oregon, Idaho, California, New York). Employment AI restrictions. Health care AI requirements (Tennessee just signed one with its own private right of action). AI personhood prohibitions. Pricing algorithm regulations.

Each category creates a different compliance obligation. Each state defines its terms slightly differently. And each enforcement mechanism, whether AG enforcement, private right of action, or both, changes the risk profile for companies operating in that state.

The practical result is that any company deploying consumer-facing AI nationally needs a state-by-state compliance map that did not exist 12 months ago. The regulatory patchwork is growing weekly. The companies that will get hit first are the ones still operating on the assumption that AI regulation is a future problem.

If you’re running customer-facing AI, $1,000 per violation adds up fast. Take the ACRA to assess your chatbot exposure.

What to Do Now

Audit every customer-facing AI system you operate. Identify which ones retain user data across sessions, ask proactive questions, or maintain ongoing conversations. Map those systems against Oregon’s three-element definition and determine whether the “solely for customer service” exemption actually applies.

If your chatbot might fall within scope, start building the disclosure and safety protocols now. A 5-layer AI compliance stack can help structure the guardrails your chatbot needs. The law takes effect January 1, 2027. That sounds distant until you factor in the development cycles for modifying AI system behavior, legal review of new disclosure language, and QA testing across platforms.
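As a starting point for those protocols, here is a minimal sketch of two guardrails the law contemplates: a disclosure injected at session start, and a first-pass self-harm screen. Everything here is an assumption for illustration; the disclosure wording needs legal review, `send` stands in for whatever delivery hook your chat platform exposes, and a production self-harm detector should be a vetted classifier, not a keyword list:

```python
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human."
)  # assumption: placeholder wording, to be reviewed against the statute's text

def start_session(user_id: str, send) -> None:
    """Prepend the AI disclosure to every new chat session.

    `send` is a placeholder for your platform's message-delivery
    function; the actual hook point will vary by vendor.
    """
    send(user_id, AI_DISCLOSURE)

# Illustrative keyword screen only; real systems should route flagged
# messages to a trained classifier and an escalation workflow.
SELF_HARM_TERMS = {"suicide", "kill myself", "self-harm"}

def flags_self_harm(message: str) -> bool:
    text = message.lower()
    return any(term in text for term in SELF_HARM_TERMS)
```

The design point is that both guardrails sit at the conversation layer, in front of the model, so they can be tested and logged independently of whatever the underlying AI system generates.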

Review your vendor contracts. If you’re using a third-party chatbot platform, determine whether the vendor or your company qualifies as the “operator” under Oregon law. Liability follows the operator, and the statute’s definition is broad enough that both the platform provider and the deploying company could face claims.

Budget for compliance across multiple states. Oregon is not the only chatbot law taking effect in the next 12 months. If you build your compliance infrastructure for Oregon alone, you will rebuild it for Nebraska, Hawaii, and whatever comes next. Build once for the emerging national standard.

The AI companion use case that motivated SB 1546 is legitimate. Minors interacting with emotionally manipulative AI systems without safety guardrails is a real problem. But the law Oregon wrote to solve that problem reaches further than its authors probably intended, and the private right of action ensures that every ambiguity in the statute will be tested in court. Meanwhile, the FTC is already enforcing against AI companies case-by-case — so even where state laws don’t reach, federal regulators are filling gaps.

And if you think the unauthorized-practice angle is separate from chatbot regulation, think again. Nippon Life just sued OpenAI for practicing law without a license — the theory being that ChatGPT crossed the line from information to legal advice. Oregon’s SB 1546 and that lawsuit are two sides of the same coin: AI systems acting like professionals without the license or the guardrails.
