A California Lawyer Just Got Hit With a $25K AI Sanction. The State Bar Is Watching.

· Don Ho · 7 min read

Last updated: February 19, 2026

In February 2026, a California federal court ordered $25,000 in sanctions over AI-related work product errors. At the same time, state bars across the country are shifting from educational guidance to active disciplinary proceedings against attorneys who use AI without meaningful human verification. The legal profession's tolerance period for AI mistakes is over.

Not a hallucinated citation case. Not a fabricated precedent situation. A sanction tied to work product errors downstream of AI use that the opposing party had to redo. $25,000 is not a career-ending number. But the trajectory is.

What’s Actually Happening in the Courts

The Mata v. Avianca case in 2023 was the inflection point most people remember: a lawyer filed a brief with six fictitious AI-generated case citations, got caught, and paid $5,000 in sanctions. That case put "AI hallucinations" in the headlines, and AI lawyer sanctions have hit record levels since.

What’s happened since is more systematic and, in some ways, more dangerous.

Courts aren’t just sanctioning hallucinated citations anymore. They’re sanctioning the broader pattern of AI-assisted work that bypasses attorney judgment. The $25K California sanction is in that category. Work product with errors. Opposing counsel spent time correcting what should have been verified before filing.

The number went up. The bar for triggering it went down.

At the same time, state bars have started moving from “educational guidance” to actual disciplinary proceedings. A Wisconsin DA’s AI-drafted filings got 74 criminal counts dismissed. A Walmart lawyer’s AI-generated brief was called a “perilous shortcut” by the judge. The shift happened quietly in late 2025, but as of early 2026, using public AI tools for client work without meaningful human verification is documented as a clear ethical violation in bar guidance across multiple states.

California’s Senate passed a bill this month specifically regulating lawyers’ use of AI — including an explicit bar on putting confidential client information into public generative AI tools.

The legal profession’s honeymoon period with AI is over.

The Liability Architecture Is Still Being Built

Here’s the part that makes me uncomfortable as a GC: the liability rules for AI-assisted legal work are still being written in real time, by courts that are making it up as they go.

Right now, three different liability frameworks are competing:

Framework 1: AI as tool, lawyer is fully liable. The tool doesn’t matter. If you sign off on work product, you own it. This is where most courts currently sit. It’s what produced the sanctions in Mata, in the California case, and in a dozen other cases you haven’t read about because they weren’t high-profile enough to get press coverage.

Framework 2: AI as process, proportional liability. Courts look at what verification steps the lawyer took, not just the output. If you used AI to research but verified every citation before filing, you’re in a different position than if you copied and pasted the AI’s output directly into the brief. A handful of courts are starting to look at the process, not just the result.

Framework 3: AI vendor liability, shared accountability. Judge Rakoff’s Heppner privilege ruling in the Southern District of New York earlier this year signaled that AI-generated documents may not be privileged when created with consumer AI tools — because the third-party vendor relationship breaks the privilege chain. That decision cuts both ways. If the privilege fails because the AI company saw your document, maybe the AI company has some accountability for the output too. Nobody has successfully litigated this theory yet, but it’s coming.

The confusion between these frameworks is the actual problem. Lawyers are making decisions about AI use inside a liability environment they can’t fully see.

What “Human-in-the-Loop” Actually Means

Every piece of AI policy guidance I've seen from state bars says some version of "meaningful human oversight." That phrase is doing a lot of work, and nobody is unpacking it.

In my experience deploying AI systems professionally, there are three versions of human-in-the-loop:

Version 1: Human on record. A lawyer signed the brief, therefore a human was “in the loop.” This is the version that’s getting people sanctioned. Signing something doesn’t mean reviewing it. Courts are catching up to this distinction fast.

Version 2: Human as checker. A lawyer reviews AI-generated work before it goes out. This sounds like the right answer. The problem is that it assumes the lawyer can catch the AI’s errors — which requires knowing what to look for. If the AI generates a plausible-sounding analysis in an area where the reviewing attorney isn’t deeply expert, the “checker” may not catch what’s wrong. This is the competence problem the bars haven’t fully addressed yet.

Version 3: Human as architect. The lawyer designs the AI workflow, sets the parameters, defines what verification steps happen before output leaves the system, and reviews the final product with specific knowledge of where the AI is likely to fail. This is what “meaningful oversight” actually requires. Almost nobody is doing this yet.

The liability exposure gap is between Version 1 and Version 3. Most lawyers currently operating somewhere in Version 1-2 territory believe they’re in Version 2-3. New Mexico judges are already flagging AI hallucinations as grounds for sanctions, and the standard is only tightening.
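
To make Version 3 concrete, here is a minimal sketch of what a verification-gated workflow could look like in code. This is an illustration under assumptions, not a template from any bar or court: the Gate and Draft structures, the gate names, and the sign-off flow are all hypothetical. The point is only that the verification steps are defined before the work starts and block release until a named human clears them.

```python
# Illustrative "Version 3" workflow: the lawyer defines the verification gates
# up front, and nothing leaves the system until every gate has a named sign-off.
# All class names, gate names, and reviewers here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Gate:
    name: str                     # e.g. "citations checked against the primary source"
    reviewer: str | None = None   # attorney who performed the check
    passed: bool = False

@dataclass
class Draft:
    matter: str
    text: str
    gates: list[Gate] = field(default_factory=list)

    def sign_off(self, gate_name: str, reviewer: str) -> None:
        # Record that a specific human cleared a specific verification step.
        for gate in self.gates:
            if gate.name == gate_name:
                gate.reviewer, gate.passed = reviewer, True
                return
        raise KeyError(f"No such gate: {gate_name}")

    def release(self) -> str:
        # The draft never leaves the system with open gates.
        unmet = [g.name for g in self.gates if not g.passed]
        if unmet:
            raise RuntimeError(f"Blocked: unverified steps -> {unmet}")
        return self.text

# Usage: the workflow designer decides what "meaningful oversight" means here.
draft = Draft(
    matter="Smith v. Acme",  # hypothetical matter
    text="[AI-assisted research memo]",
    gates=[
        Gate("every citation checked against the primary source"),
        Gate("quotations compared to the cited opinion"),
        Gate("analysis reviewed by an attorney competent in the subject area"),
    ],
)
draft.sign_off("every citation checked against the primary source", "reviewing attorney")
# draft.release() would raise here: two gates are still open, so nothing goes out the door.
```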

The Numbers That Should Worry Business Leaders

This isn’t only a lawyer problem. Any business using AI to produce client-facing work, regulatory filings, compliance documentation, or contract analysis has the same exposure architecture.

Consider: if your hallucination rate on production AI is 6%, and you’re processing 100 client deliverables a month, you have 6 errors going out the door monthly. Some of those errors are cosmetic. Some of them are material. A small percentage of the material ones will eventually surface in a dispute.

The question isn’t whether you’ll have an AI error. You will. The question is whether you built the system so that errors stay contained or whether you built it so that errors become liability events.

In 2024, the cost of catching an AI error before it left the building was low. In 2026, the cost of catching it after it left the building is escalating. The $25K sanction in California is one data point. The state bar disciplinary proceedings being initiated against attorneys for consumer AI use are another. The Colorado AI Act hitting enforcement this June is another.

The environment is moving. The businesses that built AI workflows on 2023 assumptions are going to find out in 2026 and 2027 that those assumptions are wrong.

Book a diagnostic to build your firm’s AI verification protocol.

A Practical Framework for 2026

Three things that change your liability position:

  1. Write your AI use policy down. This sounds obvious. Almost no firms or businesses have done it. A written policy that specifies what AI tools are approved, what they can be used for, what the verification requirements are, and who is accountable for outputs — that policy is your first line of defense when something goes wrong. It demonstrates process. Courts and bars care about process. (A minimal sketch of what a written-down policy can look like follows this list.)

  2. Distinguish between AI as research and AI as output. Using AI to surface information that you then verify and synthesize is different from using AI to generate the final document that goes to a client or a court. Your policy needs to specify which uses require what level of verification. The more client-facing or legally consequential the output, the more verification it requires.

  3. Never put client confidential information into a consumer AI tool. Full stop. This one isn’t gray. California codified it in statute. The bar guidance in most states is explicit. The Rakoff privilege ruling makes the legal theory clear. Consumer AI tools process your input. That processing may constitute disclosure. Disclosure may waive privilege. If you’re using ChatGPT, Claude on a consumer plan, or any similar tool for client work, you’re taking a privilege risk that most clients haven’t consented to.
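
Here is that sketch: item 1's written policy expressed as data, plus a single pre-submission check that also enforces items 2 and 3. Everything in it is hypothetical. The tool names, use types, and the self-declared confidentiality flag are placeholders showing the shape of the thing, not a template endorsed by any bar.

```python
# Hypothetical AI-use policy as data plus one enforcement check.
# Tool names, use types, and the confidentiality flag are illustrative only.

AI_POLICY = {
    # Item 1: approved tools and who owns the output
    "approved_tools": {"firm-licensed-drafting-tool", "internal-research-assistant"},
    "accountable_role": "signing attorney",
    # Item 2: verification scales with how consequential the output is
    "verification_by_use": {
        "research_only": "attorney verifies every cited authority against the primary source",
        "client_deliverable": "line-by-line attorney review plus citation check",
        "court_filing": "full verification; signing attorney attests in writing",
    },
    # Item 3: consumer tools never see confidential client material
    "tools_cleared_for_confidential": {"firm-licensed-drafting-tool"},
}

def check_request(tool: str, use_type: str, contains_client_confidential: bool) -> str:
    """Run before anything is sent to an AI tool; returns the verification the output will need."""
    if tool not in AI_POLICY["approved_tools"]:
        raise PermissionError(f"{tool} is not an approved tool under the written policy")
    if contains_client_confidential and tool not in AI_POLICY["tools_cleared_for_confidential"]:
        raise PermissionError("Confidential client material may not be sent to this tool")
    return AI_POLICY["verification_by_use"][use_type]

# Usage: the returned string tells the attorney what review the output still requires.
print(check_request("firm-licensed-drafting-tool", "court_filing", contains_client_confidential=False))
```

None of this replaces attorney judgment. It makes the policy checkable and the accountability explicit, which is exactly the process courts and bars are asking to see.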

What Comes Next

Sanctions are getting larger. State bars are moving from guidance to enforcement. California, the state most likely to establish precedent, just passed legislation with explicit AI rules for attorneys.

The window where “we’re still figuring this out” works as a defense is closing. Courts sanctioning attorneys for AI errors aren’t treating it as an emerging area that requires special patience anymore. They’re treating it the way they treat any other professional competence question: you’re expected to know what you’re doing with the tools you use.

If you’re an attorney still using AI without a documented verification process, you’re not ahead of the curve. You’re behind it.

The $25K case won’t be the last one you read about this month.


The sanctions are getting bigger and the bar for triggering them is getting lower. Kaizen AI Lab builds AI verification protocols for firms that refuse to be the next cautionary tale. Talk to us.
