
Oregon Appeals Court: Lawyers Must Disclose When AI Causes the Error. The Cover-Up Is Now the Sanction.

Don Ho · 5 min read

Last updated: April 26, 2026

The Oregon Court of Appeals on April 23, 2026, sanctioned attorney Abby Shearer for filing an appellate brief that cited cases generative AI had invented, and it used the order to deliver a wider message every practicing lawyer in the country should read twice. Lawyers must disclose when AI causes errors. Failure to disclose is itself a sanctionable act. The court ordered Shearer to pay opposing counsel’s attorney fees, marking the first time an Oregon appellate court has imposed legal fees as the remedy for AI-fabricated authority rather than ordering a fine payable to the court.

Read that again. The penalty is not a fine paid to the court or a referral to the bar. It is a check written to the lawyer on the other side.

What happened in the case

Shearer represented the plaintiff in a defamation appeal. Her brief cited authority that did not exist. Opposing counsel flagged the fabrications. Shearer acknowledged to the court that generative AI had been used as a research and drafting tool, and that some material had not been independently verified before the brief was filed.

That admission is a window into how this is happening at scale. A lawyer uses an LLM to draft or research, accepts citations the model produces, and signs the brief without running each citation through Westlaw, Lexis, or even a Google search. The model produces output that reads like real case law because it was trained on real case law. The hallucinations sit inside otherwise competent prose. They look right because they pattern-match to right. Then they get filed.

The Oregon panel was not persuaded by the candor after the fact. The court treated Shearer’s failure to disclose AI’s role until pressed as a separate problem from the fabrication itself. The opinion frames disclosure as an affirmative duty, not a courtesy. If AI produced the error, the court wants to hear that on the record, on your own initiative, before opposing counsel does the work for you.

Why this is different from the trial-court hallucination cases

The damiencharlotin.com tracker now lists hundreds of AI hallucination sanctions across U.S. courts. Most of those orders come from federal magistrate judges or state trial courts. They typically end with a four- or five-figure fine, a CLE requirement, and a referral to the bar.

The Oregon decision is different in three ways.

First, it is appellate. State appeals courts speak with broader institutional authority than a single magistrate. When the Oregon Court of Appeals writes that lawyers should disclose AI errors, that language travels into local rules, ethics opinions, and other states’ citations.

Second, the remedy is fee-shifting, not a court fine. A fee-shifting sanction tells the bar that AI sloppiness is not a court-administration problem; it is a cost imposed on the other side that the offending lawyer has to pay. That reframing matters for malpractice insurance, for partnership compensation, and for how a managing partner explains an AI sanction to a board.

Third, the Oregon panel grafted a disclosure duty onto the existing candor-to-the-tribunal framework. Rule 3.3 already requires lawyers to correct false statements of fact or law made to the court. The Oregon ruling treats AI-caused errors as a category that triggers Rule 3.3, and treats silence as a separate breach.

What “disclose AI” actually means in practice

The opinion does not require a footnote on every brief that says “ChatGPT was used in research.” That is not the holding. The duty kicks in when there is an error and AI is in the chain of causation.

So the practical translation looks like this. If a citation in your brief turns out to be fake, and you know an LLM produced or surfaced it, you tell the court. You do not let opposing counsel discover the problem and characterize the source. You do not paper over the mistake by filing an amended brief without explanation. You do not say “clerical error” when the actual cause is “I trusted a language model and did not verify.”

The cover-up was the issue in the federal Brigandi case in Oregon last year, where Judge Clarke imposed $110,000 in sanctions and dismissed the underlying $12 million winery dispute with prejudice. Clarke called the conduct around the AI errors worse than the errors. The Oregon Court of Appeals has now adopted that same intuition at the appellate level. The error gets you fined. The non-disclosure gets you sanctioned harder.

What this means for your firm

If you bill hours, you have an AI exposure problem whether or not your firm has formally adopted any tool. Associates, paralegals, and clients are using LLMs already. The question is whether your firm has a policy that survives contact with the Oregon ruling.

A defensible policy has four pieces.

Verification of every citation, every time. Run a Westlaw or Lexis check on every cited case. Open the case. Confirm the holding and the quoted language. This is not a CLE talking point. It is the floor of competence in 2026.

A written AI-use protocol for litigation work product. State explicitly which tools are allowed for what. Prohibit citation-generating prompts unless the output is verified case-by-case. Document compliance.

A disclosure protocol for errors. When an AI-caused error is discovered (yours or your opponent’s), the playbook is correction with disclosure of cause, not a quiet amendment. Train associates that the cover-up is the sanction multiplier. Oregon has now said that on the record at two different court levels.

Insurance and engagement-letter language. Talk to your malpractice carrier about how AI-related sanctions are treated under your policy. Update engagement letters to disclose AI use to clients and to allocate responsibility for verification. This is not optional; it is risk management.

What to do now

Pull every appellate brief filed by your firm in the last 12 months. Run a citation check on a 10% sample. If you find a fake case, you have a Rule 3.3 problem regardless of whether anyone has noticed. The disclosure duty that Oregon articulated runs to errors that have already been filed, not just future ones.
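If you want to operationalize that sample, a short script can do the random draw and pull out citation-like strings for a human to verify. This is a minimal sketch under stated assumptions: the briefs/ directory of plain-text exports, the 10% sample rate from above, and the citation regex are all illustrative, and the regex is a rough heuristic rather than a Bluebook parser. The output is a checklist for manual Westlaw or Lexis verification, not a verification itself.

```python
import random
import re
from pathlib import Path

# Assumed layout: briefs exported as plain text into ./briefs/ (hypothetical path).
BRIEF_DIR = Path("briefs")
SAMPLE_RATE = 0.10  # the 10% sample described above

# Rough pattern for "volume reporter page" citations like "123 Or App 456",
# "598 P.3d 1012", or "410 U.S. 113". A simplified heuristic, not a full parser.
CITATION_RE = re.compile(r"\b\d{1,4}\s+(?:[A-Z][A-Za-z0-9.]*\s+){1,4}\d{1,4}\b")

briefs = sorted(BRIEF_DIR.glob("*.txt"))
if not briefs:
    raise SystemExit(f"No .txt briefs found in {BRIEF_DIR}/")

sample = random.sample(briefs, max(1, int(len(briefs) * SAMPLE_RATE)))

for brief in sample:
    text = brief.read_text(errors="ignore")
    citations = sorted(set(CITATION_RE.findall(text)))
    print(f"\n{brief.name}: {len(citations)} citation strings to verify")
    for cite in citations:
        # Every flagged citation still gets opened and read in Westlaw or Lexis.
        print(f"  [ ] {cite}")
```

The point of the script is triage, not judgment: it tells you which filings to pull and which strings to look up, and a lawyer confirms the case exists and says what the brief claims it says.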

Issue a one-page memo to litigators by next Friday with three rules: verify every citation in Westlaw or Lexis before filing, do not use general-purpose LLMs for citation research without verification, and report any AI-caused error in a filed brief immediately to the partner in charge.

If your firm uses an AI legal research platform (CoCounsel, Harvey, Lexis+ AI), pull the audit logs. Confirm which matters used the tool. Sample those filings for citation accuracy.
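A hedged sketch of that audit-log step, assuming your platform can export usage logs to CSV: the file name and column names below (matter_id, document_name) are hypothetical placeholders, not the actual schema of CoCounsel, Harvey, or Lexis+ AI, so map them to whatever your export provides.

```python
import csv
import random
from collections import defaultdict

# Hypothetical CSV export of AI-tool usage; column names are assumptions.
LOG_FILE = "ai_usage_export.csv"
SAMPLE_PER_MATTER = 2  # filings to spot-check per matter that used the tool

usage_by_matter = defaultdict(list)
with open(LOG_FILE, newline="") as f:
    for row in csv.DictReader(f):
        usage_by_matter[row["matter_id"]].append(row["document_name"])

print(f"{len(usage_by_matter)} matters show AI-tool usage")
for matter, documents in sorted(usage_by_matter.items()):
    picks = random.sample(documents, min(SAMPLE_PER_MATTER, len(documents)))
    print(f"Matter {matter}: spot-check {', '.join(picks)}")
```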

Oregon will not be the last appellate court to land here. Connecticut, California, and the Second Circuit each have hallucination cases pending at the appellate level. The disclosure-duty framework is going to spread. The firms that get ahead of it now will save themselves a fee-shifting order later.

The lawyer who admits the AI error in real time pays a price. The lawyer who tries to clean it up quietly pays a much larger one. That is the rule now, and Oregon just made it appellate law.
