
AI in Legal: Ethics, Risk, and Disclosure in 2026

By Don Ho, Founder & CEO, Kaizen AI Lab · 17 min read

Published: April 2026 · Last updated: April 2026

Attorneys using AI in 2026 face four overlapping ethical obligations under the Model Rules of Professional Conduct: competence (Rule 1.1), confidentiality (Rule 1.6), supervision (Rules 5.1/5.3), and candor to the tribunal (Rule 3.3). The enforcement backdrop: over 1,200 global sanctions cases, a federal ruling that consumer AI tools waive attorney-client privilege, and penalties that have escalated to $25,000 per violation.

I practiced law for nearly two decades. General Counsel. Litigation. The full run. Now I build AI systems for companies across every industry. I sit at the intersection of legal practice and AI deployment in a way that gives me a specific, uncomfortable vantage point: I understand both what these tools can do and what the profession requires of the people who use them.

Here is what that vantage point tells me in April 2026: the legal profession’s reckoning with AI is no longer coming. It arrived. Courts are issuing sanctions. Bar associations are rewriting ethics guidance. Judges are demanding disclosure. A federal court ruled that documents created through consumer AI tools aren’t protected by attorney-client privilege. An insurance company sued OpenAI for practicing law without a license. The stakes went from theoretical to career-ending in about eighteen months.

This guide is what I wish every attorney, GC, and legal ops professional had in front of them right now. The ethical obligations. The disclosure rules. The enforcement cases. The places where AI works and the places where it will get you sanctioned. And a practical framework for building an AI policy that protects your firm instead of exposing it.


The Ethical Framework: Four Rules That Apply to AI

Every state has adopted some version of the ABA Model Rules of Professional Conduct. Four of those rules now carry direct, specific implications for attorneys using AI. None of them mention AI by name. All of them apply anyway.

Competence (Rule 1.1)

Rule 1.1 requires lawyers to provide competent representation, defined as “the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation.” Comment 8 to Rule 1.1 was amended in 2012 to include a duty to stay current with “the benefits and risks associated with relevant technology.”

That comment was written with e-discovery and cloud storage in mind. In 2026, it applies directly to AI. If you are using AI for legal research, drafting, or analysis, you are obligated to understand how the tool works, where it fails, and what its limitations are. “I didn’t know the AI would hallucinate citations” is not a defense. It is a competence violation.

The flip side matters too. A growing number of legal ethics scholars argue that the competence obligation will eventually require lawyers to use AI tools where they demonstrably improve the quality or efficiency of representation. Right now, that argument is academic. In three years, I don’t think it will be.

Confidentiality (Rule 1.6)

Rule 1.6 prohibits lawyers from revealing information relating to the representation of a client unless the client gives informed consent. “Revealing” includes transmitting client data to third-party platforms.

When you type a client’s case facts into ChatGPT, Claude, or Gemini, you are transmitting confidential information to a third-party company whose terms of service typically permit data collection, processing, and in some cases government disclosure. That is a Rule 1.6 problem regardless of whether the AI company actually reads your data.

The Heppner ruling made this concrete. Judge Rakoff in the Southern District of New York held that documents generated through Claude weren’t protected by attorney-client privilege because Anthropic’s privacy policy allowed data collection and government disclosure. The voluntary transmission of confidential information to a platform operating under those terms constituted waiver.

California’s Senate passed legislation in early 2026 explicitly barring lawyers from putting confidential client information into public generative AI tools. Multiple state bars have issued formal guidance to the same effect. This is not ambiguous.

Supervision (Rules 5.1 and 5.3)

Rule 5.1 requires partners and supervising lawyers to ensure the firm has measures giving “reasonable assurance” that all lawyers conform with the Rules. Rule 5.3 extends that obligation to nonlawyer assistants.

AI falls under Rule 5.3. The tool is a nonlawyer assistant. The supervising attorney is responsible for the output, exactly as they would be responsible for a paralegal’s work product. “The AI did it” is not a defense for the same reason “the paralegal did it” is not a defense. You supervise the work. You own the result.

The Walmart case illustrated the failure mode. An Indiana lawyer uploaded discovery responses into an AI tool, copied the output, and submitted it to the court without review. Judge Baker called it a “perilous shortcut around his responsibilities as a trained legal professional.” No hallucination was involved. The problem was the complete absence of professional judgment. The lawyer outsourced the analytical function of his job and submitted raw AI output as legal work product.

Supervision means designing a workflow where AI output gets reviewed by a competent attorney before it goes anywhere. It means knowing where the AI is likely to fail and checking those failure points specifically. It means treating AI output the way you would treat a first draft from a summer associate who is smart but occasionally makes things up.

Candor to the Tribunal (Rule 3.3)

Rule 3.3 prohibits lawyers from making false statements of fact or law to a tribunal, and from failing to correct false statements previously made. If AI generates a hallucinated citation and you submit it to a court, you have violated Rule 3.3. If you discover after filing that an AI-generated citation is fictitious, you have an obligation to correct the record.

The enforcement of this rule has been aggressive. Courts have sanctioned attorneys for AI-hallucinated citations in cases across the country. The Wisconsin DA who used AI to draft filings containing fabricated citations saw all 74 criminal counts dismissed. The Mata v. Avianca attorney paid $5,000 in sanctions for six fictitious case citations. The trajectory is consistent: courts treat AI-generated falsehoods exactly like attorney-generated falsehoods, because the signature on the filing is yours.


The Privilege Problem: Heppner and What It Means for Every Firm

The Heppner ruling deserves its own section because the implications extend far beyond one case in the SDNY.

Judge Rakoff’s analysis rested on three elements that apply to virtually every consumer AI platform on the market:

Third-party operation. The AI platform is operated by someone other than the attorney or client. Anthropic, OpenAI, Google. None of them are your agent. They are service providers with their own data practices and legal obligations.

Terms permit data access. Most AI platforms include provisions allowing the company to access, store, and process user inputs. Some include cooperation with government requests. When the terms say they can look at your data, you have introduced a third party into what was supposed to be confidential communication.

Voluntary disclosure. Nobody forced the attorney to use Claude. Voluntary disclosure to a third party is waiver. Full stop.

These three conditions exist for essentially every consumer AI tool. Heppner didn’t create new law. It applied existing privilege doctrine to new technology. That is precisely what makes it so difficult to challenge on appeal.

The broader implications extend to any organization using consumer AI for work that touches confidential, privileged, or regulated information. Law firms are the obvious case. But in-house legal departments, compliance teams, and any business processing client data through consumer AI tools face the same structural exposure.

Here is my position, and I’ll put it plainly: the only architecture that is truly privilege-safe is one where privileged data never leaves infrastructure you control. Self-hosted models. Zero-retention API agreements with explicit privilege protections in the data processing addendum. Anything else is a calculated risk. I reviewed the DPAs for three of my own deployments after Heppner. Two of them failed. If you haven’t done the same review, do it this week.


The Sanctions Landscape: Real Cases, Escalating Penalties

The sanctions numbers tell a clear story. In 2023, the typical penalty for AI-related filing errors was $5,000 and a stern lecture. By early 2026, a California federal court ordered $25,000 in sanctions over AI-related work product errors. The global AI sanctions tracker now counts over 1,200 cases worldwide. The numbers are going up. The threshold for triggering them is going down.

Three patterns have emerged.

The hallucination cases. These are the ones that make headlines. Lawyers submitting briefs with fabricated citations generated by AI. Mata v. Avianca was the 2023 inflection point. Since then, the pattern has repeated across jurisdictions: Texas, Colorado, Virginia, New Mexico, California. The Wisconsin DA case added a new dimension because the hallucinated citations came from a prosecutor, and the result was 74 dismissed criminal counts. The San Diego dog custody case added another: the trial court judge relied on the hallucinated citations without verifying them, and the appellate court had to clean up the mess.

The abdication cases. These are newer and arguably more dangerous. The Walmart case is the leading example. No hallucination was involved. The attorney simply skipped the professional judgment step entirely. Courts are recognizing that the problem with AI in legal practice is not limited to fabricated citations. The deeper problem is lawyers who stop practicing law and start forwarding AI output. The question is no longer “did the AI make something up?” The question is “did you actually practice law, or did you let the AI do it for you?”

The escalation pattern. Sanctions have moved from educational ($5,000 with a warning) to punitive ($25,000 in actual fee sanctions). State bars have moved from publishing guidance to initiating disciplinary proceedings. Multiple jurisdictions now treat unverified AI-assisted work product as a clear ethical violation. The trajectory points toward malpractice liability as the next frontier: clients suing lawyers for AI-related errors the same way they sue for any other professional negligence.


Disclosure Requirements: The Patchwork

As of April 2026, the disclosure landscape for AI in legal filings is a patchwork. The patchwork itself is the problem.

Federal courts. At least 25 federal district courts have adopted standing orders or local rules addressing AI use. The approaches vary. Some require mandatory disclosure of any AI assistance in filings. Others require certification that all citations have been independently verified. The Fifth Circuit adopted the most detailed framework so far: attorneys must identify AI tools used, certify human review of all output, and disclose any AI-generated language that appears verbatim in the filing.

State courts. State-level requirements vary even more widely. California, New York, and Texas have active rules or pending legislation. Many states have issued bar guidance without formal rule changes. The result is that your disclosure obligation depends entirely on which courthouse you are filing in.

State bar guidance. The ABA issued Formal Opinion 512 in July 2024, confirming that existing Model Rules apply to AI use without requiring new rules. Multiple state bars have followed with their own guidance. The Florida Bar, California State Bar, and New York City Bar Association have published detailed opinions. The core message is consistent: AI use is permitted, but the ethical obligations of competence, confidentiality, supervision, and candor apply in full.

Mandatory vs. voluntary. The split between courts that mandate disclosure and courts that don’t creates a strategic problem. If you’re filing in a court that requires AI disclosure, you disclose. If you’re filing in a court that doesn’t, the question becomes whether voluntary disclosure is strategically wise. My recommendation: disclose voluntarily. The risk of being caught concealing AI use is higher than the risk of disclosing it. If a judge discovers after the fact that you used AI and didn’t disclose it, you’ve added a credibility problem on top of whatever substantive issue brought you to court.

International divergence. The landscape outside the US is moving even faster in some jurisdictions. India’s Gujarat High Court banned AI in judicial decision-making entirely. EU member states are grappling with disclosure requirements under the EU AI Act’s transparency provisions. Canadian courts have adopted their own standing orders. For firms with cross-border practices, tracking disclosure requirements is now a compliance function in itself.


The Hallucination Problem: Why Verification Fails

Everyone knows AI hallucinates. The standard response is “just verify the output.” That response sounds reasonable and is, in practice, inadequate.

AI-fabricated legal citations are structurally invisible without affirmative verification against a primary legal database. The case names sound real. The citation formats are correct. The holdings are written in proper legal language. Everything passes the eye test. A tired attorney reviewing a brief at 10pm will read an AI-generated citation and think “that sounds right” because the model is specifically optimized to produce text that sounds right. Confidence and accuracy are completely disconnected in large language models.

The New Mexico cases illustrate this at scale. Judges in multiple New Mexico courts flagged AI-hallucinated citations in filed briefs. The attorneys who submitted them were not incompetent. They were competent practitioners who trusted a tool that produced plausible-looking output and did not verify it against a primary source.

The dog custody case in San Diego took it further. The hallucinated citations fooled not just the submitting attorney but the trial court judge, who relied on them in her ruling. The appellate court had to issue a footnote telling judges to verify citations in proposed orders submitted by counsel. When the hallucination problem reaches the bench, the verification framework has failed at every level.

What “Meaningful Oversight” Actually Requires

From my experience deploying AI systems across multiple industries, there are three versions of human-in-the-loop:

Human on record. A lawyer signed the brief, so a human was “in the loop.” This is the version that gets people sanctioned. Signing something does not mean reviewing it. Courts are catching up to this distinction fast.

Human as checker. A lawyer reviews AI-generated work before it goes out. This sounds right but assumes the reviewing attorney can catch the AI’s errors. If the AI generates a plausible-sounding analysis in an area where the reviewer is not deeply expert, the checker may not catch what is wrong. This is the competence gap that bar associations have not fully addressed.

Human as architect. The lawyer designs the AI workflow, sets parameters, defines verification steps, and reviews the final product with specific knowledge of where the AI is likely to fail. This is what meaningful oversight actually requires. It is also the version that almost nobody is doing.

The gap between version one and version three is where the liability lives.
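
What does version three look like in practice? Here is a minimal sketch in Python, under stated assumptions: verify_citation is a stub standing in for a lookup against a primary legal database, the citation pattern is deliberately crude, and none of these names are a real API. The point is that the workflow names its known failure points and fails closed.

```python
import re

def extract_citations(draft: str) -> list[str]:
    # Crude reporter-citation pattern ("410 U.S. 113", "123 F.3d 456").
    # A real workflow would use a proper citator; the gate logic is the point.
    return re.findall(r"\d+\s+[A-Za-z.\d]+\s+\d+", draft)

def verify_citation(citation: str) -> bool:
    # Stub for a lookup against Westlaw, LexisNexis, or the court's own
    # database. Fail closed: anything unverified is treated as fabricated.
    return False

def architect_review(draft: str) -> list[str]:
    """Return the blockers an attorney must clear before the draft moves."""
    blockers = [f"unverified citation: {c}"
                for c in extract_citations(draft)
                if not verify_citation(c)]
    # Citation checks can be automated. Reasoning, jurisdiction fit, and
    # strategy cannot; those remain with the reviewing attorney.
    return blockers
```

The design choice that matters is the fail-closed default: an unverified citation blocks the draft instead of passing through on the strength of sounding right.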


Where AI Delivers Real Value

I am not an AI skeptic. I build these systems for a living. AI provides real, measurable value in specific areas of legal practice. The key is knowing which areas.

Document review. This is the strongest current use case. AI can process thousands of documents for relevance, privilege, and issue coding faster and more consistently than human review teams. The error rates are comparable to or better than those of human reviewers, and the cost savings are substantial. Contract review for M&A due diligence, litigation document review, and regulatory response are all areas where AI has proven its value.

Research acceleration. AI is excellent at generating starting points for legal research. It can identify relevant areas of law, suggest search terms, and produce preliminary analysis of legal questions. The critical word is “starting points.” AI research output must be verified against primary sources before it informs any legal decision. Used correctly (as a research accelerator, not a research replacement), it reduces the time from question to answer significantly.

Contract analysis. AI tools can review contracts for standard terms, flag deviations from templates, identify risk provisions, and compare terms across multiple agreements. Vertical legal tech companies are attracting significant investment in this space. Patent analysis, NDA triage, and compliance review are areas where pattern-matching AI outperforms manual review.

Drafting assistance. AI can produce first drafts of routine legal documents: engagement letters, standard motions, discovery requests, corporate resolutions. The value is in eliminating blank-page time, not in producing final work product. Every AI-generated draft requires attorney review and revision before it leaves the firm.

Knowledge management. Firms generate enormous volumes of internal work product: memos, briefs, opinions, contracts. AI can make that institutional knowledge searchable and accessible in ways that traditional document management systems cannot.


Where AI Will Get You Sanctioned

The use cases above share a common characteristic: AI is doing pattern matching, information retrieval, or first-draft generation. The attorney retains decision-making authority and applies professional judgment before the output reaches anyone.

The danger zones are where attorneys cede that authority.

Case strategy. AI cannot evaluate the strategic implications of litigation decisions. It does not understand the opposing counsel’s tendencies, the judge’s preferences, the client’s risk tolerance, or the political dynamics of a regulatory proceeding. Lawyers who use AI to generate strategic recommendations and then follow those recommendations without independent analysis are practicing ventriloquism, not law.

Privileged communications. Anything involving client confidences belongs in a privileged environment. Consumer AI tools are not privileged environments. Enterprise AI tools with standard terms of service are not privileged environments either. If the data leaves infrastructure you control, the privilege analysis gets complicated. After Heppner, “complicated” is a charitable description.

Direct client advice. AI should never be the source of advice delivered to a client. Full stop. The attorney reads the AI’s analysis, applies professional judgment, and provides advice in their own voice based on their own assessment. The moment a lawyer copies an AI-generated analysis and sends it to a client as legal advice, they have created both a malpractice exposure and a potential UPL issue (because the AI, not the lawyer, produced the advice the client received).

Filing without review. The cases discussed throughout this article all share one failure mode: AI output that reached a court or opposing counsel without meaningful attorney review. Whether the issue is hallucinated citations, abdicated analysis, or missing professional judgment, the root cause is the same. AI produced something. A lawyer submitted it. The space between those two events is where the practice of law is supposed to happen.

Platform dependency. There is a risk that has nothing to do with ethics rules but is equally dangerous. Google shut down a lawyer’s entire digital life after he uploaded criminal defense documents to NotebookLM. His Gmail, Google Voice number, photos, and contacts were all terminated by an automated system. He had no way to contact clients for two days. If your practice depends on a consumer platform, your practice can disappear without warning.


The UPL Frontier: When the AI Itself Is the Lawyer

The unauthorized practice of law question moved from hypothetical to active litigation in March 2026 when Nippon Life Insurance Company of America sued OpenAI in the Northern District of Illinois. The allegation: ChatGPT told a policyholder that her attorney’s advice was wrong and guided her through legal actions that harmed Nippon Life.

The case matters because of the legal theory. Stanford’s CodeX Lab identified what it calls the “architectural negligence” argument: OpenAI designed ChatGPT to produce authoritative-sounding legal guidance, knew the model hallucinates, knew users would rely on that guidance for consequential decisions, and built no meaningful architecture to prevent the model from crossing the line between legal information and legal advice.

That line (between legal information and legal advice) is where the UPL doctrine lives. Legal information is “here is what the law says.” Legal advice is “here is what you should do.” Every state bars non-lawyers from providing legal advice. If a chatbot tells a user that their attorney is wrong and recommends specific legal actions, is the chatbot (or its operator) practicing law?

The California court that banned ChatGPT use by a stalking defendant added another dimension: courts are starting to treat AI access itself as something that can be regulated.

If the Nippon Life theory succeeds, the implications extend well beyond law. Every licensed profession where AI models give guidance (medicine, finance, tax preparation) faces the same structural question. And Anthropic is already putting legal AI directly inside Microsoft Word, which means the line between “AI tool” and “AI practicing law” is getting thinner every month.

Meanwhile, the legal tech market itself is being disrupted. Companies that built thin wrappers around foundation model APIs are vulnerable to the model providers building the same functionality natively. The legal tech investment thesis is shifting from “AI for lawyers” to “AI that replaces legal tech.” The firms navigating this need to understand both the practice implications and the market dynamics.


Building an AI Policy for Your Firm: A Practical Framework

If your firm does not have a written AI policy, you are operating on luck. Here is a framework based on what I have deployed for clients and in my own practice.

1. Inventory Your AI Usage

You cannot govern what you cannot see. Survey every attorney and staff member about which AI tools they use, for what purposes, and how frequently. Include consumer tools (ChatGPT, Claude, Gemini), legal-specific tools (Harvey, CoCounsel, Vincent AI), and embedded AI features in existing platforms (Microsoft Copilot, Google Workspace AI).

The results will surprise you. In every assessment I have conducted, actual AI usage exceeds reported AI usage by 40-60%. Partners who claim they “don’t use AI” are dictating to Siri, using AI-enhanced search in Westlaw, and sending emails with AI-suggested completions.
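
If it helps to make the survey concrete, here is a minimal sketch of what one inventory record might look like. Every field name is illustrative, not a standard; adapt the schema to whatever your firm actually needs to track.

```python
from dataclasses import dataclass

@dataclass
class AIToolUsage:
    """One row in the firm's AI usage inventory (illustrative fields)."""
    user: str                  # attorney or staff member
    tool: str                  # "ChatGPT (free tier)", "CoCounsel", "Copilot", ...
    purpose: str               # "legal research", "email drafting", ...
    touches_client_data: bool  # drives the risk classification in step 2
    frequency: str             # "daily", "weekly", "rare"

# The embedded features are the ones surveys miss: AI-enhanced search,
# suggested completions, dictation.
inventory = [
    AIToolUsage("associate-1", "ChatGPT (free tier)", "legal research", False, "daily"),
    AIToolUsage("partner-3", "Microsoft Copilot", "email drafting", True, "daily"),
]
```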

2. Classify by Risk

Not all AI use carries the same risk. Build a tiered classification:

Tier 1: Low risk. Internal administrative use. Calendar management. Email drafting for non-client communications. Document formatting. These uses do not implicate confidentiality or competence obligations.

Tier 2: Moderate risk. Legal research (with mandatory verification). First-draft generation for routine documents. Contract review against templates. These uses require attorney review before any output leaves the firm.

Tier 3: High risk. Any use involving client confidential information. Any use producing work product that will be filed with a court. Any use in privileged communications. These uses require approved tools with appropriate data processing agreements, mandatory verification protocols, and documented review chains.
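
Continuing the inventory sketch from step 1, tier assignment can be expressed as a simple decision rule. The purpose categories below compress the tiers described above and are illustrative; a real policy would enumerate purposes far more carefully.

```python
def classify_tier(usage: AIToolUsage) -> int:
    """Map an inventory record onto the three-tier scheme (simplified)."""
    if usage.touches_client_data or usage.purpose in {
        "court filing", "privileged communication"
    }:
        return 3  # approved tools, DPA, documented review chain required
    if usage.purpose in {"legal research", "first draft", "contract review"}:
        return 2  # attorney review required before output leaves the firm
    return 1      # internal administrative use only
```

With the sample inventory above, classify_tier(inventory[1]) returns 3: anything touching client data lands in the top tier, no matter how routine the task feels.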

3. Establish Approved Tools

Specify which AI tools are approved for each risk tier. Consumer tools (free-tier ChatGPT, Claude, Gemini) should be restricted to Tier 1 use only. Tier 2 and 3 use should be limited to enterprise tools with appropriate data processing addendums, zero-retention commitments, and audit capabilities.

For firms handling highly sensitive matters, consider self-hosted models for Tier 3 work. The cost has dropped significantly, and the privilege protection is worth the investment.

4. Mandate Verification Protocols

Every AI-generated citation must be verified against a primary legal database (Westlaw, LexisNexis, or the court’s own database) before submission. Every AI-generated legal analysis must be reviewed by a competent attorney in the relevant practice area before it informs any client advice or court filing.

Document the verification. “I reviewed it” is not sufficient. Create a checklist that specifically requires: confirmation of citation accuracy, review of legal reasoning against known law, assessment of whether the AI’s analysis applies to the specific facts and jurisdiction, and sign-off by the reviewing attorney.
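
Here is one way to make "document the verification" concrete: a sign-off record that refuses to clear work product until every item on the checklist is affirmatively on record. The structure mirrors the checklist above; the field names are mine, not a bar-approved form.

```python
from dataclasses import dataclass

@dataclass
class VerificationRecord:
    matter_id: str
    citations_confirmed: bool = False     # against a primary legal database
    reasoning_reviewed: bool = False      # against known law
    facts_and_jurisdiction: bool = False  # does the analysis actually apply?
    reviewing_attorney: str = ""          # sign-off

def ready_to_file(record: VerificationRecord) -> tuple[bool, list[str]]:
    """Fail closed: list every unmet item instead of returning a bare no."""
    unmet = [name for name, done in [
        ("citation accuracy", record.citations_confirmed),
        ("legal reasoning", record.reasoning_reviewed),
        ("facts and jurisdiction", record.facts_and_jurisdiction),
        ("attorney sign-off", bool(record.reviewing_attorney)),
    ] if not done]
    return (not unmet, unmet)
```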

5. Build Disclosure Protocols

Know the disclosure requirements for every court where your firm files. Maintain a current list of local rules and standing orders addressing AI use. Default to disclosure when the requirement is ambiguous.
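
A firm-wide lookup keyed by court, with a default that errs toward disclosure, is the simplest way to operationalize this. The entries below are placeholders except for the Fifth Circuit line, which paraphrases the framework described earlier; do not treat this table as a rules survey.

```python
# Placeholder entries; maintain the real table from current local rules
# and standing orders, and refresh it on the quarterly cycle in step 7.
DISCLOSURE_RULES = {
    "5th Cir.": "identify AI tools used, certify human review, "
                "disclose verbatim AI-generated language",
    "Example D. Ct.": "certify independent verification of all citations",
}

def disclosure_obligation(court: str) -> str:
    # Default to disclosure when the requirement is ambiguous or unlisted.
    return DISCLOSURE_RULES.get(court, "disclose AI assistance voluntarily")
```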

For a deeper framework on building this kind of governance architecture, the 5-Layer AI Compliance Stack covers the full structure: Inventory, Classification, Guardrails, Documentation, and Testing. Firms that need to integrate their AI policy into a broader organizational governance framework or compliance program should read those guides for the complete architecture.

6. Train and Enforce

A policy that nobody reads is decoration. Train every attorney and staff member on the policy. Test comprehension. Make AI policy compliance part of annual reviews. When someone violates the policy, treat it the way you would treat any other ethical violation: seriously.

7. Review Quarterly

The regulatory landscape, the technology, and the case law are all moving fast. A policy written in January 2026 is already partially outdated by April. Build a quarterly review cycle that incorporates new case law, new regulatory guidance, and new AI capabilities.


The Bottom Line

AI in legal practice is not a technology question. It is an ethics question, a risk management question, and a professional responsibility question. The tools are powerful. The efficiency gains are real. The profession cannot afford to ignore them, and it cannot afford to use them carelessly.

The attorneys who will thrive in this environment are not the ones who adopt AI fastest. They are the ones who adopt it most carefully: with clear policies, verified workflows, appropriate architecture, and an unflinching understanding of where the ethical lines are.

The attorneys who will face sanctions, malpractice claims, and disciplinary proceedings are the ones who treat AI as a shortcut instead of a tool. Who submit output without review. Who put client confidences into consumer platforms. Who assume the AI is right because it sounds right.

Nineteen years of practicing law taught me one thing above all else: the profession tolerates mistakes. It does not tolerate laziness. AI amplifies both competence and negligence. The question for every firm in 2026 is simple. Which one are you amplifying?


Don Ho is Founder & CEO of Kaizen AI Lab, where he builds AI systems and governance frameworks for organizations across every industry. He practiced law for nearly two decades, serving as General Counsel, before moving full-time into AI consulting. Take the ACRA to assess your organization’s AI readiness.
