Judge Rakoff Says Your Claude Chats Aren't Privileged. Your Clients Are About to Learn the Hard Way.

Don Ho · 5 min read

Last updated: April 20, 2026

Bradley Heppner was indicted on securities and wire fraud charges, hired counsel, received a grand jury subpoena, and then did what millions of people now do with a problem that keeps them up at night. He opened Claude. Over several sessions he generated about 31 documents analyzing the government’s likely theories, weighing defenses, and drafting legal arguments. He listed them on his privilege log. On February 10, 2026, Judge Jed Rakoff in the Southern District of New York ordered him to hand every one of them to the prosecution; the written memorandum followed on February 17. This week the Reuters story went viral in legal circles, and more than a dozen major law firms have issued client alerts telling people to stop confiding in public chatbots. Read the order if you haven’t. United States v. Heppner is the first federal ruling that squarely applies privilege doctrine to an AI chat, and the holding is clean: public consumer AI is not a privileged channel.

The three-part test, applied without mercy

Rakoff used the traditional federal common-law privilege test. A communication is privileged only when it is (1) between a client and an attorney, (2) intended to be and kept confidential, and (3) for the purpose of obtaining or providing legal advice. Heppner’s Claude chats failed all three prongs.

Claude is not an attorney. It holds no license, owes no fiduciary duty, and cannot enter an attorney-client relationship. Prong one was never close.

The version Heppner used was the public consumer tier of Claude, governed by Anthropic’s standard consumer privacy policy. That policy permitted the company to retain, review, and in some circumstances use the inputs. Heppner could not have reasonably expected confidentiality in the sense privilege requires. Prong two failed.

Heppner used Claude on his own. His lawyers did not direct him to the tool, supervise his prompts, or review the outputs before he generated them. The communications were not made “for the purpose of obtaining or providing legal advice” from counsel. Prong three failed.

Work product got the same treatment. The doctrine protects materials prepared by counsel or at counsel’s direction in anticipation of litigation. Heppner generated these reports himself, outside his attorneys’ knowledge. Nothing to protect.

Why this ruling matters more than the last five AI-in-court stories

We have been covering AI hallucinations and sanction orders for two years. Those cases punish lawyers. This one punishes clients, and it does so by putting their private deliberations in the prosecutor’s hands. Different mechanism, much bigger exposure.

Think about what sophisticated criminal, civil, regulatory, and employment defendants actually do with these tools. They paste in the subpoena and ask what it means. They describe the deal and ask where the weak points are. They draft apology emails, term sheet responses, and internal memos. They rehearse testimony. Every one of those conversations is now a potential exhibit. On the same day as Rakoff’s order, Magistrate Judge Anthony Patti in the Eastern District of Michigan declined to compel AI-chat production from a pro se plaintiff in a different posture, so there will be fact-specific variation. But the Heppner analysis will govern whenever a represented party uses a public tool without counsel’s direction.

The companion case worth flagging is Warner v. Gilbarco, Inc., 2026 WL 373043 (E.D. Mich.), also decided February 10. There the court preserved work-product protection for AI-assisted drafts filed by a pro se litigant on the theory that AI acted as a tool in the litigant’s hands. That reasoning may save work product even when privilege is lost. It does not save privilege itself, and it is not a federal circuit holding.

The enterprise-tier illusion

Every vendor will tell you their business or enterprise tier solves this. Enterprise contracts typically include no-training commitments, data processing agreements, and retention controls. Those terms improve the confidentiality posture. They do not create attorney-client privilege, because the AI is still not an attorney. No court has held that enterprise AI chats are privileged. Until one does, the best case is that enterprise AI plus explicit counsel direction plus work product doctrine protects drafting work from discovery. That is a narrower shield than most buyers assume when the sales rep shows up with a SOC 2 report.

The other problem is metadata. Even where the content is protected, a privilege log still has to describe it. If your clients are running prompts against outside tools, you do not necessarily know what exists, who saw it, how long it was retained, or where the audit log lives. Opposing counsel now knows to ask.

The discovery requests coming in 2026

Based on the Heppner briefing, expect the following to show up in your next round of discovery. Requests for production of all inputs and outputs from any generative AI tool the party used to analyze, respond to, or prepare for the matter. Interrogatories identifying each AI platform used, the account holder, the retention period, and whether counsel directed the use. Deposition questions about which prompts were run and what the outputs said. Subpoenas to the AI vendor for account records where preservation is at issue. Corporate defendants will see document requests reaching into employee Claude, ChatGPT, Gemini, and Copilot accounts, personal and work, on the theory that relevant communications may live there.

If your litigation hold notices do not already cover AI chat histories, they are out of date. Update the template this week.

What to do now

Write a one-page client advisory. Two paragraphs. First: do not use public AI tools (ChatGPT Free, Claude.ai consumer, Gemini, Grok) to discuss pending or anticipated legal matters. Second: route all AI-assisted legal analysis through counsel, using enterprise tools your firm has vetted, with prompts directed by a lawyer. Send it to every active client. Put it in your next engagement letter.

Audit your firm’s own AI stack. Confirm every tool in production has contractual no-training and no-retention terms, that audit logs exist, that access is role-based, and that the tool is not connected to consumer accounts. If a paralegal is running matter facts through the free tier on a personal laptop, fix it.

Update your litigation hold template to expressly include generative AI chat histories, prompts, outputs, and account identifiers. Add AI platforms to your standard discovery plan. Add document retention language to the information governance policy.

For transactional work, add a carve-out to NDAs and confidentiality agreements that expressly addresses AI use. Standard clauses requiring “reasonable” confidentiality safeguards are now ambiguous as to whether public chatbot use violates them. Specify.

For criminal and regulatory defense practices, add an AI-use interview to every new matter intake. Ask the client what they have typed into which tools, when, and on which accounts. Preserve what exists. Assume the government will ask.

Rakoff’s ruling is a trial court decision, not Second Circuit precedent. It will be persuasive to every magistrate and district judge facing the same question next month. The cases are coming fast enough that by the time a circuit weighs in, the norm will already be set. Act as if the rule is final, because operationally it is.
