Sullivan & Cromwell Just Joined the AI Hallucination Hall of Shame
Last updated: April 23, 2026
On April 18, 2026, Andrew Dietderich, co-chair of Sullivan & Cromwell’s restructuring practice, sent a letter to U.S. Bankruptcy Judge Martin Glenn apologizing for a brief his team had filed ten days earlier. The brief was riddled with AI hallucinations. Fake case citations. Misquoted authorities. Wrong case numbers. Inaccurate article titles. Whole sentences that had to be redlined and rewritten before the next hearing on the Prince Global Holdings liquidation (Bankr. S.D.N.Y., 1:26-bk-10769).
The firm that advises OpenAI on the safe and ethical deployment of artificial intelligence got caught using AI badly in a federal court filing. The dark comedy writes itself.
The hallucination database now has 330+ entries
Damien Charlotin’s hallucination tracker hit 330 documented cases as of April 21. Sullivan & Cromwell is now on it, sitting next to solo practitioners in Wisconsin small claims court and the New Mexico judges who got sanctioned in March. The technology does not care whether your firm has 900 lawyers or one. The verification gap is identical.
What makes the S&C filing instructive is not that it happened. It is who it happened to. This is a firm with formal AI policies, mandatory training, and a designated AI use committee. Dietderich’s letter says the firm’s “safeguards are designed to prevent exactly this situation.” They were. They did not.
The opposing counsel who caught the errors, Matthew Schwartz at Boies Schiller Flexner, did not run anything fancy. He read the brief. The cases were not real. The cite-check failed because the citations were not checked.
Why “we have an AI policy” is now meaningless
Every AmLaw 100 firm has an AI policy. Most of them say some version of: lawyers may use approved tools, all AI output must be verified by a human, no client confidential information goes into public models. The Sullivan & Cromwell policy reportedly says all of that.
Policy is not the constraint. Execution is. When a senior associate is closing out a brief at midnight before a 9 a.m. filing deadline, the policy that says “verify every citation” loses to the deadline that says “file by 9.” The Prince Global brief got out the door because someone did the easy version of cite-checking, the version where you skim the Bluebook format and trust that the AI gave you a real case.
The Bloomberg Law write-up notes that Dietderich said the firm “is evaluating whether further enhancements to its internal training and review processes are warranted.” Translated from BigLaw passive voice: nobody knows yet what to actually change.
The bankruptcy court angle matters
This was a bankruptcy filing in the Southern District of New York, in front of a judge (Glenn) who has been on the bench for nearly two decades and runs a tight courtroom. Bankruptcy is also where AI errors are most dangerous because the briefing is dense, the case law is jurisdiction-specific, and judges are often working through hundreds of motions in parallel. Hallucinated citations slip through more easily because the volume is higher.
The case itself is also high-profile. Sullivan & Cromwell represents the liquidators of Prince Global Holdings Ltd., a group of British Virgin Islands entities tied to Chen Zhi, who was indicted by DOJ in October 2025 for allegedly running forced-labor scam compounds in Cambodia. This is a billion-dollar fraud and money-laundering matter. Not the kind of file where you want to be explaining hallucinated cases to a federal judge.
What general counsel should ask their outside firms this week
If you are a GC at a public company or a regulated business, this is the week to send the email. Not next quarter. This week.
Ask three questions:
- What AI tools is your firm using on my matters, and which models touch privileged or confidential material? If they cannot answer in writing within 48 hours, that is the answer.
- Who at your firm signs off that every citation in every filing on my matters has been independently verified by a human, and what does that verification process actually look like? Vague answers mean no real process.
- What is your firm’s policy on disclosing AI use to opposing counsel and the court? Some districts now require it. If your firm does not know which districts, that is also the answer.
The goal is not to ban AI from your matters. AI-assisted cite-checking, document review, and discovery are real productivity gains, and clients should expect them. The goal is to know whether your firm’s policy is operational or theatrical.
What in-house teams should do internally
The same discipline applies to your own legal department. Most in-house teams are quietly running ChatGPT, Claude, and Copilot on contracts, demand letters, and regulatory filings. Some of that work goes out to courts, agencies, and counterparties under your company’s name.
Three things to put in place if they are not already there:
- A written log of every AI-assisted document that leaves the legal department, with the name of the human who verified it.
- A rule that no AI-generated citation gets filed without being pulled from Westlaw, Lexis, or PACER and read by a human.
- A quarterly audit of a random sample of those documents by someone other than the drafter.
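Those three controls are lightweight enough to sketch in code. A minimal, hypothetical version in Python (the file path, column names, verifier names, and sample size are all illustrative, not a prescribed system):

```python
import csv
import random
from datetime import date
from pathlib import Path

LOG_PATH = Path("ai_output_log.csv")  # hypothetical log location

def log_ai_document(doc_name: str, verifier: str) -> None:
    """Append one outgoing AI-assisted document, with its human verifier, to the log."""
    new_file = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["date", "document", "verified_by"])
        writer.writerow([date.today().isoformat(), doc_name, verifier])

def quarterly_audit_sample(sample_size: int = 5) -> list[list[str]]:
    """Pick a random sample of logged documents for re-review by someone other than the drafter."""
    with LOG_PATH.open(newline="") as f:
        rows = list(csv.reader(f))[1:]  # drop the header row
    return random.sample(rows, min(sample_size, len(rows)))

# Illustrative entries only.
log_ai_document("2026-04-20 demand letter (Acme dispute)", verifier="J. Rivera")
log_ai_document("2026-04-21 regulatory comment draft", verifier="M. Chen")
print(quarterly_audit_sample(2))
```

The point of the sketch is the shape of the control, not the tooling: every outgoing document has a named human verifier on record, and the audit sample is chosen at random rather than by the drafter.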
This is not paranoia. It is the same posture you would take with a junior associate. The model is the junior associate. It is fast, confident, and occasionally makes things up.
What to do now
Pull a random sample of the last 20 briefs, motions, or memos your outside counsel filed on your behalf in the last 60 days. Pick 10 citations from each. Check them against Westlaw or PACER. If even one is hallucinated, you have a conversation to have with the partner. If they are all real, you still have a conversation to have, because now you know they are doing the work and you should pay for it without complaining about the bill.
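The mechanical half of that spot-check, pulling citations out of a brief and picking a random sample, can be scripted; the verification itself still means retrieving each case from Westlaw, Lexis, or PACER and reading it. A rough sketch with a deliberately oversimplified reporter-citation pattern (the regex and the sample brief text are illustrative only; a real cite-checker handles far more formats):

```python
import random
import re

# Simplified pattern for "volume Reporter page" citations,
# e.g. "550 U.S. 544" or "292 B.R. 507". Illustrative only.
CITE_RE = re.compile(
    r"\b\d{1,4}\s+(?:U\.S\.|S\. Ct\.|F\.(?:2d|3d|4th)?|B\.R\.)\s+\d{1,4}\b"
)

def sample_citations(brief_text: str, k: int = 10) -> list[str]:
    """Extract distinct reporter citations and return up to k of them at random."""
    cites = sorted(set(CITE_RE.findall(brief_text)))
    return random.sample(cites, min(k, len(cites)))

# Illustrative brief text.
brief = (
    "Under Bell Atl. Corp. v. Twombly, 550 U.S. 544 (2007), and "
    "In re Enron Corp., 292 B.R. 507 (Bankr. S.D.N.Y. 2003), the motion fails."
)
for cite in sample_citations(brief, k=10):
    print(cite)  # each sampled cite gets pulled from a real database and read by a human
```

Random selection matters here for the same reason it does in the quarterly audit: if outside counsel knows which citations you will check, the check proves nothing.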
Sullivan & Cromwell will survive this. They are too big and too embedded to lose major clients over one filing. The smaller firms in the next ten hallucination cases will not be so lucky. Neither will the GCs who signed off on their work without asking the questions above.
The era of trusting that BigLaw has this handled is over. It ended on a Friday in April when the firm that wrote OpenAI’s ethics memo had to apologize to a bankruptcy judge for the AI it could not control.