A Lawyer Copy-Pasted AI Output Into a Federal Court Filing. The Judge Called It a "Perilous Shortcut."
By Don Ho, Esq. | April 16, 2026
A U.S. Magistrate Judge in Indianapolis rebuked an attorney in April 2026 for copying raw AI output directly into a federal court filing without any independent legal analysis, calling it a “perilous shortcut” and establishing that the absence of professional judgment, not just AI hallucination, is grounds for judicial sanction. The lawyer admitted in open court that he uploaded Walmart’s discovery responses into an AI program, asked it to identify deficiencies, and then copied and pasted the output directly into an email to opposing counsel and the court. No review. No independent analysis. Just raw AI output, submitted as legal work product.
U.S. Magistrate Judge Tim Baker in Indianapolis was not impressed. In a written order dated April 15, 2026, Baker rebuked attorney Mark Waterfill for what the judge called a “perilous shortcut around his responsibilities as a trained legal professional.” The case is Cynthia White v. Walmart, No. 1:25-cv-01120-RLY-TAB, Southern District of Indiana.
What Actually Happened
Waterfill represents a plaintiff who claims Walmart retaliated against her after she reported a workplace injury. Walmart denies the allegations. During discovery, Waterfill needed to challenge Walmart's responses to his client's discovery requests.
Instead of reading the responses himself and applying his legal training to identify actual deficiencies, Waterfill took the entire batch of Walmart’s discovery responses and fed them into an AI tool. The specific program was not identified in the order. He asked the AI to find problems with the responses. Then he copied whatever the AI generated and pasted it into a communication that went to Walmart’s lawyers at Ogletree Deakins and to the court.
At a hearing last week, Judge Baker observed that the objections in Waterfill’s filing were clearly AI-generated. Waterfill confirmed it. There was no independent legal analysis layered on top.
Baker’s order made the point directly: “AI is a useful tool, but not a substitute for good lawyering.”
Why This Matters More Than the Usual AI-in-Court Story
We have covered AI hallucination cases on this blog. Lawyers citing fake cases generated by ChatGPT. Judges in New Mexico and San Diego sanctioning attorneys for submitting fabricated citations. Those are bad, but the failure mode is obvious: the AI made something up, the lawyer didn’t check, the court caught it.
The Waterfill case is different, and arguably worse. There’s no indication that the AI fabricated anything. The problem is that Waterfill didn’t exercise any professional judgment at all. He outsourced the entire analytical function of his job to a machine and submitted the result without evaluation.
That distinction matters for every lawyer using AI right now. The question is not just “did the AI hallucinate?” The question is “did you actually practice law, or did you let the AI do it for you?”
Baker framed it as a breach of the lawyer’s obligation to “independently consider any discovery deficiencies identified by AI” before relying on that analysis in court proceedings. The AI can flag issues. The lawyer still has to think about whether those issues are real, whether they’re worth raising, and how to present them in a way that reflects actual legal strategy.
The Pattern Is Getting Harder to Ignore
This is not an isolated incident. The Waterfill ruling lands in a growing stack of judicial pushback against lawyers who treat AI tools as autopilot.
In March 2026, a federal appeals court fined lawyers $24,400 for AI-related misconduct in a FIFA case. Courts in New Mexico, San Diego, and elsewhere have sanctioned or warned attorneys for AI-generated filings with fabricated content. A Kenosha DA’s AI-generated brief got 74 criminal counts dismissed. Judge Jed Rakoff in Manhattan just ruled that AI chatbot conversations are discoverable and not protected by attorney-client privilege. Every week brings a new data point.
The common thread in all of these cases: the lawyer stopped doing lawyer things and started doing copy-paste things. The AI was not the problem. The absence of professional judgment was the problem. The sanctions tracker keeps growing because the pattern keeps repeating.
What This Means for Your Practice
If you are a lawyer using AI tools (and, used correctly, you should be), here are the operating rules that cases like White v. Walmart are establishing:
AI output is a starting point, not a filing. Treat anything an AI generates the same way you would treat a first-year associate’s draft. Review it. Edit it. Apply your own judgment. If you would not sign a brief your associate wrote without reading it, do not sign one an AI wrote without reading it.
Document your review process. The day is coming when courts will require attorneys to certify not just that they reviewed AI-generated content, but that they independently verified its accuracy and relevance. Start building that habit now.
Understand what the AI actually did. If you feed discovery responses into an AI and ask it to "find deficiencies," you need to understand what analytical framework the AI is applying. Is it comparing the responses against the Federal Rules of Civil Procedure? Is it pattern-matching against some training corpus? If you don't know how the AI reached its conclusions, you cannot evaluate whether its output is correct. And if you're using consumer-grade AI tools for legal work, understand that courts are already ruling those conversations aren't privileged.
The bar is professional judgment, not perfection. Nobody expects lawyers to be right 100% of the time. Judges expect lawyers to exercise independent professional judgment 100% of the time. There is a wide gap between those two standards, and Waterfill fell into it.
The Bottom Line
Judge Baker’s order is four pages long and contains one sentence that should be printed on every law firm’s AI usage policy: “AI is a useful tool, but not a substitute for good lawyering.”
The lawyers getting sanctioned and rebuked are not the ones using AI. They are the ones using AI as a replacement for thinking. The difference between those two things is the entire practice of law.
Copy-paste AI workflows are the highest-risk pattern in legal practice. Book a diagnostic to build a compliant AI workflow for your firm.