More Than 60% of Federal Judges Are Using AI. That Should Worry You.
By Don Ho, Esq. | April 6, 2026
A Northwestern University study published this week found that more than 60% of surveyed federal judges have used AI tools in their judicial work, despite no uniform federal disclosure or quality-control standards and documented cases of hallucinated citations in court orders. Twenty-two percent said they use AI daily or weekly. One of the study’s co-authors is a sitting federal judge who feeds court filings into AI to generate case timelines, draft rulings, and identify weaknesses in attorneys’ arguments.
This is not a fringe experiment. This is the federal judiciary quietly integrating a technology that hallucinates into the machinery that decides people’s rights. The same judiciary that has sanctioned lawyers at record rates for using AI irresponsibly is now using it behind the bench.
The Numbers Tell a Bigger Story Than Judges Admit
The study collected responses from 112 federal judges. The headline number, 60%, is striking enough. But the operational details are more revealing.
Judge Xavier Rodriguez of the U.S. District Court for the Western District of Texas, who co-authored the study, told the Washington Post that he routinely uploads case filings into AI tools before hearings. He uses AI to generate chronologies, suggest questions for attorneys, and identify weaknesses in a plaintiff’s case. After deciding on a ruling, he sometimes uses AI to draft it.
Rodriguez has served on the federal bench for over 20 years. He’s not some tech-curious newcomer experimenting on low-stakes traffic cases. He’s a senior district judge using AI in contested civil litigation.
The Los Angeles County Superior Court announced a pilot program in March with Learned Hand, a legal startup building AI specifically for judges. Learned Hand’s technology is already being used in trial courts across 10 states and the Michigan Supreme Court. Thomson Reuters and LexisNexis both have contracts to provide AI tools directly to the federal judiciary.
The Hallucination Problem Has Already Hit the Bench
The optimistic version of this story is that AI helps overworked judges process massive caseloads more efficiently. The problem with that version: it ignores what already happened.
Last year, two federal judges had AI-generated hallucinations appear in their own court orders. Judge Henry T. Wingate of the Southern District of Mississippi and Judge Julien Xavier Neals of the District of New Jersey both issued orders that cited nonexistent cases, contained false descriptions of plaintiffs’ allegations, and included fabricated quotes. Both judges attributed the errors to clerks and interns who used AI.
The Senate Judiciary Committee publicly called out both judges. Neals responded by banning AI from his chambers entirely. Wingate corrected the filings after attorneys flagged the errors.
Those weren’t hypothetical risks. Those were real court orders with fabricated law in them, affecting real cases with real parties. The corrections happened only because opposing counsel caught the problems. If no one had checked, those fake citations would be in the federal record today. In one San Diego custody case, a judge relied on an AI-hallucinated filing without verifying it, and no one caught the error until it was too late.
“Extra Set of Eyes” or Outsourced Judgment?
Rodriguez describes AI as “just an extra set of eyes.” That framing understates what’s actually happening. When a judge uploads all filings in a case and asks AI to “identify potential discriminatory statements,” the AI is performing legal analysis. When a judge uses AI to draft a ruling after deciding the outcome, the AI is performing legal writing that will carry the authority of a federal court order.
The distinction between “tool” and “decision-maker” gets blurry fast when the tool is generating the analysis a judge reviews before ruling and then drafting the ruling itself.
Eric Posner, a law professor at the University of Chicago, put it directly: judges “can’t gamble with a technology that is not fully understood and that is known to hallucinate.”
Legal AI vendors argue their products are safer than general-purpose chatbots because they source answers from databases of actual court cases. That argument took a hit in 2024 when a Stanford study found that legal AI tools from LexisNexis and Thomson Reuters, while more reliable than ChatGPT, still produced errors. “More reliable” is not the same as “reliable enough to draft court orders.” A New Mexico judge recently called out AI hallucinations in legal filings as a systemic problem, not an edge case.
What This Means for Practitioners
If you’re litigating in federal court, you need to operate under the assumption that the judge reviewing your motion may have first run it through an AI tool. That changes the calculus in several ways.
Your citations are going to get checked by machines. AI tools excel at cross-referencing citations against actual case databases. If you’re citing a case, make sure it exists, says what you claim it says, and is still good law. The margin for sloppiness just disappeared.
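To make the point concrete, here is a minimal sketch of the kind of automated cross-check a judge’s AI tool can run against a brief. The `KNOWN_CITATIONS` set is a stand-in for a real citator database (Westlaw, Lexis, CourtListener, and the like), and the regex covers only the simplest U.S. Reports format; this is an illustration of the verification pattern, not a production citator.

```python
import re

# Hypothetical stand-in for a real citator database. The cases listed
# are real, but this lookup is illustrative only.
KNOWN_CITATIONS = {
    "410 U.S. 113",   # Roe v. Wade
    "347 U.S. 483",   # Brown v. Board of Education
    "384 U.S. 436",   # Miranda v. Arizona
}

# Matches the "volume reporter page" core of a U.S. Reports citation,
# e.g. "384 U.S. 436". Real citation formats are far more varied.
CITE_PATTERN = re.compile(r"\b(\d{1,4})\s+(U\.S\.)\s+(\d{1,4})\b")

def find_unverified_citations(brief_text: str) -> list[str]:
    """Return citations in the text not found in the reference set."""
    cites = [" ".join(m.groups()) for m in CITE_PATTERN.finditer(brief_text)]
    return [c for c in cites if c not in KNOWN_CITATIONS]

brief = (
    "Plaintiff relies on Miranda v. Arizona, 384 U.S. 436 (1966), "
    "and the invented Smith v. Jones, 999 U.S. 999 (2025)."
)
print(find_unverified_citations(brief))  # the fabricated cite is flagged
```

A membership check like this is trivial for software and tedious for humans, which is exactly why machine-checked citations leave no margin for sloppiness: a fabricated cite that might slip past a busy clerk will not slip past a database lookup.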
Your arguments are being pre-analyzed. A judge who uploads your brief into an AI tool and asks it to “identify weaknesses” will arrive at the hearing with a pre-generated list of your vulnerabilities. Your oral argument had better address those weak points before the judge raises them.
AI-drafted orders are coming. When a judge tells you to prepare a proposed order, know that the judge may already have an AI-drafted version to compare against yours. The quality and specificity of your proposed order matters more than ever.
Disclosure is inconsistent. There is no uniform federal requirement for judges to disclose when they use AI in their work. Some judges are transparent about it. Others are not. You won’t always know whether AI influenced a ruling, and there’s currently no mechanism to find out. Meanwhile, Wisconsin’s experience with an AI-assisted DA shows what happens when AI enters the courtroom without proper guardrails.
What Should Happen Next
The federal judiciary needs a uniform AI use policy. Right now, adoption is happening judge by judge, court by court, with no consistent standards for disclosure, verification, or quality control. Some judges ban AI entirely. Others draft rulings with it. Attorneys and litigants have no way to know which approach their judge follows.
At minimum, any AI-assisted judicial work product should go through mandatory citation verification before filing. Judges who use AI to draft orders should be required to disclose that fact, the same way attorneys in many jurisdictions must now disclose their AI use. And AI tools deployed in the judiciary should be subject to independent accuracy audits, not just vendor self-reporting.
The efficiency argument is real. Federal courts are overwhelmed, and judges without adequate staff need tools to manage their dockets. But efficiency gains mean nothing if they come with a credibility tax. The legitimacy of the federal judiciary depends on the public’s belief that judges are applying actual law to actual facts. When hallucinated citations start appearing in court orders, that belief erodes.
The 60% adoption number is not going down. The question now is whether the judiciary will build the guardrails before the next hallucination lands in a ruling that changes someone’s life. India’s Gujarat High Court took the opposite approach and banned AI from judicial decision-making entirely. American courts haven’t come close to that level of clarity.
Judges are adopting AI faster than the rules. If you’re deploying AI in legal work, take the ACRA to identify your exposure.
Judges are using AI to analyze your filings. Are your filings ready for that? Kaizen AI Lab helps legal teams deploy AI that holds up under scrutiny — from opposing counsel and from the bench. Let’s talk.
Don Ho, Esq. is Founder & CEO of Kaizen AI Lab, advising companies on operational growth strategies and the legal aspects of AI integration in their businesses.