A Wisconsin DA Used AI to Draft Court Filings. 74 Criminal Counts Were Dismissed.

Don Ho

Last updated: February 1, 2026

A Kenosha County district attorney’s use of AI to draft legal filings resulted in hallucinated citations, judicial sanctions, and the dismissal of all 74 criminal counts, including 38 felonies, making it the most consequential AI-related court failure to date in a U.S. criminal case. DA Xavier Solis filed AI-drafted documents containing citations to cases that don’t exist. The defense caught it. Circuit Court Judge David Hughes sanctioned the prosecutor, and all 74 counts against defendants Christain Garrett, 26, and Cornelius Garrett, 32, were dismissed.

The dismissal was technically for lack of probable cause, not directly because of the AI error. Defense attorney Michael Cicchini confirmed that to CBS 58. But the AI fiasco destroyed the prosecution’s credibility at the worst possible moment, and the case is now gone (dismissed without prejudice, meaning it could theoretically be refiled, but the damage is done).

What Actually Happened

The charges stemmed from a 2023 investigation into burglary and property damage. The defendants faced a combined 74 criminal counts. After the defense moved to dismiss in August 2025, the DA’s office filed a reply opposing dismissal. That reply contained AI-generated content, including legal citations that didn’t exist.

The defense filed a motion identifying “AI hallucinations” in the state’s filing. The fabricated citations undermined the prosecution’s legal arguments. Judge Hughes sanctioned Solis and ultimately dismissed all charges for lack of probable cause based on the evidence presented at a 2023 preliminary hearing.

A different district attorney was in office when the preliminary hearing occurred. Solis inherited the case. He also inherited the burden of proving it, and he chose to let AI do that work without verifying the output.

This Isn’t the First Time. It Won’t Be the Last.

The legal profession has been dealing with AI-generated hallucinations in court filings since at least 2023. The case that made national headlines was Mata v. Avianca, where New York attorney Steven Schwartz submitted a ChatGPT-generated brief containing six fictitious case citations. Judge P. Kevin Castel sanctioned Schwartz and his colleague Peter LoDuca $5,000 and referred them for disciplinary proceedings.

Since then, the list has grown: one widely cited tracker of AI hallucination cases now counts over 1,200 incidents globally. A Texas attorney was sanctioned in 2024 for hallucinated citations. A Colorado lawyer faced discipline for the same. The Eastern District of Virginia now requires disclosure of AI use in filings. Multiple state and federal courts have adopted local rules requiring attorneys to certify that they’ve verified AI-assisted research.

The pattern is playing out in courtrooms nationwide, from New Mexico to Georgia to Nebraska. The Kenosha case adds a new dimension because it involves a prosecutor, not a private attorney. When a DA’s office submits fabricated citations, it doesn’t just risk sanctions. It risks letting defendants go free on cases that might have merit. It also raises constitutional concerns about whether the government is meeting its due-process obligations under Brady and its progeny.

The Verification Problem Is Structural

Attorneys are trained to verify citations, and Westlaw, LexisNexis, and other legal research platforms have offered citation-checking tools for decades. Yet even Walmart’s legal team got caught submitting an AI-drafted filing with similar problems. The problem with AI-generated legal content is that it looks plausible. The case names sound real. The citation formats are correct. The holdings are written in proper legal language. Everything passes the eye test.

That’s exactly what makes it dangerous. A tired attorney reviewing a brief at 10pm will read an AI-generated citation and think “that sounds right” because AI is specifically designed to produce text that sounds right. The hallucination is structurally invisible without affirmative verification against a primary legal database.
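To make that concrete, here is a minimal sketch of what affirmative verification can look like, using the Free Law Project’s CourtListener citation-lookup API. The endpoint path, the token header, and the response fields below are my reading of the public docs, not guarantees; confirm them at https://www.courtlistener.com/help/api/ before building on this:

```python
# Minimal sketch: check every citation in a draft brief against
# CourtListener, a primary legal database run by the Free Law Project.
# Assumptions (verify against the docs): the v3 citation-lookup endpoint
# accepts POSTed text, and each result carries a "status" of 200 when the
# citation resolves to a real case and 404 when it does not.
import requests

LOOKUP_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"
API_TOKEN = "your-courtlistener-token"  # placeholder; free accounts issue one


def verify_citations(brief_text: str) -> list[dict]:
    """Send the brief's text; return one record per citation found in it."""
    resp = requests.post(
        LOOKUP_URL,
        data={"text": brief_text},
        headers={"Authorization": f"Token {API_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [
        {
            "citation": hit.get("citation"),
            "found": hit.get("status") == 200,  # 404 = no matching case
            "matches": [c.get("absolute_url") for c in hit.get("clusters", [])],
        }
        for hit in resp.json()
    ]


if __name__ == "__main__":
    with open("reply_brief.txt") as f:
        results = verify_citations(f.read())
    for r in results:
        if not r["found"]:
            print(f"NOT FOUND -- verify by hand: {r['citation']}")
```

Even a pass like this is only a backstop. A citation can resolve to a real opinion and still be cited for a holding it doesn’t contain, which is why any verification layer has to end with a human reading the case.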

Several legal AI vendors (Casetext, now part of Thomson Reuters; vLex’s Vincent AI; Harvey) have built verification layers into their products. These tools cross-reference generated citations against actual case databases. But the DA in Kenosha apparently used a general-purpose AI tool (the specific tool wasn’t disclosed) without any legal-specific verification layer.

Courts Are Responding, But Slowly

As of early 2026, at least 25 federal district courts and a growing number of state courts have adopted standing orders or local rules addressing AI in legal filings. The approaches vary widely.

Some require mandatory disclosure of any AI assistance. Others require certification that all citations have been independently verified. A few have attempted outright bans on AI-generated content, which creates its own problems (how do you define “AI-generated” when attorneys use AI-enhanced research tools routinely?).

The Fifth Circuit adopted the most detailed framework so far, requiring attorneys to identify AI tools used, certify human review of all output, and disclose any AI-generated language that appears verbatim in the filing. Other circuits are watching to see how it works in practice.

No uniform standard has emerged. The result is a patchwork where the disclosure obligation depends on which courthouse you’re filing in, and federal judges are themselves adopting AI at varying speeds. Organizations that build verification into their AI governance using a framework like the 5-Layer AI Compliance Stack are far less likely to end up in this position.

What to Do Now

If you’re a lawyer using AI for legal research, verify every citation against a primary legal database. No exceptions. Westlaw, Lexis, Google Scholar, or the court’s own database. If you can’t find the case, it doesn’t exist.

If you’re a managing partner or GC, establish a firm-wide AI use policy. The policy should require disclosure of AI tool usage, mandate citation verification, and specify which AI tools are approved for legal research. Put it in writing. Train on it. Enforce it.
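To illustrate (this is a sketch, not legal advice, and every tool name and field below is hypothetical), the checkable parts of such a policy can even be encoded as a pre-filing gate:

```python
# Toy pre-filing gate encoding the checkable parts of a firm AI policy.
# The approved-tools list and record fields are hypothetical examples.
from dataclasses import dataclass, field

APPROVED_AI_TOOLS = {"Westlaw Precision", "Lexis+ AI", "Vincent AI"}


@dataclass
class FilingRecord:
    document: str
    ai_tools_used: set[str] = field(default_factory=set)
    citations_verified: bool = False  # signed off by a human attorney
    ai_use_disclosed: bool = False    # per local rule or standing order


def pre_filing_gate(rec: FilingRecord) -> list[str]:
    """Return policy violations; an empty list means clear to file."""
    problems = []
    unapproved = rec.ai_tools_used - APPROVED_AI_TOOLS
    if unapproved:
        problems.append(f"Unapproved AI tools: {', '.join(sorted(unapproved))}")
    if rec.ai_tools_used and not rec.citations_verified:
        problems.append("AI was used but citations were not verified")
    if rec.ai_tools_used and not rec.ai_use_disclosed:
        problems.append("AI use was not disclosed")
    return problems
```

The value isn’t the automation; it’s that a gate like this forces someone to answer the disclosure and verification questions before the document leaves the building.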

If you’re a client, ask your outside counsel about their AI policies. You’re the one who suffers when fabricated citations tank your case. Ask whether the firm has an AI use policy, what verification procedures they follow, and whether they carry malpractice insurance that covers AI-related errors.

If you’re in-house, check your local court rules. The disclosure requirements change frequently. If you’re filing in federal court, check the local rules of the specific district. If you’re in state court, check whether the presiding judge has a standing order on AI. This is now part of basic filing diligence.

The Kenosha case will be studied in legal ethics courses for years. But the lesson is simple: AI can draft a brief, but it can’t verify one. That’s still your job. If you skip it, you might lose 74 counts on a case you were supposed to win.

74 criminal counts dismissed because of unchecked AI. Take the ACRA to assess your organization’s AI verification gaps.
