New Mexico Judges Are Finding AI Hallucinations in Legal Filings. The Sanctions Are Starting.
By Don Ho, Esq. | April 13, 2026
New Mexico federal and state courts have identified AI-generated hallucinations in at least seven lawsuits since 2023, with sanctions now reaching $8,640 for individual litigants and judges issuing standing orders requiring disclosure and verification of any AI-assisted filing. An Albuquerque man filed a federal employment discrimination lawsuit last year and asked for $355.69 quintillion in damages. That’s $355,687,428,096,000,000,000. Senior U.S. District Judge Judith Herrera called the request “quite simply ludicrous.” He’s not alone, and the judicial response is shifting from warnings to fines.
The Pattern Judges Are Seeing
The cases follow a consistent pattern. A self-represented litigant or an attorney uses ChatGPT or a similar tool to draft a legal filing. The AI generates citations to cases that do not exist. The filing gets submitted without anyone checking whether the cited cases are real. The opposing party or the court catches the fabrications. Sanctions follow.
In one Las Cruces case, U.S. Magistrate Judge Damian Martinez found that an attorney filed a pleading citing at least six nonexistent cases. An out-of-state lawyer drafted the brief, and the New Mexico attorney of record filed it without reading it. Martinez fined the attorney $1,500, ordered him to report the incident to both state and federal bar disciplinary committees, and required him to complete a one-hour course in legal ethics focused on AI use in legal writing.
Martinez’s ruling laid out the technical explanation with unusual precision for a judicial opinion. He wrote that hallucinations occur when an AI database generates fake sources of information because the model learns patterns from incomplete or flawed training data, producing “inaccurate predictions or hallucinations.” That level of technical specificity in a court ruling signals that judges are no longer treating AI errors as an unfamiliar novelty. They understand the mechanism. They know what to look for.
Self-Represented Litigants Are the Hardest Hit
State District Judge John P. Sugg of Carrizozo put it bluntly: “A lot of self-represented litigants, especially, are relying heavily on AI and they don’t know how to check these citations or the statutes.” The result is a stream of filings that look professionally formatted but contain fabricated legal authority. Judges are spending their limited time chasing down citations to cases that were never decided, statutes that were never enacted, and legal standards that were never articulated by any court.
This creates a particular problem for access to justice. Self-represented litigants often turn to AI tools because they cannot afford an attorney. The AI produces something that looks like a competent legal brief. The litigant files it in good faith, believing the citations are real. When the court discovers the hallucinations, the litigant faces sanctions, case dismissals, or both, all because they trusted a tool that presented fiction as authority.
The asymmetry is stark. The AI company that built the tool faces no liability. The person who relied on it in good faith pays the fine or loses the case. No regulator has addressed this gap. A recent San Diego dog custody case drove the point home: an attorney cited fake cases, the trial judge relied on them, and the appellate court faulted the lawyer and the trial court alike for failing to verify them.
The Judicial Response Is Formalizing
New Mexico is not waiting for a statewide policy. Judge Sugg issued his own standing order two weeks ago requiring any attorney or self-represented litigant who uses generative AI to draft, edit, or modify any filing to disclose that use at the top of the document. Filers must also certify that the AI-generated language was verified for accuracy using traditional legal research methods “or by a human being.”
The New Mexico Supreme Court is working on a formal statewide policy on AI use in the judiciary. Other states and federal courts are moving in the same direction, and a Northwestern study found that over 60% of federal judges are using AI themselves, meaning judges are on both sides of this problem. Last summer, a federal judge in Colorado fined two attorneys representing MyPillow CEO Mike Lindell $3,000 each after they submitted an AI-generated brief containing more than two dozen errors, including fabricated case citations.
In October 2023, then-Chief U.S. District Judge William Johnson of New Mexico encountered what he described as only the second instance he knew of in which a federal court confronted a pleading citing nonexistent judicial opinions. Two and a half years later, it’s happening regularly enough that individual judges are writing standing orders to address it. One public AI sanctions tracker now logs more than 1,200 cases globally, and the rate is still climbing.
The Real Risk for Practicing Attorneys
For solo practitioners and small-firm attorneys, the temptation to use AI for drafting is obvious. The tools are fast, cheap, and produce output that reads like competent legal writing. The problem is verification. Attorneys in a Walmart lawsuit learned this the hard way when AI-drafted filings blew up in court. Checking every citation in an AI-generated brief takes almost as long as writing the brief from scratch, so if you’re going to use AI for legal drafting, you need a verification workflow that is as rigorous as your editing process.
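That workflow starts with a mechanical step: pull every citation out of the draft so none slips through unchecked. Here is a minimal Python sketch of that extraction step; the regex, function name, and sample draft are illustrative assumptions, and the pattern covers only a few common reporters, not every citation format a brief can contain.

```python
# Minimal sketch of the extraction step: pull reporter-style citations
# (e.g., "123 F.3d 456") out of a draft so each one gets checked.
# The regex is illustrative only -- it catches a few common federal and
# state reporters, not every citation format a brief might contain.
import re

REPORTERS = r"(?:U\.S\.|S\. Ct\.|F\. Supp\.(?: 2d| 3d)?|F\.(?:2d|3d|4th)?|P\.(?:2d|3d)?|N\.M\.)"
CITATION = re.compile(rf"\b(\d{{1,4}}\s+{REPORTERS}\s+\d{{1,4}})\b")

def extract_citations(draft: str) -> list[str]:
    """Return every reporter citation found in the draft, in order."""
    return [m.group(1) for m in CITATION.finditer(draft)]

if __name__ == "__main__":
    # The first citation below is real (Mata v. Avianca); the second is
    # invented here to show what an unverifiable entry looks like.
    draft = (
        "Plaintiff relies on Mata v. Avianca, Inc., 678 F. Supp. 3d 443 "
        "(S.D.N.Y. 2023), and Smith v. Jones, 999 F.4th 123 (10th Cir. 2025)."
    )
    for cite in extract_citations(draft):
        print(f"[ ] verify: {cite}")  # unchecked until confirmed in a database
```

The output is a checklist, not a verdict: every extracted citation stays flagged until a human confirms it exists.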
The attorneys getting sanctioned are not the ones using AI carefully. They’re the ones who skipped the verification step. The New Mexico attorney who filed six fake citations didn’t read the brief before filing it. The Lindell attorneys submitted a document with more than two dozen errors. These are not close calls. These are complete failures of the most basic professional obligation: know what you’re filing.
Bar associations across the country have been issuing guidance on AI use in legal practice, and the consistent message is that the duty of competence requires attorneys to verify AI-generated work product. Model Rule 1.1 has not changed. The competence standard has always required attorneys to understand the tools they use and the limitations of those tools. AI is not exempt from that requirement. And the Wisconsin DA who let AI run his court filings is exhibit A for what happens when you forget that.
What to Do Now
If you’re an attorney using AI tools in your practice, implement a mandatory verification step for every AI-generated citation. No exceptions. Run every case name through Westlaw, Lexis, or a free case database. If you cannot find the case, do not cite it. If a tool generates a citation you cannot verify in under two minutes, flag it and replace it with a real citation that supports the same proposition.
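The lookup itself can be scripted against a free database. The sketch below assumes CourtListener’s public REST search endpoint and a JSON response carrying a count field; the endpoint, parameters, and any token requirement are assumptions to confirm against the current API documentation before relying on this, and zero hits is a red flag to investigate, not proof of fabrication.

```python
# Hedged sketch of the lookup step. It assumes CourtListener's public REST
# search endpoint and a JSON response with a "count" field -- confirm both
# against the current API docs (https://www.courtlistener.com/help/api/),
# and note the API may require a free authentication token.
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://www.courtlistener.com/api/rest/v4/search/"

def citation_hits(citation: str) -> int:
    """Return the number of case-law results reported for an exact citation."""
    # Quote the citation so "678 F. Supp. 3d 443" is matched as a phrase,
    # not as scattered numbers; "type=o" restricts the search to opinions.
    params = urllib.parse.urlencode({"q": f'"{citation}"', "type": "o"})
    req = urllib.request.Request(
        f"{SEARCH_URL}?{params}",
        headers={"User-Agent": "citation-check/0.1"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp).get("count", 0)

if __name__ == "__main__":
    # Same demo pair as above: one real citation, one invented.
    for cite in ["678 F. Supp. 3d 443", "999 F.4th 123"]:
        hits = citation_hits(cite)
        status = "found" if hits else "NOT FOUND -- verify manually before citing"
        print(f"{cite}: {status} ({hits} hits)")
```

A citation that comes back with zero hits goes into the manual pile for Westlaw, Lexis, or Google Scholar; it gets cut or replaced only after a human confirms it does not exist.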
If you’re a judge or court administrator, consider adopting a disclosure requirement similar to Judge Sugg’s standing order. Requiring disclosure does not prohibit AI use. It creates accountability. Attorneys and litigants who know they must certify verification are far more likely to actually perform it.
If you’re a self-represented litigant, understand that ChatGPT, Claude, and every other generative AI tool will sometimes invent case citations that look completely real but do not exist. Before filing anything generated by AI, check every single citation against a legal database. If you don’t have access to Westlaw or Lexis, use Google Scholar’s case law search or the court’s own electronic filing system. A citation that returns zero results in any legal database is almost certainly fabricated.
The era of filing AI-generated briefs without verification is over. The courts have identified the problem, they understand the technology, and they are sanctioning the people who fail to check their work. Treat every AI-generated legal citation as unverified until you prove otherwise.
AI hallucinations in court filings are accelerating. Book a diagnostic to build your firm’s AI verification protocol.