A California Court Just Ordered OpenAI to Ban a User From ChatGPT. Nobody Asked Whether That's Constitutional.
By Don Ho, Esq. | April 16, 2026
On April 13, 2026, San Francisco Superior Court Judge Harold Kahn granted a temporary restraining order requiring OpenAI to keep a specific user locked out of ChatGPT until at least May 6. The court issued the order without conducting any First Amendment analysis, setting a precedent that could allow courts to restrict access to any AI platform without constitutional review. The user, identified as John Roe, allegedly used ChatGPT to fuel a months-long stalking and harassment campaign against his ex-girlfriend, including generating dozens of fake psychological reports about her and encoding a death threat sent to her family.
The facts of this case are genuinely alarming, and they add another entry to the growing catalog of real-world AI safety harms that move the conversation beyond hypotheticals. The case also lands in a state where the attorney general is already ramping up AI enforcement, making California ground zero for platform liability litigation. Roe was arrested on four felony counts, including communicating a bomb threat and assault with a deadly weapon. A criminal court found him incompetent to stand trial and ordered him committed for mental health treatment. He was released on a procedural technicality when the state failed to transfer him from jail to the facility on time.
But here is the part that should concern every lawyer, every AI company, and every person who uses an AI platform for anything: the court issued this order without conducting a First Amendment analysis. And the user whose access was cut off was not a party to the proceeding and had no opportunity to be heard.
The Lawsuit Behind the Order
The case is Doe v. OpenAI, filed in San Francisco Superior Court. Jane Doe, Roe’s ex-girlfriend, sued OpenAI on theories of negligent entrustment, product design defect, failure to warn, and unlicensed psychological counseling. She asked the court for an emergency order forcing OpenAI to block Roe from the platform.
The complaint paints a disturbing picture. Roe used ChatGPT extensively during his harassment campaign. He generated fabricated psychological profiles of his ex-girlfriend and distributed them to her family, friends, colleagues, and clients. He spoofed her company email. He contacted former employers. He left voicemails threatening physical violence. He used ChatGPT to encode and transmit a death threat to her family.
His ChatGPT account contained conversations titled “Violence list expansion” and “Fetal suffocation calculation.” (First Amendment scholar Eugene Volokh, who followed the case, noted the second title likely relates to the user’s theories about sleep apnea and fetal asphyxiation, not literal plans for violence, though the ambiguity itself illustrates the problem.)
OpenAI’s own safety systems had previously flagged Roe’s account for “Mass Casualty Weapons” activity and banned it. OpenAI initially upheld the ban on appeal, then reversed itself the next day, restored access, and apologized to Roe for the inconvenience. After that reversal, Doe submitted a detailed abuse report to OpenAI. The company called it “extremely serious and troubling,” promised “appropriate action,” and did nothing.
The Constitutional Problem Nobody Raised
Judge Kahn granted the TRO. Roe’s accounts will remain suspended until the preliminary injunction hearing on May 6.
OpenAI’s lawyers mentioned the First Amendment during the hearing. They cited Packingham v. North Carolina, the 2017 Supreme Court decision holding that the government cannot broadly restrict a person’s access to internet platforms because the internet is the “modern public square” where free speech protections apply. OpenAI argued that blocking Roe from using ChatGPT for any purpose would be overbroad.
According to Volokh’s research assistant, who attended the hearing, there was no meaningful discussion of the user’s speech rights by the court. The order was granted without First Amendment analysis.
That is a problem, regardless of how reprehensible Roe’s conduct was.
There is a clean distinction here. If OpenAI voluntarily decides to ban a user, there is no state action and no First Amendment issue. Private companies can set their own terms of service and enforce them however they want. OpenAI could have permanently banned Roe at any point.
A court order is different. When a judge orders a private company to cut off a specific person’s access to a general-purpose communications platform, the government is restricting that person’s ability to access information and communicate. That is state action. First Amendment scrutiny applies.
Why This Matters Beyond One Stalker
The immediate reaction to this case will be obvious: the guy is a dangerous stalker with felony charges who used AI to terrorize someone. Good. Ban him. And on the merits, that reaction might be right. Courts can restrict speech and access to communication tools for people who have been convicted of crimes or adjudicated as dangerous. Restraining orders routinely limit contact between parties.
But the procedure matters. Three problems stand out.
First, the user was not a party and was not heard. Roe had no opportunity to argue against the order before it was issued. In the prior restraint context, courts typically require notice and an opportunity to be heard before a person's access to a communication platform is cut off.
Second, the order covers all use of ChatGPT, not just harmful use. The TRO does not say "Roe cannot use ChatGPT to contact Jane Doe" or "Roe cannot use ChatGPT to generate content about Jane Doe." It requires OpenAI to suspend his access entirely. If Roe wanted to use ChatGPT to research a medical condition or draft a grocery list, the order prevents that too.
Third, the precedent extends to every AI platform. Oregon just passed a law creating a private right of action with statutory damages for chatbot harms, and the AI regulatory patchwork across states means platform operators already face different liability standards in every jurisdiction. If a court can order OpenAI to ban a user from ChatGPT without First Amendment analysis, can a court order Google to ban a user from Gmail? Can it order Microsoft to cut off access to Bing? Can it order a cell phone carrier to terminate service? The logic has no natural stopping point once you accept that courts can order platform bans without constitutional review.
What to Do Now
For AI companies: expect more of these requests. Build your legal response framework now. When a court asks you to ban a user, you need a position on whether and how First Amendment protections apply to your platform. OpenAI raised Packingham at the hearing but apparently did not push back hard enough to get the court to engage with the argument.
For lawyers representing harassment victims: this case gives you a new tool, but use it carefully. An overbroad order that gets reversed on appeal helps nobody. Tailor your requests to the specific harmful conduct rather than requesting blanket platform bans.
For general counsel at any company building or deploying AI tools: the liability theories in this case (negligent entrustment, product design defect, failure to warn) are coming for you. OpenAI’s own internal safety system flagged this user, banned him, reversed the ban, received a detailed abuse report, acknowledged the severity, and did nothing. That fact pattern is a plaintiff’s lawyer’s dream. And OpenAI is already getting sued for practicing law without a license — the liability theories are stacking up from every direction.
The May 6 preliminary injunction hearing will be worth watching. It may be the first time a court seriously grapples with whether ordering an AI company to ban a user is a prior restraint on speech. If it is, the constitutional standard the court applies will shape AI platform governance for years.
Courts are now ordering AI platforms to ban users — and the liability theories in the complaint apply to every company with a chatbot, not just OpenAI. Kaizen AI Lab helps AI companies build safety, moderation, and governance frameworks before the lawsuits arrive. Talk to us.