After Cox: The Supreme Court Just Made Your AI Vendor Bulletproof. Liability Now Lives With You.
Last updated: April 26, 2026
Three weeks ago the Supreme Court decided Cox Communications, Inc. v. Sony Music Entertainment and reversed a billion-dollar verdict against an internet service provider that kept serving subscribers it knew were pirating music. The Court held that mere knowledge of user infringement is not enough for contributory copyright liability. A defendant either has to actively induce the infringement or operate a service “tailored to infringement” with no substantial noninfringing use. Last Friday a Chinese AI video generator that produces Darth Vader on demand cited Cox in a motion to dismiss filed in the Central District of California. That motion is the canary in the coal mine, and every general counsel using AI tools should watch whether it survives.
The case that started it
Disney, Universal, and Warner Bros. sued the operators of Hailuo AI, a Chinese image and video generator marketed as “a Hollywood studio in your pocket.” Hailuo will produce short clips of Spider-Man, Homer Simpson, Shrek, the Joker, and Bugs Bunny from five-word prompts. The model can do this because it was trained on unauthorized copies of the studios’ content. Hailuo already filters violent and pornographic outputs. It chose not to filter copyrighted characters.
Nanonoble Pte. Ltd., the Singaporean entity operating Hailuo for U.S. users on behalf of Chinese AI company MiniMax, moved to dismiss the entire complaint on April 24, 2026. A hearing on the motion is set for May 29 in front of Judge Stanley Blumenfeld, Jr.
The argument is straightforward and dangerous. After Cox, knowing your users might infringe and choosing not to stop them is “mere inaction.” The Supreme Court said inaction is not enough. So MiniMax says it is not liable for what its users prompt, even if the model can only generate Darth Vader because MiniMax trained it on Star Wars.
That is the argument the studios have to break. It is also the argument that, if it lands, will reshape AI copyright liability for every business that uses these tools.
What Cox actually held
The Cox opinion was a Supreme Court loss for the music industry that almost everyone underestimated until they read it. The Court held that contributory infringement requires intent. Intent can be shown in only two ways. Either the provider induced the infringement through specific affirmative acts of encouragement, or the service was “tailored to infringement,” meaning it had no substantial noninfringing use. Mere knowledge that a service will be used to infringe is insufficient.
That holding maps cleanly onto a generative AI defense. The model has substantial noninfringing uses (generating original characters, B-roll, marketing visuals). The company did not “induce” infringement; users typed the prompts. Knowledge that some users would prompt for Spider-Man is not enough.
The studios will argue that generative AI is structurally different from an internet service provider. An ISP transmits content the user already has. A generative model embodies copies of training material in the model’s weights and produces new copies on demand. That is a real distinction, and it is the only thing standing between the studios and a pleading-stage loss.
Why this matters if your company uses AI
Read the Cox argument from the AI vendor’s side and then read it from yours.
The AI vendor’s lawyer says: we built a tool with substantial noninfringing uses, we did not induce your specific use, and under Cox we cannot be liable for what our users do. Liability flows to the user. The user is the one who typed the prompt that produced an infringing output. The user is the one who downloaded, distributed, or commercialized the result.
That user is your company.
The Supreme Court’s Cox ruling, applied to AI, makes it harder to sue OpenAI, Anthropic, Stability, MiniMax, or whoever operates the model your team is using. It does not make it harder to sue you. It does the opposite. The contributory infringement doctrine that previously gave plaintiffs a tool to chase deep-pocketed AI companies is now harder to deploy. So plaintiffs will follow the money to the next available defendant, which is the operating company that actually used the output.
This is already happening. The studios’ MiniMax complaint attaches subscriber-posted Instagram, TikTok, and Reddit videos featuring the characters as direct evidence of infringement. Those are users. Those are operating companies. They are the ones whose accounts the studios can identify and sue.
Three things every GC should do this week
1. Audit your AI output workflows. Identify every place AI-generated content enters your business: marketing creative, product imagery, internal training, code generation, presentation graphics, customer-facing chatbots. For each, document the model used and the prompts. If you cannot reconstruct the chain of generation, you cannot defend it.
2. Push the indemnity question with your AI vendors. Most enterprise AI contracts now include some form of IP indemnity. Read yours. The standard OpenAI Enterprise indemnity covers outputs generated through ChatGPT and the API, with conditions. The Microsoft Copilot indemnity has scope limits. After Cox, vendors have less direct exposure, which gives them less motivation to maintain robust indemnity. Renegotiate while you still have negotiating power. Get specific dollar caps removed where you can. Push for defense costs in addition to coverage.
3. Treat AI outputs the way you treat user-generated content on your platform. If your business has any UGC moderation framework (DMCA notice-and-takedown, content review, watermarking), apply that same discipline to AI-generated material before it goes out the door. The plaintiff who would have sued the model maker is now going to sue the publisher. You are the publisher.
The bigger structural shift
Cox is part of a pattern. The Supreme Court refused to take Thaler v. Perlmutter, leaving in place the rule that AI-generated works without sufficient human authorship are not copyrightable. Bartz v. Anthropic produced a $1.5 billion settlement on training-data piracy, with the final approval hearing rescheduled to May 14, 2026. The MiniMax motion to dismiss is the first time a generative AI defendant has explicitly used Cox as the lead argument at the pleading stage.
The line is forming. Training without permission may or may not be fair use, depending on the model and the data; that fight is still live. Outputs that infringe may or may not produce vendor liability under Cox; that fight just started. What is no longer in doubt is who is left standing when the music stops. The companies generating, publishing, and commercializing the outputs are the ones with exposure that does not move.
If you are a GC and your company has been treating AI vendor indemnity as a sufficient shield, the Cox decision is the moment to pull the contracts off the shelf and read them. The shield is getting thinner. The volume of AI content moving through your business is going up. The math is going the wrong way unless you do something about it now.