
Five Lawsuits in One Week: The Mercor Data Breach and What It Means for AI Training Contractors

By Don Ho, Esq. | April 11, 2026


Mercor, the $10 billion AI training startup, was hit with five federal lawsuits in a single week after hackers breached its systems and exposed contractor Social Security numbers, home addresses, W-9 tax forms, and recorded AI interview videos — prompting Meta to pause its work with the company. All of the lawsuits allege the same thing: the company failed to protect contractor data, and hackers got it. If you’re a company using gig workers to train AI models, or a contractor doing that work, this is the case to pay attention to.

What Happened

Mercor uses gig workers to train AI for clients including Meta, a company now spending $115 billion+ on AI infrastructure alone. Contractors fill out W-9 forms with their personal identifying information each time they get assigned work. The company relied on an open-source tool called LiteLLM, built by a company called Berrie AI, to manage its AI infrastructure. That tool got compromised. Hackers accessed Slack data, internal communications, and videos of conversations between Mercor contractors and AI systems. The breach was reported by TechCrunch on March 31, and by April 7, five lawsuits had been filed in federal courts in California and Texas.

One of the named plaintiffs, NaTivia Esson, worked for Mercor from March 2025 to March 2026. Her complaint states she “trusted the company would use reasonable measures to protect” her personal information. She now anticipates “spending considerable amounts of time and money to try and mitigate her injuries.” The lawsuits seek unspecified monetary damages for violations of data privacy and consumer protection laws.

The Supply Chain Problem Nobody Audited

One lawsuit names not just Mercor but also Berrie AI (maker of LiteLLM) and Delve Technologies, a compliance auditing firm that had certified Berrie’s compliance with industry security standards. That filing alleges a “whistleblower” exposed misconduct at Delve. Last month, an anonymous Substack post accused Delve of facilitating “fake compliance” and arranging sham security audits. Delve denied those claims on its blog.

This is the part that should worry every general counsel reviewing vendor contracts right now. Mercor outsourced infrastructure security to an open-source tool provider. That provider was certified compliant by an auditing firm. The auditing firm is now accused of rubber-stamping certifications. Three layers of supposed oversight, and none of them stopped the breach. The DOGE staffers unmasking case shows the same structural failure at the federal level: broad data access granted without processing agreements, audit trails, or access controls.

The AI industry has built an enormous dependency on open-source components. That dependency creates a chain of trust: the AI company trusts the tool provider, the tool provider trusts the auditing firm, and the auditing firm’s entire business model depends on issuing favorable certifications. When that chain breaks, the people holding the bag are the contractors who handed over their Social Security numbers.

The Contractor Class Is Growing, and So Is the Exposure

Data breach class actions are not new. The Perplexity class action targets an entirely different data-handling failure (user chat data routed to third parties without consent), but the underlying pattern is the same: AI companies scaling data collection faster than their security and governance can keep up. Cornerstone Research surveyed settlements from 2018 to 2021 and found that even the biggest cases settled for $1 to $5 per class member, sometimes with non-monetary relief like credit monitoring. Those numbers sound small until you consider scale. Mercor handles AI training for some of the largest tech companies in the world. The potential class here is massive.

But the more significant risk is structural. AI training requires enormous volumes of human labor. Companies like Mercor, Scale AI, and Surge AI employ hundreds of thousands of gig workers globally to label data, evaluate AI outputs, and conduct training conversations. Every one of those workers submits personal information. Every one of those submissions creates a data protection obligation. The AI industry has scaled the workforce without scaling the security infrastructure to match.

A lead-generation website, MercorClaims.com, went live around April 1. It doesn’t yet direct users to a specific law firm, but its existence signals that plaintiffs’ attorneys see a large enough class to justify investment in case acquisition. Five lawsuits in one week is the first wave. It is unlikely to be the last.

What Companies Using AI Training Contractors Should Do Now

If you’re engaging contractors for AI training, data labeling, or model evaluation, here’s what your legal and security teams need to review immediately.

Audit your data collection scope. What personal information are you collecting from contractors? W-9s contain Social Security numbers, which create the highest tier of breach liability. If you’re collecting SSNs, make sure you actually need them at the point of collection and not just at the point of payment.
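As a first pass at that audit, you can map where SSN-formatted values actually live in your systems. The sketch below is purely illustrative (the record string and function name are invented for this example), and a naive regex like this will also match non-SSN nine-digit strings, so treat it as a starting inventory, not a compliance tool.

```python
import re

# Hypothetical sketch: flag records that contain SSN-like patterns, as a first
# pass at mapping where the highest-liability data actually lives.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def find_ssn_like(text: str) -> list[str]:
    """Return SSN-formatted strings found in a blob of text."""
    return SSN_PATTERN.findall(text)

# Example record (invented for illustration):
record = "Contractor W-9 on file: name=J. Doe, tin=123-45-6789, paid 2026-03-01"
hits = find_ssn_like(record)
if hits:
    print(f"Found {len(hits)} SSN-like value(s); ask whether this store needs them")
```

Every data store that trips a scan like this is a store your team has to justify keeping.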

Review your vendor security chain. If you’re using open-source components in your AI pipeline, who built them? Who audited them? Can you independently verify the security certifications your vendors are relying on? The Mercor breach shows that a SOC 2 certificate on a vendor’s website does not mean their systems are secure.
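One concrete step in that review is knowing exactly which open-source components, at which versions, you actually run: an unpinned dependency can't be matched against any audit report or security advisory. The sketch below is a minimal, hypothetical example using a requirements-file convention; real reviews should also cover transitive dependencies and lockfiles.

```python
# Hypothetical sketch: flag dependencies that aren't pinned to an exact version,
# since unpinned components can't be tied to any audited or advisory-checked release.

def unpinned_dependencies(requirements: str) -> list[str]:
    """Return requirement lines that don't pin an exact version with '=='."""
    flagged = []
    for line in requirements.splitlines():
        line = line.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        if "==" not in line:
            flagged.append(line)
    return flagged

reqs = """
litellm          # which exact version were we running when it was compromised?
requests==2.31.0
openai>=1.0
"""
print(unpinned_dependencies(reqs))  # ['litellm', 'openai>=1.0']
```

If you can't answer "which version of this tool were we running on the day of the breach," you can't answer the discovery requests that follow it.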

Check your contractor agreements. Do your contracts with gig workers include data breach notification obligations? Do they specify what happens to contractor data after the engagement ends? Many AI training platforms collect data continuously but never delete it, and that creates an ever-growing attack surface. A structured AI compliance stack with a documentation layer is what turns ad-hoc vendor and data management into an auditable process. And if dead companies’ data is fair game for AI training, living contractors’ data is even more exposed.
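A retention policy only shrinks the attack surface if something actually enforces it. The sketch below shows the shape of an automated retention sweep; the 90-day window, record fields, and IDs are all invented for illustration, not a legal recommendation for any particular retention period.

```python
from datetime import date, timedelta

# Hypothetical sketch: flag contractor records whose engagement ended more than
# a retention window ago. Window and record shape are illustrative only.
RETENTION = timedelta(days=90)

def records_to_delete(records, today):
    """Return ids of records whose engagement ended more than RETENTION ago."""
    return [r["id"] for r in records
            if r["engagement_ended"] is not None
            and today - r["engagement_ended"] > RETENTION]

records = [
    {"id": "c-001", "engagement_ended": date(2026, 3, 1)},   # recently ended
    {"id": "c-002", "engagement_ended": date(2025, 6, 30)},  # long past window
    {"id": "c-003", "engagement_ended": None},               # still active
]
print(records_to_delete(records, today=date(2026, 4, 11)))  # ['c-002']
```

A sweep like this, run on a schedule and logged, is also evidence: it shows a court you had a retention policy and followed it.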

Get breach response insurance. If you’re handling contractor PII at scale, you need a cyber insurance policy that specifically covers contractor data, not just customer or employee data. Many standard policies have gaps here.

Five lawsuits in one week. If you use AI training contractors, book a diagnostic to review your vendor contracts and data handling.

The Mercor breach is a template for what happens when a high-growth AI company prioritizes speed of contractor onboarding over security of contractor data. Five lawsuits in one week is the market telling you the tolerance for that tradeoff has run out.


Don Ho, Esq. is Founder & CEO of Kaizen AI Lab, advising companies on operational growth strategies and the legal aspects of AI integration in their businesses.
