
Your AI Chat History Is Being Sold. A Class Action Just Made It Official.

· Don Ho · 5 min read

Last updated: April 4, 2026


A federal class action filed March 31, 2026 alleges that Perplexity AI embedded Meta and Google tracking tools directly into its platform, sending users’ chat data, including prompts typed in “incognito mode,” to both advertising giants. No disclosure. No consent. Just a pipeline from your “private” AI conversation straight to the two largest advertising companies on earth. The 135-page complaint should make every general counsel rethink how their company uses AI chatbots.

The plaintiff, filing as John Doe, claims the tracking operated even when users activated Perplexity’s “incognito mode,” a feature the company markets as creating “anonymous threads” that “won’t save to your history and expire after 24 hours.” According to the complaint, that promise was hollow. Meta and Google’s tracking tools allegedly harvested email addresses, Facebook IDs, IP addresses, device information, and the full text of user prompts and AI responses.

What Users Were Actually Sharing

The complaint catalogs the kinds of information people routinely share with AI chatbots: tax planning strategies, legal questions, financial advice, health concerns, political views. The plaintiff himself used Perplexity to calculate Social Security timing, plan Roth IRA conversions, and research cannabis investments. He believed those conversations were private.

That belief was reasonable. Perplexity designed its interface to feel like a conversation. The chat function mimics human interaction. The incognito mode implies privacy. Studies cited in the complaint confirm what anyone who’s used these tools already knows: people share things with AI chatbots they wouldn’t share with other humans, including relationship problems, health fears, and identity questions.

The disconnect between user expectations and reality is the core of this case. People treated Perplexity like a confidential advisor. The complaint says Perplexity treated their conversations like inventory.

The 14 counts in the complaint draw from well-established privacy law: invasion of privacy under the California Constitution, violations of the state’s Comprehensive Computer Data Access and Fraud Act, federal Electronic Communications Privacy Act claims, and deceit and unfair competition.

None of these are novel theories. California courts have seen waves of tracking-technology litigation over the past five years. The FTC’s OkCupid settlement already established that funneling user data to AI training without disclosure is a deception violation. What changes here is the context: AI chatbot conversations contain qualitatively different information than website browsing history or search queries. When someone asks an AI chatbot for legal advice about a custody dispute or runs financial scenarios through it, the data is more intimate than anything a tracking pixel on a news site would capture.

Google and Meta are both named as defendants. The theory is straightforward: they built the tracking tools, they received the data, they profited from it.

Perplexity’s chief communications officer, Jesse Dwyer, told reporters the company had “not been served any lawsuit that matches this description” and could not “verify its existence or claims.” The complaint is publicly filed in the Northern District of California.

Two Classes, One Gap

The plaintiff seeks certification of two classes. The first covers all U.S. users who chatted with Perplexity and had their data sent to Meta or Google between December 7, 2022, and February 4, 2026. The second is a California-only subclass with the same parameters.

Paid subscribers with “Pro” or “Max” plans are excluded. A footnote explains those accounts are “subject to different terms and conditions.” The implication is worth noting: free users got tracked, paid users got different terms. If that distinction holds up, it creates a two-tier privacy model where the product you don’t pay for isn’t the software. It’s your data.

Perplexity is not a small company. The complaint notes a $20 billion valuation as of September 2025 after raising $200 million in funding. It operates from San Francisco, neighboring OpenAI and Anthropic. The scale of the potential class (every free Perplexity user over a three-year period) is enormous.

What This Means for Companies Using AI Tools

If your company has employees using AI chatbots for work (and they do, whether you’ve authorized it or not), this case raises immediate questions.

First, review your AI acceptable use policy. If employees are entering client data, deal terms, strategy documents, or legal questions into AI chatbots, that information may be going places your privacy policy promises it won’t. The Perplexity complaint alleges the tracking happened at the code level, invisible to users. Your employees would have no way to know.
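To make “tracking at the code level” concrete: ad-platform pixels commonly use what Meta’s documentation calls “advanced matching,” where an identifier like an email address is normalized and SHA-256 hashed in the browser before being sent along with page context. The sketch below is a hypothetical illustration of that general pattern, not the code at issue in the complaint; the function name and fields are invented for clarity.

```python
import hashlib
import json

def build_pixel_payload(email: str, page_url: str, event: str = "PageView") -> dict:
    """Illustrative sketch of an advanced-matching-style beacon payload.

    Hashing the email does not anonymize the user: the ad platform
    matches the hash against its own database of hashed identifiers,
    re-linking the activity to a known account.
    """
    normalized = email.strip().lower()
    hashed_email = hashlib.sha256(normalized.encode("utf-8")).hexdigest()
    return {
        "event": event,
        "em": hashed_email,  # hashed, but still a stable cross-site identifier
        "url": page_url,     # may carry query strings and on-page context
    }

payload = build_pixel_payload("Jane.Doe@example.com", "https://example.ai/chat?thread=abc")
print(json.dumps(payload, indent=2))
```

The point of the sketch is that none of this is visible to the person typing: the payload is assembled and transmitted by script the page loaded, not by anything the user clicked.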

Second, audit your vendor agreements. If you’re paying for enterprise AI tools, check whether the terms of service address third-party tracking and data sharing. The distinction between free and paid tiers in this case suggests that paid accounts may have different (possibly better) protections. “May” is doing heavy lifting in that sentence. Confirm it. This free-tier-as-data-pipeline model is showing up everywhere: GitHub just made Copilot users’ code into training data by default, with the opt-out buried in settings.

Third, update your data handling risk assessment. AI chatbots are a new category of data exfiltration risk, and they’re not the only one: malicious AI browser extensions are already stealing data from 260,000 users through fake “AI assistant” tools. They don’t look like a data breach because users voluntarily enter the information. But if that information is routed to third parties without consent, the legal exposure is the same.

What to Do Now

The Ahmad, Zavitsanos & Mensing firm out of Houston, a 60-lawyer litigation shop, is running point for the plaintiff. They describe themselves as “first and foremost a trial firm.” That framing is intentional. This isn’t a settlement mill filing nuisance suits. They’re building for trial.

For general counsel and compliance officers, the action items are concrete:


Inventory which AI tools your organization uses. Map the data flows. Determine whether any third-party tracking operates within those tools. If you can’t answer that question, neither could Perplexity’s users, and that’s the problem.
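One concrete way to start mapping data flows is to capture a browser HAR export (standard HAR 1.2 format from any browser’s dev tools) while an employee uses the AI tool, then list every host contacted that isn’t yours. A minimal sketch, with an invented inline HAR standing in for a real export:

```python
import json
from urllib.parse import urlparse

def third_party_hosts(har: dict, first_party: str) -> set:
    """Return non-first-party hosts that received requests.

    `har` follows the HAR 1.2 layout (log -> entries -> request -> url)
    exported by browser dev tools. The suffix match is a rough filter,
    good enough for a first-pass audit.
    """
    hosts = set()
    for entry in har.get("log", {}).get("entries", []):
        host = urlparse(entry["request"]["url"]).hostname or ""
        if host and not host.endswith(first_party):
            hosts.add(host)
    return hosts

# Tiny invented example standing in for a real dev-tools export.
sample_har = {
    "log": {
        "entries": [
            {"request": {"url": "https://chat.example.ai/api/ask"}},
            {"request": {"url": "https://connect.facebook.net/en_US/fbevents.js"}},
            {"request": {"url": "https://www.googletagmanager.com/gtag/js"}},
        ]
    }
}

print(sorted(third_party_hosts(sample_har, "example.ai")))
# → ['connect.facebook.net', 'www.googletagmanager.com']
```

Seeing `connect.facebook.net` or `googletagmanager.com` in the output of a supposedly private tool is exactly the kind of finding the complaint describes, and exactly what an internal audit should surface before a plaintiff’s expert does.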

The broader signal is clear. The AI industry spent three years telling users their conversations were private while building business models that depended on those conversations being anything but. This lawsuit is the first major test of whether courts will enforce the promises AI companies made, or whether “incognito mode” was just a marketing feature with no legal weight. And the regulatory picture is only getting worse for platforms that play loose with user data: Oregon just enacted a chatbot law with a private right of action and $1,000 per-violation statutory damages. The compliance tab is coming due.

If the data exposure angle isn’t enough to get your attention, consider the liability angle. Nippon Life is suing OpenAI for practicing law without a license because ChatGPT gave legal advice that harmed a client. When AI companies build products that act like professionals while harvesting user conversations, the lawsuits come from both directions — privacy and unauthorized practice.


Your employees are putting company data into AI chatbots right now. Do you know where it’s going? Take the ACRA to map your AI data exposure, or book a diagnostic to get ahead of the next class action.
