Three Million Dating Photos Trained a Facial Recognition AI. The FTC Just Settled for Zero Dollars.
By Don Ho, Esq. | April 8, 2026
On March 30, the Federal Trade Commission settled with Match Group Americas and its subsidiary Humor Rainbow, Inc. (the company operating OkCupid) over the undisclosed transfer of approximately three million user photographs to AI startup Clarifai, which used them to train its facial recognition technology. The photos, along with demographic profiles and geolocation data, were handed over in 2014 through the personal email account of an OkCupid founder. No data-sharing agreement. No payment. No notice to users.
The settlement carries no monetary penalty. The FTC imposed a 20-year consent order with a 10-year compliance regime requiring recordkeeping and periodic reporting. Match Group reported roughly $3.5 billion in revenue for 2025.
That mismatch between the conduct and the consequence tells you exactly where federal AI enforcement stands in April 2026. But the remedy is not the story. The legal theory is.
How It Happened
In September 2014, Clarifai CEO Matthew Zeiler emailed an OkCupid founder requesting access to a large dataset of user photos. The founder sent the images through a personal email account, bypassing whatever corporate controls OkCupid had in place. Clarifai received unrestricted access to roughly three million photographs plus associated demographic and location data.
The connection was not a business deal. OkCupid founders Sam Yagan and Max Krohn had invested in Clarifai through Corazon Capital, the venture fund Yagan later grew into a $100 million vehicle. The FTC characterized the data transfer as a favor to a portfolio company.
At the time, OkCupid’s privacy policy told users the platform would not share personal information “except as indicated in this Privacy Policy or when we inform you and give you an opportunity to opt out.” The policy listed narrow exceptions: service providers, business partners, and affiliated companies. Clarifai was none of those.
From a security standpoint, the data left through an individual’s personal email. No security assessment of Clarifai preceded the transfer. No contractual restrictions governed how the data could be stored, processed, or shared further. This is what third-party risk management failure looks like when it happens at the executive level. It’s the same pattern that produced the Mercor data breach and five resulting lawsuits — casual handling of sensitive data in AI training pipelines.
The Cover-Up Made It Worse
When The New York Times started reporting on Clarifai’s use of OkCupid photographs, executives at both companies coordinated a response that the FTC says misrepresented the nature of the relationship and minimized the scope of the transfer. OkCupid told users directly that any suggestion it had shared their data with Clarifai was “false.”
During the FTC’s investigation, Match Group withheld nearly every responsive internal communication by asserting overbroad claims of attorney-client privilege and work-product protection. The Commission had to enforce its Civil Investigative Demand (the FTC’s equivalent of a subpoena) in federal court before OkCupid produced the requested records. That kind of courtroom fight to compel compliance with a routine investigative demand is unusual and, according to agency watchers, signaled how seriously the FTC viewed the underlying conduct.
An OkCupid spokesperson said the company settled without admitting wrongdoing, stating the alleged conduct “does not reflect how OkCupid operates today.”
Why This Case Changes the Enforcement Map
The FTC’s complaint does not cite any AI-specific statute. There is no federal AI law to cite. Instead, the Commission used Section 5 of the FTC Act, which prohibits unfair or deceptive acts or practices in or affecting commerce. The deception theory is straightforward: the privacy policy said one thing, the company did another.
But the factual foundation is new. By centering its complaint on the transfer of user photographs to train a facial recognition model, the FTC established that undisclosed data collection for AI training purposes falls squarely within its enforcement perimeter. The Perplexity class action takes this theory further, alleging that AI chatbot data was routed to Meta and Google through embedded tracking tools, again without user consent.
This is a meaningful distinction from the FTC’s prior AI enforcement. In September 2024, the Commission launched “Operation AI Comply,” targeting companies like DoNotPay and Evolv Technologies for making unsubstantiated claims about what their AI products could do. Those cases addressed output-side deception: lying about what AI delivers. The OkCupid case addresses input-side deception: lying about how consumer data feeds an AI system.
That shift from output to input puts data governance at the center of AI compliance. It is no longer enough for companies to verify that their AI products perform as advertised. They also need to verify that the data used to build those products was obtained through transparent, disclosed channels. FTC leadership has said explicitly that the agency will regulate AI case by case, making every data-handling decision a potential enforcement target.
The State Law Layer
Federal enforcement is not the only exposure here. The same data transfer intersected with state biometric privacy law. An Illinois OkCupid user filed a BIPA class action against Clarifai, alleging the company created thousands of unique facial geometry templates by scanning user photos without consent. That case was dismissed on jurisdictional grounds, but the underlying theory remains viable: BIPA gives consumers a private right of action for exactly this type of biometric data extraction.
Texas and Washington maintain their own biometric privacy statutes. Several other states have enacted or are considering comparable legislation. A single data-sharing event can trigger enforcement exposure across multiple jurisdictions and under multiple legal frameworks at the same time. Companies sharing biometric-adjacent data with AI vendors face potential liability from the FTC, state attorneys general, and individual consumers in states with private rights of action. These are the kinds of real-world AI safety risks that make abstract compliance frameworks urgently concrete.
What to Do Now
The FTC settled for zero. But class action lawyers noticed. Take the ACRA to audit your AI data collection practices.
If your company collects user data and works with AI vendors, this case creates a clear checklist.
Audit every third-party data transfer. Every transfer of user data to an AI vendor, model trainer, or development partner needs a formal data-sharing agreement that specifies permitted uses, retention limits, and deletion obligations. If a transfer happened without one, fix it now or end the relationship. (A minimal audit sketch follows this checklist.)
Update your privacy policy to match reality. If your policy says you only share data with service providers and business partners, but you’re feeding user content to an AI training pipeline, your policy is a liability. The FTC just proved it will enforce that gap.
Check your AI vendor contracts for training rights. Many SaaS and API agreements include clauses allowing vendors to use customer data for model improvement. Read the actual language. If your vendor can train on your users’ data, your users need to know.
Prepare for biometric exposure. If user photographs, voice recordings, or behavioral biometrics are part of any AI training workflow, check your exposure under BIPA (Illinois), CUBI (Texas), and Washington’s biometric identifier law. The compliance burden varies by state, but the litigation risk is real in all three.
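None of this requires heavyweight tooling to start. Below is a minimal sketch in Python of the transfer inventory the first checklist item calls for. The DataTransfer fields, the audit checks, and the biometric keyword list are illustrative assumptions for this article, not a standard framework; map them onto whatever your vendor records actually capture.

```python
from dataclasses import dataclass, field

# Hypothetical record of one outbound transfer of user data (illustrative fields).
@dataclass
class DataTransfer:
    vendor: str
    data_types: list[str]                    # e.g. ["photos", "geolocation"]
    has_dsa: bool                            # formal data-sharing agreement on file?
    permitted_uses: list[str] = field(default_factory=list)
    retention_limit_days: int | None = None  # None = no deletion obligation agreed

# Illustrative set of data types that should trigger state biometric law review.
BIOMETRIC_ADJACENT = {"photos", "voice", "face_templates", "behavioral_biometrics"}

def audit(transfers: list[DataTransfer]) -> list[str]:
    """Flag transfers that repeat the OkCupid fact pattern."""
    findings = []
    for t in transfers:
        if not t.has_dsa:
            findings.append(f"{t.vendor}: no data-sharing agreement on file")
        elif not t.permitted_uses:
            findings.append(f"{t.vendor}: agreement does not enumerate permitted uses")
        if t.retention_limit_days is None:
            findings.append(f"{t.vendor}: no retention limit or deletion obligation")
        if BIOMETRIC_ADJACENT & set(t.data_types):
            findings.append(f"{t.vendor}: biometric-adjacent data; check BIPA/CUBI/WA exposure")
    return findings

# The 2014 transfer, as the complaint describes it, fails every check.
for finding in audit([DataTransfer(
    vendor="Clarifai",
    data_types=["photos", "demographics", "geolocation"],
    has_dsa=False,
)]):
    print(finding)
```

The point of the sketch is the shape, not the code: every outbound flow of user data becomes a record, and every record either carries an agreement with enumerated uses and a retention limit or it surfaces as a finding for counsel to resolve.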
The FTC settled for zero dollars. That detail will draw criticism. But the precedent is worth more than any fine. The Commission just told every company in America: if you funnel user data to AI training without disclosure, that is a deception violation under existing law. No new legislation required. No rulemaking delay. Section 5 already covers it.
Match Group’s $3.5 billion revenue makes the zero-dollar penalty look lenient. The next company caught doing this will not get the same deal. And with SimpleClosure showing that even dead companies’ data gets scraped for AI training, the data governance problem extends well beyond active operations.
Zero-dollar settlements won’t last forever. The precedent is already set. Kaizen AI Lab audits your AI data pipelines so the FTC doesn’t have to. Get the audit.
Don Ho, Esq. is Founder & CEO of Kaizen AI Lab, advising companies on operational growth strategies and the legal aspects of AI integration in their businesses.