The FTC Just Told You Exactly How It Plans to Regulate AI: One Lawsuit at a Time
Last updated: April 7, 2026
By Don Ho, Esq. | April 7, 2026
The FTC will not write AI-specific rules or create safe harbors; Commissioner Mark Meador confirmed at the April 2026 IAPP Global Summit that the agency will regulate AI entirely through case-by-case enforcement actions, meaning companies will learn the boundaries only when the FTC files a complaint. Meador walked into the Summit in Washington, D.C., on April 6 and said something that every general counsel in America should have heard: the agency will not issue binding regulations and will not tell companies how to run their businesses. Instead, it will enforce the law case by case, action by action, based on whatever facts show up in the next complaint.
“We’re approaching this as enforcers who are trying to spot harm, address it, prevent it from occurring, and remedy it for the injured consumers as much as we can,” Meador said.
That is not a passive statement. It is a strategic choice with direct consequences for every company deploying AI right now.
What “Case by Case” Actually Means for Your Business
When the FTC says it will regulate case by case instead of writing rules, it is saying two things at once. First: there will be no safe harbor. No published standard you can meet and consider yourself compliant. Second: enforcement will be unpredictable. The first time the FTC decides your AI deployment crosses a line, you will find out through a civil investigative demand, not a guidance document.
This is the regulatory approach the Trump administration’s AI Action Plan demands. The executive branch wants innovation-first, enforcement-later. Meador delivered that message clearly. The FTC under Chair Andrew Ferguson positions itself as a “cop on the beat,” not a rulemaker.
For operators, this creates a specific problem. If the agency won’t tell you what the rules are in advance, your compliance program has to anticipate what the agency might find objectionable based on prior enforcement patterns. That’s expensive, uncertain, and favors large companies with legal departments big enough to track every FTC action and extract the implied standard.
Small and mid-market companies using AI tools don’t have that luxury. They need clear rules. They are not going to get them from this FTC.
The Rytr Reversal: What It Signals
The most revealing data point from Meador’s appearance wasn’t what he said. It was what the FTC already did.
In December 2025, the FTC reopened and set aside its enforcement action against Rytr, an AI writing tool. The original complaint alleged that Rytr’s technology could be used to generate fake product reviews. The FTC had obtained a final order. Then the Trump administration’s AI Action Plan came out, and the agency reversed course. This is the same dynamic playing out in Utah, where the White House killed a bipartisan AI child safety bill because it conflicted with the innovation-first posture.
Think about what that means. The FTC investigated a company, concluded it violated the law, obtained an order, and then withdrew the order because the policy environment changed. The enforcement action wasn’t overturned by a court. The agency voluntarily set it aside.
For any company facing or anticipating FTC scrutiny over AI, the Rytr reversal establishes that enforcement outcomes under this administration are contingent on policy alignment, not just legal merit. If your AI deployment advances innovation (as the administration defines it), the agency may look the other way. If your AI deployment causes consumer harm that the administration can’t ignore, you’re exposed.
That is a much harder risk calculus than “follow the rules.” The rules are moving.
The OkCupid Settlement: Enforcement Without Teeth
Hours before Meador spoke at the Summit, the FTC announced a settlement with OkCupid and its parent company Match Group. The allegation: since 2014, OkCupid shared user data with an unrelated third party without consent and took “extensive steps” to conceal the sharing.
The remedy: a prohibition on future misrepresentations about data collection. No fines. No corrective measures beyond the order itself.
An AI-adjacent data-sharing violation spanning a decade, concealed from users, resolved with a promise to stop lying about it. That is what “case-by-case enforcement” looks like in practice. The FTC identifies the harm, obtains a consent order, and moves on. The deterrent effect of a no-fine settlement against a company that concealed data sharing for 10 years is close to zero.
If you are a GC reading the OkCupid outcome and calibrating your company’s risk tolerance for AI-related data practices, the rational takeaway is that the penalty for getting caught is a consent decree, not a financial consequence. That calculus will change how companies make decisions about AI data practices, and not in the direction the FTC presumably intends.
Where the FTC Will Actually Crack Down: AI Scams
Meador was most specific about one AI enforcement priority: scams. AI tools being used to impersonate real people, generate convincing phishing content, and automate fraud at scale.
“It’s lowering the barriers to entry into scamming,” Meador said. “That’s probably the first place we’re seeing it.”
This tracks with where the FTC has traditionally been most aggressive. Consumer protection cases involving clear, provable financial harm to identifiable victims have always been the agency’s strongest enforcement category. AI-powered scams fit that pattern perfectly. The technology is new. The fraud is old. The FTC knows how to prosecute fraud.
If your business operates anywhere near the boundary between AI-generated content and consumer-facing communications, this is the enforcement vector that should concern you most. AI chatbots that could be mistaken for human customer service (Oregon just passed a law with statutory damages for exactly that scenario). AI-generated marketing content that makes claims the underlying product can’t support. AI voice tools that replicate real people. The FTC may not write rules for general AI deployment, but it will prosecute AI-enabled deception aggressively.
The Bigger Picture: State and Federal Enforcement Are Diverging
Meador’s case-by-case approach at the federal level is happening while state attorneys general are moving in the opposite direction. California, Colorado, New York, and Texas have enacted AI-specific statutes. At least 19 states have some form of AI disclosure or oversight law on the books. State AGs are using existing consumer protection and antitrust authorities to investigate AI companies.
The result is a two-track enforcement system. The FTC will handle AI cases that involve provable consumer harm, particularly fraud and deception, on a reactive basis. State regulators will pursue proactive, rule-based enforcement under new AI-specific statutes. This divergence is exactly what created the AI regulatory patchwork that companies are already struggling to navigate.
Companies operating nationally face the compliance cost of the state-level patchwork with no federal floor and no federal ceiling. The FTC’s refusal to write rules means there is no preemptive federal standard that simplifies compliance across jurisdictions. Each state’s AI law operates independently. The White House has proposed federal preemption of state AI laws, but that’s a recommendation, not legislation.
What to Do Now
The FTC is building the rulebook through lawsuits. Don’t be a case study. Take the ACRA to find your gaps before they do.
Stop waiting for federal AI rules. They are not coming from this FTC. Build your compliance program around three things.
Map your AI deployments against existing FTC enforcement patterns. If your AI makes representations to consumers about what it is or what it can do, those representations need to be accurate. The deception prong of Section 5 of the FTC Act is the agency’s primary tool, and Meador confirmed it will be applied to AI content.
Monitor state-level AI legislation in every state where you operate. The compliance obligations will vary. The enforcement risk is real. State AGs have budgets and incentives to pursue AI enforcement actions.
Audit your AI tools for scam and impersonation risk. If any customer-facing AI system could be mistaken for a human, or could generate content that misrepresents your product’s capabilities, that is your highest-risk exposure under the current enforcement posture. Fix it before the FTC finds it.
The FTC isn’t writing rules. It’s writing complaints. Kaizen AI Lab builds compliance programs that survive enforcement actions you can’t predict. Get ahead of the next case.