New York Now Requires Disclosure When AI-Generated Performers Appear in Ads
Last updated: February 5, 2026
New York enacted a law requiring advertisers to disclose when AI-generated digital replicas of performers appear in advertisements. The law applies to any advertisement distributed in New York that uses a digitally generated likeness of a human performer, whether that person is real, deceased, or entirely fabricated by AI. Penalties start at $5,000 per violation and scale up for repeat offenders, and living performers whose likenesses are used without consent have a private right of action.
This isn’t about deepfake porn or political misinformation. This is about commercial advertising. The 30-second TV spot with a synthetically generated spokesperson. The Instagram ad featuring an AI-created model. The radio spot using a cloned voice. New York just made all of those a compliance issue.
What the Law Covers
The statute targets “digital replicas” in advertising, which it defines broadly. A digital replica includes any AI-generated or AI-manipulated representation of a real person’s likeness, voice, or physical appearance, as well as entirely synthetic performers created by generative AI.
Key provisions:
The disclosure must be “clear and conspicuous.” The law borrows that standard from the FTC Act, which means fine-print disclaimers buried in terms and conditions won’t satisfy the requirement. The disclosure needs to be visible to a reasonable consumer at the point of viewing.
The law covers both the advertiser and the platform distributing the ad. If you’re a media company running ads in New York, you have exposure here too.
Living performers whose likenesses are replicated without consent have a private right of action. They can sue for actual damages plus statutory damages of $5,000 per unauthorized use. Estates of deceased performers can also bring claims for up to 40 years after death.
Why This Matters Beyond New York
New York is the largest advertising market in the United States. Madison Avenue is still the center of gravity for the ad industry. Any national ad campaign that runs in New York (which is essentially every national ad campaign) is subject to this law.
The practical effect is that this becomes a de facto national standard, similar to how California’s CCPA became the floor for privacy compliance nationwide. Advertisers aren’t going to create two versions of every campaign, one with AI disclosure for New York and one without for everywhere else. They’ll add the disclosure everywhere or stop using AI-generated performers in campaigns that touch New York.
California is considering similar legislation. Illinois already has biometric privacy protections under BIPA that touch on related issues. Tennessee passed the ELVIS Act in 2024 protecting against AI voice cloning. The New York law is part of a growing state-level regulatory patchwork that’s moving faster than any federal framework. New York is also simultaneously advancing workplace AI bills that regulate AI in hiring and employee monitoring.
The SAG-AFTRA Connection
This law didn’t emerge from a vacuum. SAG-AFTRA’s 118-day strike in 2023 centered partly on AI protections for performers. The resulting contract with the AMPTP included provisions requiring consent and compensation for digital replicas of performers.
The New York law extends those protections beyond union contracts into advertising broadly. A performer doesn’t need to be a SAG-AFTRA member to benefit. Any person whose likeness is digitally replicated for a New York advertisement has standing.
The law also addresses the synthetic performer gap that the SAG-AFTRA contract didn’t fully cover: entirely AI-generated people who don’t correspond to any real individual. Advertisers using tools like Synthesia, HeyGen, or D-ID to create virtual spokespeople now need to disclose that those performers are AI-generated. This is a new requirement that didn’t exist under prior law.
Compliance Challenges
The “clear and conspicuous” standard is going to generate litigation. What counts as adequate disclosure in a 15-second TikTok ad? Where do you put the disclosure in an audio-only podcast ad? What about programmatic display ads where the creative is assembled dynamically?
AI-enhanced imagery adds another layer of complexity. If a photographer uses AI to smooth skin, adjust lighting, or modify a model’s appearance in post-production, is that a “digital replica”? The law doesn’t draw a clear line between AI-enhanced photography and AI-generated imagery. Retouching has been standard practice in advertising since before Photoshop. The new law potentially captures post-production AI enhancements that go beyond traditional retouching.
There’s also a detection problem. If an advertiser uses AI to generate a performer but doesn’t disclose it, how would a regulator or plaintiff know? AI-generated imagery is getting harder to distinguish from real photography. The real-world harms from AI-generated content extend far beyond advertising, but this law focuses enforcement squarely on the commercial context. Without metadata standards or watermarking requirements, enforcement will depend largely on whistleblowers and competitive complaints.
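Emerging provenance standards hint at what enforcement tooling could look like even without a mandate. The IPTC Digital Source Type vocabulary defines values such as trainedAlgorithmicMedia for generative-AI imagery, and C2PA "Content Credentials" manifests embed provenance data directly in asset files. A crude audit pass, sketched below, simply scans an asset's raw bytes for those markers; the marker list is illustrative, and real tooling would parse XMP and C2PA structures properly rather than string-match.

```python
# Crude provenance scan: XMP metadata is stored as plain UTF-8 XML inside most
# image files, so known AI-provenance markers can be found with a byte search.
# The marker list below is illustrative, not exhaustive.
AI_PROVENANCE_MARKERS = [
    b"trainedAlgorithmicMedia",               # IPTC Digital Source Type: generative AI
    b"compositeWithTrainedAlgorithmicMedia",  # IPTC: composite including generative AI
    b"c2pa",                                  # C2PA / Content Credentials manifest label
]

def flag_possible_ai_asset(data: bytes) -> list:
    """Return the provenance markers found in the raw bytes of an asset."""
    return [m.decode() for m in AI_PROVENANCE_MARKERS if m in data]

# Example: an asset whose XMP declares the IPTC generative-AI source type
sample = (
    b'<xmp DigitalSourceType="http://cv.iptc.org/newscodes/'
    b'digitalsourcetype/trainedAlgorithmicMedia"/>'
)
print(flag_possible_ai_asset(sample))
```

A scan like this only catches assets whose metadata survived the production pipeline intact; stripped or re-encoded files would pass silently, which is exactly the enforcement gap the paragraph above describes.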
The FTC is already enforcing AI transparency on a case-by-case basis. New York just codified what the FTC has been signaling: consumers have a right to know when they’re looking at something a machine made.
What to Do Now
Audit your current ad campaigns for AI-generated content. If any advertisement running in New York uses AI-generated or AI-manipulated images, video, audio, or likenesses of real people, you need to add disclosure.
Update your creative production workflows. Add a mandatory AI disclosure checkpoint to your ad approval process. Before any creative goes to market, someone needs to answer: does this contain AI-generated performer content? Document the answer.
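A checkpoint like this can be as simple as a gate in the approval pipeline that refuses to release creative until the AI-performer question has been answered and attributed to a reviewer. The sketch below assumes nothing about any particular ad-ops system; all names and the disclosure wording are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical disclosure wording; actual language should come from counsel.
AI_DISCLOSURE_TEXT = "This advertisement features AI-generated performers."

@dataclass
class Creative:
    """One piece of ad creative moving through approval (illustrative fields)."""
    campaign: str
    asset_id: str
    contains_ai_performer: Optional[bool] = None  # must be answered, never defaulted
    reviewed_by: Optional[str] = None             # documents who answered it
    disclosure_text: Optional[str] = None

def approve_for_market(creative: Creative) -> Creative:
    """Gate release: block until the AI-performer review is documented."""
    if creative.contains_ai_performer is None or creative.reviewed_by is None:
        raise ValueError(f"{creative.asset_id}: AI-performer review not documented")
    if creative.contains_ai_performer and not creative.disclosure_text:
        # Creative with AI performers must carry a disclosure before it ships.
        raise ValueError(f"{creative.asset_id}: missing AI disclosure")
    return creative

# Example: a reviewed, disclosed asset passes; an unreviewed one is blocked.
ok = approve_for_market(Creative("spring-2026", "A-001", True, "j.doe", AI_DISCLOSURE_TEXT))
```

The design choice worth copying is the three-state flag: None means "nobody asked," which is treated as a failure distinct from a documented "no." That distinction is what produces the paper trail the paragraph above calls for.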
Get consent and document it. If you’re using a real person’s likeness in any AI-modified form, get written consent that specifically addresses digital replica rights. Standard model releases may not cover AI manipulation. Update your talent agreements.
Talk to your media buyers and platforms. If you’re distributing ads through agencies or programmatic platforms, make sure they’re aware of the disclosure requirement. Liability flows to both the advertiser and the distributor.
New York just forced the advertising industry to answer a question it’s been dodging: when a human in your ad isn’t actually human, do consumers have a right to know? The answer is yes, and the penalty for forgetting starts at $5,000.
AI disclosure laws are multiplying faster than most compliance teams can track. Book a diagnostic to audit your ad pipeline before the fines start.