
The DOJ Is Coming After AI-Generated Job Ads. Nine Settlements and Counting.

By Don Ho, Esq.

Last updated: April 14, 2026


On April 7, 2026, the Department of Justice announced a $313,420 settlement with Compunnel Software Group, a New Jersey staffing company. The allegation: Compunnel’s recruiters posted U.S.-based job ads that restricted applicants to H-1B and other temporary visa holders, screening out U.S. citizens and permanent residents before a single interview took place. The settlement includes $255,420 in civil penalties and $58,000 in back pay to a U.S. citizen who was excluded from a Python Developer role.

That’s the ninth settlement since the DOJ relaunched its Protecting U.S. Workers Initiative in 2025. But here’s the part that should get your attention: in February 2026, the DOJ settled a separate case against Elegant Enterprise-Wide Solutions for the same violation, with one critical addition. The discriminatory job ads in that case were generated by AI.

The enforcement position is now explicit. It does not matter whether a human or a machine wrote the ad. If the language restricts applicants by immigration status in a way that violates the Immigration and Nationality Act’s anti-discrimination provision, the employer is liable. “The AI wrote it” is not a defense.

What the Compunnel Settlement Actually Requires

The terms go well beyond paying money.

Compunnel must stop using citizenship-status restrictions in job postings unless a law, regulation, executive order, or government contract specifically mandates it. That last phrase is doing all the heavy lifting, because those exceptions are rare. Nearly every job in the U.S. is open to citizens, permanent residents, asylees, and refugees. Filtering ads to H-1B-only or OPT-preferred is almost never legally required.

The company must obtain written legal justification if a client requests such a restriction. That documentation has to show the restriction is mandated by a legal requirement, not merely preferred by the client. Compunnel must retain that documentation and provide it to the DOJ quarterly.

For staffing firms and third-party recruiters, that term is the sharpest edge of the settlement. “The client asked for it” is no longer a viable explanation. If you can’t produce a documented legal basis for the restriction, you own the violation.

Why AI Makes This Worse

Staffing companies and corporate recruiters increasingly use templates and AI tools to generate job postings at speed. A recruiter managing 50 open roles doesn’t write 50 unique job descriptions. They use a template, a macro, or an AI drafting tool that populates fields based on role parameters.

The problem is that these systems inherit whatever biases exist in their training data or template libraries. If a staffing firm’s historical postings frequently included phrases like “H-1B only,” “OPT/CPT preferred,” or “must hold valid work visa,” the AI will reproduce those patterns at scale. What was one recruiter’s compliance failure becomes a systematic violation across hundreds of job postings in a single afternoon.

The DOJ’s settlement with Elegant Enterprise made this connection explicit. The discriminatory language came from AI-generated ads, and the DOJ held the employer responsible anyway. The reasoning is straightforward: you published the ad, you own the content. The tool is not a shield. The same principle is playing out in courtrooms, where lawyers are being sanctioned for AI-generated filings they didn’t verify.

This creates a new category of AI compliance risk that most companies are not managing. Your legal team probably reviews marketing copy for regulatory compliance. Your HR team probably reviews offer letters. Who reviews the language in AI-generated job postings before they go live? Multiple states are now advancing workplace AI bills that would add legislative requirements on top of the DOJ’s enforcement actions. And the liability doesn’t stop at job ads: the Workday AI hiring class action is testing whether employers are liable when the AI platform itself discriminates.

What the Law Actually Prohibits

The anti-discrimination provision in the Immigration and Nationality Act (8 U.S.C. § 1324b) is narrower and more technical than most employers realize.

It does not ban H-1B sponsorship. It does not require employers to avoid all mention of work authorization. Employers can lawfully ask whether an applicant is authorized to work in the U.S. and whether the person will need sponsorship. Those are neutral screening questions tied to work authorization status, not visa-category preferences.

What it does prohibit is using citizenship-status or immigration-status filters that exclude protected workers (U.S. citizens, nationals, lawful permanent residents, asylees, and refugees) from consideration. Phrases like “H-1B only,” “H-1B and OPT preferred,” or “must currently hold valid H-1B” cross that line.

The distinction trips up staffing companies constantly. Many assume that because they can lawfully sponsor H-1B workers, they can also tailor recruiting language toward that population. The DOJ’s enforcement actions say otherwise. Sponsorship is a process. Restricting applicants by visa category is discrimination.

This enforcement complexity mirrors the broader AI regulatory patchwork across the country. H-1B program rules and anti-discrimination rules are enforced by different agencies with different standards. The Department of Labor administers H-1B requirements. The DOJ enforces anti-discrimination provisions. A company can be fully compliant with its H-1B filing obligations and still violate the INA’s anti-discrimination provision with a single poorly worded job ad. And with states now layering their own AI wage and hiring laws on top of federal enforcement, the compliance surface area is expanding fast.

What to Do Now

Audit every job posting template and AI-generated ad for status-based language. Search your active and archived postings for phrases that reference specific visa categories. Pull them. Replace them with neutral work-authorization questions if screening is needed.
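As a rough illustration of that first audit step, here is a minimal Python sketch that scans posting text for common visa-category phrases. The phrase list and the sample postings are assumptions for illustration only, not an exhaustive legal standard; anything it flags still needs review by a human with compliance training, and a clean scan does not mean a posting is compliant.

```python
import re

# Illustrative (not exhaustive) phrases referencing specific visa
# categories -- the kind of language the DOJ settlements target.
FLAGGED_PATTERNS = [
    r"\bH-?1B\s+(only|holders?\s+only|preferred)\b",
    r"\bOPT(/CPT)?\s+preferred\b",
    r"\bmust\s+(currently\s+)?hold\s+(a\s+)?valid\s+(work\s+visa|H-?1B)\b",
    r"\bvisa\s+holders?\s+only\b",
]

def audit_posting(text: str) -> list[str]:
    """Return the flagged phrases found in a job posting, if any."""
    hits = []
    for pattern in FLAGGED_PATTERNS:
        for match in re.finditer(pattern, text, flags=re.IGNORECASE):
            hits.append(match.group(0))
    return hits

# Hypothetical postings for illustration.
postings = {
    "req-101": "Python Developer. H-1B only. Must currently hold valid H-1B.",
    "req-102": "Python Developer. Work authorization required; sponsorship available.",
}

for req_id, text in postings.items():
    hits = audit_posting(text)
    print(req_id, "REVIEW" if hits else "ok", hits)
```

A scan like this is a triage tool, not a compliance determination: it is useful for pulling obviously restrictive language out of active and archived postings at scale before legal review.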

Implement a review process for AI-generated recruiting content. If you use any tool that auto-generates job descriptions, a human with compliance training needs to review the output before publication. This is not optional. The DOJ has now established through two settlements that AI-generated content carries the same liability as human-written content.

Document your legal basis for any citizenship-status restriction. If a client or internal stakeholder requests a restriction, get it in writing with a legal citation showing the restriction is mandated by law, regulation, executive order, or government contract. If you can’t produce that documentation, don’t use the restriction.

Train recruiters on the distinction between sponsorship and status-based screening. This is the compliance gap the DOJ keeps exploiting. Most recruiters don’t understand the difference between “this role offers H-1B sponsorship” (lawful) and “this role is for H-1B holders only” (unlawful). Close that knowledge gap before the DOJ closes it for you.


Watch the enforcement trajectory. Nine settlements since the Protecting U.S. Workers Initiative relaunched. Multiple cases involving technology firms and staffing companies. The DOJ is treating AI-generated discriminatory ads as a distinct enforcement lane, not a one-off. If you’re in the staffing or tech recruiting space, assume you’re in the inspection zone.

The Compunnel settlement is $313,420. That’s manageable for a mid-size staffing firm. The reputational damage, the quarterly DOJ reporting requirements, and the compliance overhaul are not. The cheaper option is the audit you run this week.


If AI writes your job ads, you own what it says. Kaizen AI Lab audits AI-driven recruiting workflows and builds the review processes that keep the DOJ off your doorstep. Talk to us.

