Four States Just Made AI-Powered Pay Decisions a Legal Liability. Your State Is Probably Next.
Last updated: April 17, 2026
By Don Ho, Esq. | April 17, 2026
As of April 2026, Illinois, Colorado, Texas, and California have enacted or introduced laws that make employers legally liable for using AI tools in compensation, hiring, and performance decisions without bias testing, impact assessments, and employee notification. California just reintroduced the No Robo Bosses Act as Senate Bill 947, a revised version of the bill Governor Newsom vetoed last October. If enacted, it would prohibit employers from using automated decision-making systems to set or influence employee compensation unless the employer can demonstrate that any pay differences are based on cost differentials in performing the task involved, or that the worker data used was directly related to the tasks the worker was hired to perform.
California is not alone. Illinois, Colorado, and Texas have all enacted AI employment laws that are either already in effect or hitting enforcement deadlines in the next 90 days. These are part of a broader wave of workplace AI legislation hitting state legislatures simultaneously. If your company uses any AI tool that touches hiring, pay, promotions, or performance scoring, you are operating in a legal environment that has changed faster than most HR departments realize.
What Each State Requires
Illinois (Effective January 1, 2026). The Illinois Human Rights Act amendments prohibit employers from using AI tools in connection with employment decisions, including wage setting, unless they notify employees when AI is being used and ensure the tools do not produce discriminatory outcomes based on protected classes. This is not aspirational guidance. It is an amendment to an existing civil rights statute with existing enforcement infrastructure. The Illinois Department of Human Rights can investigate complaints and pursue remedies.
Colorado (Enforcement begins June 30, 2026). The Colorado Artificial Intelligence Act requires employers to exercise “reasonable care” when deploying AI systems in high-risk areas, which explicitly includes compensation, promotion, and hiring decisions. Meeting the duty of care requires implementing risk-management policies, conducting annual impact assessments to identify bias, and notifying employees when AI is used. Colorado’s attorney general has enforcement authority. The clock starts in 74 days.
Texas (Effective January 1, 2026). The Texas Responsible Artificial Intelligence Governance Act prohibits employers from using AI tools in employment decisions with the intent to discriminate against protected classes. Texas took a narrower approach than Illinois or Colorado: liability requires intentional discrimination, not just disparate impact. Unintentional bias alone does not create liability under TRAIGA. That distinction matters for employers assessing their exposure in different jurisdictions.
California (Pending). Senate Bill 947 was introduced in February 2026. It would restrict employers from using automated decision-making systems to process worker data as inputs or outputs for compensation decisions. The bill is a revised version of SB 7, which Newsom vetoed in October 2025. The reintroduction signals that California legislators believe AI wage-setting regulation is a matter of when, not whether.
Why This Matters More Than You Think
These four laws share a common architecture, even where their specifics differ. They all define “automated decision systems” broadly enough to cover everything from basic rule-based systems to sophisticated generative AI tools. Resume screening software, performance scoring algorithms, compensation benchmarking platforms, and even AI-assisted scheduling tools that affect hourly pay could fall within scope.
The problem for most employers is not that these laws exist. The problem is that their AI procurement decisions were made in a different legal environment. Companies adopted AI hiring tools, compensation benchmarking platforms, and performance scoring systems when the legal framework was federal anti-discrimination law and whatever state employment statute happened to apply. The regulatory calculus was familiar: avoid disparate treatment, monitor for disparate impact under EEOC guidance, and document business necessity.
That calculus has changed. State AI laws add new obligations that did not exist when these tools were purchased. Annual impact assessments. Employee notification requirements. Risk management programs. Documentation standards that go beyond what federal law requires. And critically, new enforcement channels: state attorneys general, state human rights agencies, and in some cases private rights of action.
The Wage-Setting Problem Is Different
AI in hiring has received most of the regulatory attention. The Workday class action, the EEOC’s enforcement priorities, New York City’s Local Law 144. Those are important, but they address one phase of the employment lifecycle. The emerging state laws go further. They regulate what happens after the hire.
Compensation decisions are where AI bias becomes structurally embedded. If an algorithm sets starting pay for new hires based on market data that reflects historical gender or race-based pay gaps, it encodes those gaps into the employer’s wage structure from day one. If a performance scoring algorithm influences raises and bonuses, and that algorithm was trained on data that reflects managerial bias patterns, the bias gets compounded each review cycle.
This is not hypothetical. Researchers at MIT and Cornell published findings in March 2026 showing that AI compensation tools trained on market data from 2015 to 2023 systematically recommended lower starting salaries for roles disproportionately held by women and minorities, even when the tools were not given gender or race data as inputs. The bias entered through proxy variables: zip code, previous salary (in states where that question is still legal), educational institution, and job title history. The Workday AI hiring class action is testing this exact theory of liability in court right now.
The Illinois and Colorado laws are designed to catch exactly this pattern. They require employers to look for discriminatory outcomes, not just discriminatory intent. Texas, by requiring intentional discrimination, creates a different enforcement threshold, but employers operating across multiple states cannot build their compliance programs to the lowest standard. And don’t assume federal preemption will bail you out — the White House AI framework wants to kill these state laws, but it’s a recommendation, not a law, and Colorado’s enforcement date is 74 days away.
The Compliance Gap
Here is the operational reality. Most companies that use AI tools for compensation or employment decisions have not done what Illinois and Colorado now require. They have not conducted impact assessments. They have not implemented risk management programs specific to AI employment tools. They have not notified employees that AI is involved in wage-setting or performance scoring. And they cannot explain, with specificity, how their AI tools arrive at compensation recommendations.
The vendor is not going to solve this for you. Compensation benchmarking platforms and AI-powered HR tools sell productivity; they do not sell compliance with state AI laws. Ask your vendor for an adverse impact analysis of their tool’s compensation recommendations broken down by protected class, and you will likely find they cannot provide one and have never been asked for one.
That gap between what the law requires and what employers have actually implemented is where enforcement actions and lawsuits will focus. Illinois has an enforcement mechanism. Colorado will have one in June. California, if SB 947 passes, will have one shortly after. The state-by-state AI regulatory patchwork means there’s no single compliance checklist that covers every jurisdiction. And some states are already using AI themselves to enforce compliance — so the speed of detection is about to outpace the speed of remediation.
If AI touches any compensation or scheduling decisions at your company, you need to know your exposure.
What to Do Now
Step 1: Inventory every AI touchpoint in compensation and employment decisions. This goes beyond hiring. Performance scoring, promotion recommendations, bonus calculations, scheduling algorithms that affect hourly pay, compensation benchmarking tools. If a machine influences what someone gets paid, it goes on the list.
Step 2: Map your exposure by state. If you have employees in Illinois, you are already subject to the IHRA amendments. If you have employees in Colorado, enforcement begins June 30. If you have employees in Texas, TRAIGA is in effect. If you have employees in California, SB 947 may pass. Multi-state employers need a compliance matrix, not a single policy.
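The compliance matrix can start as nothing more than a table keyed by state. This sketch encodes the obligations as summarized in this article; the boolean simplifications gloss over statutory nuance, so treat it as a planning aid, not legal advice.

```python
# Obligations per state, simplified from the statutes discussed above.
STATE_AI_RULES = {
    "IL": {"law": "IHRA amendments", "notice": True,  "annual_assessment": False,
           "outcome_testing": True,  "intent_only": False},
    "CO": {"law": "Colorado AI Act", "notice": True,  "annual_assessment": True,
           "outcome_testing": True,  "intent_only": False},
    "TX": {"law": "TRAIGA",          "notice": False, "annual_assessment": False,
           "outcome_testing": False, "intent_only": True},
}

def combined_obligations(states: list[str]) -> dict[str, bool]:
    """Union of obligations across every state where you have employees."""
    rows = [STATE_AI_RULES[s] for s in states if s in STATE_AI_RULES]
    return {k: any(r[k] for r in rows)
            for k in ("notice", "annual_assessment", "outcome_testing")}
```

For example, `combined_obligations(["IL", "TX"])` reports that notice and outcome testing are still required even though Texas alone demands neither, which is the multi-state point in practice: the matrix converges toward the most demanding jurisdiction.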
Step 3: Conduct impact assessments now. Colorado requires annual assessments. Illinois requires ensuring tools do not produce discriminatory outcomes. Pull your compensation data. Run adverse impact analysis by protected class for every employment decision where AI is involved. If you find disparities, you need a business necessity justification or a new tool.
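A common first-pass screen for that disparity check is the four-fifths (80%) rule from the EEOC's Uniform Guidelines: flag any group whose favorable-outcome rate falls below 80% of the highest group's rate. The sketch below uses made-up counts; note that state statutes may demand more rigorous statistical testing than this heuristic.

```python
def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, dict]:
    """outcomes maps group -> (favorable_count, total_count).

    Flags adverse impact where a group's selection rate is under 80%
    of the most-favored group's rate (the EEOC four-fifths heuristic).
    """
    rates = {g: fav / total for g, (fav, total) in outcomes.items()}
    top = max(rates.values())
    return {
        g: {"rate": round(r, 3), "ratio": round(r / top, 3), "flag": r / top < 0.8}
        for g, r in rates.items()
    }

# Illustrative counts only: employees receiving an AI-recommended raise, by group.
result = four_fifths_check({"group_a": (60, 100), "group_b": (42, 100)})
# group_b: rate 0.42, ratio 0.42/0.60 = 0.70, below the 0.8 threshold, so flagged
```

Run this per decision type from the Step 1 inventory; any flagged group is where the business necessity justification, or the tool replacement, conversation starts.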
Step 4: Implement employee notification. Both Illinois and Colorado require telling employees when AI is used in employment decisions. Draft the notices. Get them reviewed by employment counsel. Deploy them before June 30 at the latest.
Step 5: Document everything. When the enforcement action comes, and it will come, the companies with documented impact assessments, risk management programs, and employee notifications will be in a fundamentally different position than the companies that assumed their vendors had this handled.
The trajectory is clear. More states will pass AI employment laws. The specifics will vary, but the direction is uniform: if you use AI to make decisions about people’s pay and employment, you need to prove those decisions are not discriminatory. The window for getting ahead of this is closing fast.