
The AI Alignment Problem Isn't Science Fiction. It's Already in Your Business.

Don Ho

Last updated: April 2026

AI alignment failures are already costing real companies real money. Amazon scrapped a biased hiring tool. Air Canada was held liable for chatbot promises that didn’t match company policy. The EEOC extracted a $365,000 settlement from iTutorGroup for age discrimination by algorithm. In every case, the system did exactly what it was told, not what the business actually wanted.

The alignment problem in AI research sounds abstract: how do you make a machine pursue the goals you actually want, instead of the goals you accidentally specified? Academics debate it in the context of superintelligent systems that don’t exist yet. But alignment failures have been happening in production systems at real companies since at least Amazon’s scrapped recruiting tool in 2018, and the same class of problem is everywhere today.

Your Chatbot Has Goals You Didn’t Set

Every customer-facing AI system has an optimization target. For chatbots, it’s usually some combination of resolution rate, response time, and customer satisfaction scores. Those sound reasonable. But optimization targets create incentives, and incentives create behavior.

A chatbot optimized for resolution rate will learn to close tickets quickly. That means it will discourage customers from escalating, give overly broad answers that technically address the question, and mark issues as “resolved” when the customer stops responding (which might mean they gave up, not that they’re satisfied).
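To make the incentive concrete, here is a deliberately toy reward function, invented for illustration and not any vendor’s real objective: it pays for closed tickets and penalizes time and escalation. Notice what it never pays for: whether the answer was correct.

```python
# Toy illustration only: a reward that pays for closure and speed,
# penalizes escalation, and has no term for answer accuracy.
def reward(ticket_closed: bool, minutes_open: float, escalated: bool) -> float:
    r = 1.0 if ticket_closed else 0.0   # closing tickets is rewarded
    r -= 0.01 * minutes_open            # slow resolutions are penalized
    if escalated:
        r -= 0.5                        # handing off to a human is penalized
    return r

# A customer who silently gave up scores better than one a human helped:
print(reward(ticket_closed=True, minutes_open=3, escalated=False))   # ~0.97
print(reward(ticket_closed=True, minutes_open=30, escalated=True))   # ~0.20
```

An agent trained against a signal like this will do exactly what the paragraph above describes: close fast, avoid humans, and count silence as success.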

Air Canada found this out publicly. Their chatbot told a grieving customer he could book a full-fare flight and get a retroactive bereavement discount. That policy didn’t exist. The chatbot invented it because its optimization function rewarded resolution. British Columbia’s Civil Resolution Tribunal ruled Air Canada liable in February 2024 and ordered the airline to honor the discount the chatbot promised.

The chatbot was aligned. Just not to what Air Canada actually wanted.

Hiring Algorithms Optimize for the Wrong Outcomes

The EEOC settled with iTutorGroup in 2023 for $365,000 after the company’s tutor-application software automatically rejected female applicants 55 and older and male applicants 60 and older for English-tutoring positions. In that case the age screen was explicit. But the subtler version of the same failure is more common: a system optimized for retention and performance discovers that age, gender, or race correlates with those outcomes in the historical data.

This is the alignment pattern: the AI finds a shortcut to the specified goal that violates a constraint you forgot to specify. Nobody told Amazon’s scrapped recruiting tool to discriminate against women. It was told to find candidates who resembled past strong performers, and it learned to penalize resumes containing the word “women’s” and graduates of women’s colleges. The system found proxy variables that worked statistically but were discriminatory, and in a hiring context, illegal.
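A small synthetic demonstration of how a proxy leaks a protected attribute: the sketch below never gives the screen access to age, only to a hypothetical “years since last credential” feature, yet the pass/reject split still sorts candidates by age. All names and numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical features: the screen never sees age, only a
# "years since last credential" field that happens to track it.
age = rng.uniform(22, 65, n)
years_since_credential = (age - 22) + rng.normal(0, 2, n)

# A naive screen built only on the proxy feature...
passed = years_since_credential < np.median(years_since_credential)

# ...still reproduces the age pattern it was never told about.
print(f"mean age, passed:   {age[passed].mean():.1f}")   # noticeably younger
print(f"mean age, rejected: {age[~passed].mean():.1f}")  # noticeably older
```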

HireVue, Pymetrics (now Harver), and other AI hiring vendors have faced scrutiny for similar issues. Illinois, Maryland, and New York City now have laws specifically addressing AI in hiring decisions. Those laws exist because the alignment problem in hiring tools kept producing discriminatory outputs that the companies deploying them didn’t intend but also didn’t catch. The regulatory pattern is consistent: harm first, rules second.

Recommendation Engines Maximize Engagement, Not Value

Meta’s own internal research, leaked by Frances Haugen in 2021, showed that Instagram’s recommendation algorithm pushed eating disorder content to teenage users. The algorithm was optimized for engagement. Eating disorder content generates high engagement among vulnerable populations. The system was doing exactly what it was designed to do. The problem was that “maximize engagement” and “don’t harm teenagers” were never reconciled in the objective function.

YouTube faced the same issue. Its recommendation system, optimized for watch time, systematically promoted conspiracy theories and increasingly extreme political material. YouTube’s own executives have said that roughly 70% of watch time on the platform comes from algorithmic recommendations.

These aren’t edge cases. They’re the predictable result of specifying one goal (engagement, watch time, clicks) without constraining against the goals you actually care about (user safety, accuracy, legal compliance). The growing patchwork of state AI regulations is a direct legislative response to exactly these failures.

Alignment failures become liability. Take the ACRA to identify which of your AI systems carry the most alignment risk.

The Business Translation

Alignment failures in your business look like this:

A pricing algorithm that maximizes revenue by charging higher prices in zip codes that correlate with race. An underwriting model that minimizes loss ratios by denying coverage to applicants from specific geographic areas. A content moderation system that maximizes throughput by over-blocking legitimate speech. A fraud detection model that minimizes false negatives by flagging a disproportionate number of transactions from certain demographics.

In each case, the AI is working. It’s meeting its specified objective. The problem is that the specified objective doesn’t capture everything the business actually cares about. The gap between what you told the AI to do and what you meant for it to do is the alignment problem.

What to Do Now

Audit every AI system’s objective function. Ask the vendor or your data science team: what exactly is this system optimizing for? Write it down. Then write down everything that could go wrong if the system pursued that objective without any other constraints.
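One lightweight way to force that write-down is a standing record per system, reviewed on a schedule. The template below is only a sketch; every field name is our suggestion, not a standard:

```python
# Illustrative audit record for one AI system; adapt the fields to your org.
objective_audit = {
    "system": "support_chatbot",          # hypothetical system name
    "owner": "head_of_support",
    "optimizes_for": ["resolution_rate", "handle_time", "csat"],
    "unstated_assumptions": [
        "'resolved' means satisfied, not just silent",
    ],
    "failure_modes_if_unconstrained": [
        "invents policies to close tickets",
        "discourages escalation to human agents",
    ],
    "hard_constraints": [
        "never state a policy absent from the approved knowledge base",
    ],
    "review_cadence_days": 90,
}
```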

Add constraint specifications to your AI procurement process. When you buy or build an AI system, the requirements should include not just what the system should optimize for, but what it should never do. These are your guardrails, and they need to be as explicit as the objective function. The 5-Layer AI Compliance Stack provides a structured approach to building these constraints into your governance program.
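As a sketch of what one such guardrail can look like at runtime, assume, hypothetically, that your chatbot pipeline can extract which policies a draft reply cites. A fail-closed check is then only a few lines:

```python
# Hypothetical guardrail: never send a reply citing an unapproved policy.
APPROVED_POLICIES = {"refund-within-24h", "bereavement-before-travel-only"}

def guard_reply(draft_reply: str, cited_policies: set[str]) -> str:
    if cited_policies - APPROVED_POLICIES:
        # Fail closed: an unrecognized policy citation goes to a human.
        return "Let me connect you with an agent who can confirm that for you."
    return draft_reply
```

The design point is the failure direction: when the system is unsure whether a promise is real, it should escalate, not improvise.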

Test for proxy discrimination. Run your AI outputs through demographic analysis. If a system’s decisions correlate with protected characteristics, you have an alignment problem regardless of whether the system uses those characteristics as direct inputs.
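A minimal version of that test uses the four-fifths rule as a screening heuristic; it is a red flag, not a legal safe harbor, and the column names below are assumptions about your decision log:

```python
import pandas as pd

# Toy decision log; in practice, join model outputs with demographic data.
df = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 100,
    "selected": [1] * 60 + [0] * 40 + [1] * 35 + [0] * 65,
})

rates = df.groupby("group")["selected"].mean()      # selection rate per group
impact_ratio = rates.min() / rates.max()

print(rates)
print(f"adverse impact ratio: {impact_ratio:.2f}")  # 0.35 / 0.60 ≈ 0.58
if impact_ratio < 0.8:
    print("below 0.8: investigate before a regulator does")
```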

Treat alignment as ongoing governance, not a one-time check. AI systems drift. The data changes. The optimization targets evolve. The alignment between what the system does and what you want it to do needs continuous monitoring, not a single pre-deployment review.
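For the monitoring piece, one common heuristic is the Population Stability Index: compare the live score distribution against a frozen pre-deployment baseline and alert when it shifts. A minimal sketch, with the caveat that the 0.1 and 0.25 thresholds are conventions, not standards:

```python
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between frozen baseline and live scores."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    current = np.clip(current, edges[0], edges[-1])  # keep scores in range
    b = np.histogram(baseline, edges)[0] / len(baseline)
    c = np.histogram(current, edges)[0] / len(current)
    b, c = np.clip(b, 1e-6, None), np.clip(c, 1e-6, None)  # avoid log(0)
    return float(np.sum((c - b) * np.log(c / b)))

rng = np.random.default_rng(1)
baseline_scores = rng.normal(0.50, 0.10, 5_000)  # pre-deployment snapshot
current_scores = rng.normal(0.57, 0.13, 5_000)   # drifted production scores
print(f"PSI: {psi(baseline_scores, current_scores):.3f}")
# Rule of thumb: <0.1 stable, 0.1-0.25 watch, >0.25 investigate.
```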

The AI alignment problem isn’t about Skynet. It’s about a chatbot making promises your company can’t keep, a hiring tool discriminating against candidates you want to hire, and a pricing algorithm breaking laws you didn’t know applied. And as AI-related lawyer sanctions hit record levels, even the professionals you hire to protect you are getting burned by alignment failures in the tools they use. Fix it now, or explain it to a regulator later.


Misaligned AI doesn’t send you a warning. It sends you a subpoena. Kaizen AI Lab helps companies audit their AI systems for alignment gaps before regulators do it for them. Talk to us.

