
"As Soon as It Works, No One Calls It AI Anymore." The Definitional Problem Every Business Faces.

Don Ho

Last updated: April 2026

The EU AI Act, Colorado’s AI law, and federal regulators all use different definitions of “artificial intelligence,” and no federal statute defines the term at all. That is an operational crisis for any business trying to write AI policies, negotiate vendor contracts, or comply with laws that regulate a technology nobody agrees how to define. Larry Tesler captured the pattern decades ago: “AI is whatever hasn’t been done yet.” Optical character recognition was AI in the 1970s. Spell check was AI in the 1980s. Spam filters were AI in the 2000s. Today, nobody calls any of those things artificial intelligence. They’re just software.

This pattern creates a real operational problem. If your business can’t define AI, you can’t write policies around it, govern it, insure against it, or comply with laws that regulate it. And right now, every regulator in the country is writing laws that use different definitions of the same term.

The EU Says One Thing. Colorado Says Another. Congress Says Nothing.

The EU AI Act defines an “AI system” as a machine-based system that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. That’s broad. It could cover a weighted Excel formula. And as the state-by-state regulatory patchwork grows, the definitions only get more inconsistent.

Colorado SB 24-205 uses a narrower definition focused on “high-risk AI systems” that make “consequential decisions” about consumers in areas like employment, lending, insurance, and housing. That’s more targeted but still vague on the technical boundary. And Colorado’s enforcement timeline keeps shifting, which means even the regulators aren’t sure what they’re regulating yet.

The federal government hasn’t passed comprehensive AI legislation. Executive orders reference AI without defining it consistently. The NIST AI Risk Management Framework describes characteristics of AI systems but explicitly avoids a rigid definition.

For a general counsel trying to write a company-wide AI policy, this is a mess. Which definition do you use? The answer depends on where you operate, what industry you’re in, and which regulators you answer to.

Take the ACRA to see which of your tools qualify as AI under current state definitions.

Why Your Contracts Are Exposed

I review AI vendor contracts weekly. Most of them reference “AI” or “artificial intelligence” without defining the term anywhere in the agreement. That means the parties have different assumptions about what’s covered.

Ask yourself: does your vendor contract cover a recommendation engine? A rules-based chatbot? A predictive analytics model trained on your customer data? What about an automated workflow that uses if-then logic but no machine learning?

Without a definition, your indemnification clause, your data processing terms, and your liability limitations are all ambiguous. Ambiguity in contracts is litigation fuel.

Here’s what I see over and over: companies sign AI vendor agreements that reference “AI-generated outputs” in the warranty section but never define what counts as AI-generated versus software-generated versus human-assisted. When something goes wrong, the vendor says “that feature isn’t AI, it’s rules-based automation.” The customer says “it’s marketed as AI on your website.” Both are right. Neither can prove it from the contract.

Insurance Doesn’t Know What to Cover Either

Cyber insurance carriers are adding AI exclusions to policies. But they’re excluding something they can’t define. Hartford, Travelers, and Chubb have all introduced endorsements addressing “artificial intelligence” in their commercial liability and E&O policies. The definitions vary across carriers.

One carrier I reviewed defines AI as “any system that uses machine learning.” Another defines it as “any automated decision-making system.” A third references the OECD definition. If you’re a risk manager, you need to know exactly which definition your carrier is using, because it determines whether your claim gets paid.

The “AI Washing” Problem Makes It Worse

The SEC charged Delphia Inc. and Global Predictions Inc. in 2024 for “AI washing,” making false claims about using AI in their investment processes. Total penalties: $400,000. But the underlying problem is deeper than marketing fraud. Companies are calling things AI that aren’t, and calling things “not AI” that are, depending on what’s convenient.

When a vendor says “AI-powered” on the sales page but “rules-based automation” in the contract, that’s not just inconsistent messaging. That’s a definitional arbitrage that will eventually bite someone. The EU AI Act’s logging obligations for high-risk systems arrive in August 2026, and they force companies to answer this question: you can’t log what you haven’t defined.

What to Do Now

Pick a working definition and use it everywhere. I recommend starting with the NIST AI RMF definition and narrowing it for your context. Put the definition in your AI policy, your vendor contracts, your insurance applications, and your employee training materials. The same words, in every document.

Audit your vendor contracts for undefined terms. If a contract mentions AI, artificial intelligence, machine learning, or automated decision-making without a definitions section, flag it. Negotiate a definition into the next renewal.
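The first pass doesn’t need expensive tooling. Here’s a minimal Python sketch, assuming your contracts are exported as plain text into a ./contracts folder; the term list and the “Definitions” heading check are starting points you’d tune to your own templates, not a substitute for a lawyer reading the flagged agreements:

```python
import re
from pathlib import Path

# Terms that should never appear in an agreement without a definition.
AI_TERMS = re.compile(
    r"\b(artificial intelligence|AI|machine learning|"
    r"automated decision[- ]making)\b",
    re.IGNORECASE,
)

# Rough signal that the agreement has a definitions section at all.
DEFINITIONS_HEADING = re.compile(
    r"^\s*(\d+\.\s*)?definitions\b", re.IGNORECASE | re.MULTILINE
)

def flag_contracts(folder: str) -> list[str]:
    """Return contracts that use AI terms but never define them."""
    flagged = []
    for path in sorted(Path(folder).glob("*.txt")):
        text = path.read_text(errors="ignore")
        if AI_TERMS.search(text) and not DEFINITIONS_HEADING.search(text):
            flagged.append(path.name)
    return flagged

if __name__ == "__main__":
    for name in flag_contracts("./contracts"):
        print(f"FLAG: {name} references AI with no definitions section")
```

A script like this only tells you where to look. The point is triage: surface the agreements that use the words without the definitions, then put a human on them.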

Map your definition to your regulatory exposure. If you operate in Colorado, your internal definition needs to encompass everything the Colorado AI Act covers. If you sell into the EU, it needs to align with the EU AI Act. Build a crosswalk between your definition and each applicable regulation.
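A crosswalk can start as a simple table your team maintains. Here’s a hypothetical sketch in Python; the system names and coverage calls below are illustrative placeholders, not legal determinations about any actual statute:

```python
# Hypothetical crosswalk: which regulatory definitions capture which systems.
# Every entry is an illustrative placeholder, not a legal conclusion.
CROSSWALK: dict[str, dict[str, bool]] = {
    "resume-screening model": {"EU AI Act": True,  "Colorado SB 24-205": True},
    "credit-scoring model":   {"EU AI Act": True,  "Colorado SB 24-205": True},
    "recommendation engine":  {"EU AI Act": True,  "Colorado SB 24-205": False},
    "rules-based chatbot":    {"EU AI Act": False, "Colorado SB 24-205": False},
}

def exposure(system: str) -> list[str]:
    """List the regulations whose AI definition covers a given system."""
    return [reg for reg, covered in CROSSWALK.get(system, {}).items() if covered]

print(exposure("resume-screening model"))  # ['EU AI Act', 'Colorado SB 24-205']
```

The value isn’t the code; it’s forcing every system in your inventory to get an explicit yes or no against each regulation you answer to, with someone accountable for the answer.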

Stop treating the definitional question as academic. It’s operational. The company that can’t define AI in its own policies is the company that can’t prove compliance when the regulator comes asking. And the stakes go beyond compliance: the alignment problem means your AI systems may be pursuing goals you never intended, a risk you can’t manage if you haven’t even defined which systems count as AI. Once you’ve settled on a definition, the 5-Layer AI Compliance Stack gives you a governance framework to build on.


If you can’t define AI in your own contracts, neither can your vendors, your insurers, or the regulator about to audit you. We fix that. Book a diagnostic and walk out with a definition that actually holds up.
