Frequently Asked Questions

Everything you need to know about working with Kaizen AI Lab. Can't find what you're looking for? Book a call and we'll answer it directly.

We work with businesses between $5M and $100M in revenue, typically 10 to 200 employees. Large enough that AI can move the needle on operations, small enough that the big consultancies won't return your call. If you have real workflows and real bottlenecks, we can help.
Something is working in week one. Not a proof of concept. Not a slide deck. An actual system running inside your business. Our diagnostic takes 1 to 2 hours. Full implementations run 1 to 4 weeks depending on scope, but you will see measurable impact before the first invoice is due.
Your data never trains models. Every Kaizen Colony™ deployment runs in a dedicated, isolated colony. No data crosses colony boundaries. We use enterprise-grade API agreements with AI providers that contractually prohibit training on your inputs. Credentials are managed through centralized secret vaults with automated rotation and hourly leak detection.
No. We build for operators, not engineers. Your team interacts with AI systems through the tools they already use: email, Slack, spreadsheets, CRMs. If someone on your team can write an email, they can work with what we deploy. We handle the technical layer so you don't have to hire for it.
We have deep operational experience across law firms, lending, real estate, food and beverage, insurance, healthcare, and education. Our SME library covers 20+ niche verticals with industry-specific AI playbooks. If your business has repeatable workflows and human bottlenecks, the patterns transfer regardless of industry.
An AI engineer builds technology. A SaaS tool gives you someone else's workflow. We do neither. We embed in your business, learn how it actually runs, and build AI systems around your existing operations. You get a custom-built workforce of AI agents with built-in governance and continuous improvement, not a generic tool that needs a full-time admin to maintain.
It's a 30-minute conversation, not a sales pitch. We ask about your operations, where time gets wasted, and what you've already tried. By the end, you'll know whether AI can meaningfully impact your business and roughly what the ROI looks like. No commitment required.
Kaizen Colony™ is our proprietary multi-agent operating system. Instead of one AI tool doing one thing, Kaizen Colony™ orchestrates an entire colony of specialized AI agents that work together, monitor each other, and improve over time. It includes a built-in Guardian audit layer, self-healing capabilities, and three-tier reporting so you always know exactly what your AI workforce is doing and how much value it's delivering.
If you use AI in any customer-facing, employee-facing, or decision-making capacity — yes. The EU AI Act applies to any company that places an AI system on the EU market or whose AI outputs are used in the EU, wherever the company is headquartered. Colorado's AI Act (effective June 30, 2026) covers high-risk AI decisions. NYC Local Law 144 mandates bias audits for AI hiring tools. And the patchwork is growing. Take the ACRA at acra.kaizenailab.com to find out specifically which regulations apply to you.
Implementation is building the AI system. Governance is making sure it's fair, transparent, documented, and compliant with applicable laws. Most AI consultants only do implementation. We do both — in one engagement — so you don't accumulate 'compliance debt' that becomes expensive to fix later.
Yes. Our AI Compliance Diagnostic can assess any existing AI system against applicable regulatory frameworks. We'll tell you where you're exposed and give you a remediation roadmap. If the system needs modifications, we can either guide your existing vendor or implement the fixes ourselves.
We align to NIST AI Risk Management Framework (AI RMF), ISO/IEC 42001 (AI Management Systems), and applicable regulations including the EU AI Act, Colorado AI Act, NYC Local Law 144, and industry-specific requirements (HIPAA, FCRA, ECOA, etc.). The specific framework mix depends on your industry, geography, and use cases.
Law firms can advise on legal requirements but can't build or fix the AI systems. We do both. Our founder is an attorney — so you get legal-grade compliance understanding AND technical implementation from one team.
EU AI Act penalties: up to €35M or 7% of global annual revenue, whichever is higher. A single AI discrimination lawsuit: $500K-$5M. Reputational damage: incalculable. Our compliance-embedded implementations are a fraction of those costs. The math is clear.
Our diagnostic takes 1-2 weeks. Implementation with governance takes 4-12 weeks depending on complexity. Most clients go from 'zero governance' to 'audit-ready' in under 90 days.
Most AI consultancies are founded by engineers or data scientists who've never operated a business. Our founder has built, scaled, and sold businesses across multiple industries. We don't just understand AI. We understand what it's like to make payroll, manage inventory, deal with compliance, and keep customers happy. That context changes everything about how we build.
It means exactly that. During implementation, we identify one high-impact workflow and deploy an automation for it within your first 7 days. It might be automating client intake follow-ups, building a document generator, or setting up an AI assistant for your team. It won't be your entire system, but it will be real, working, and saving your team time immediately.
An AI impact assessment evaluates how an AI system affects individuals and groups before and during deployment, covering discrimination risk, privacy impact, transparency, and accountability gaps. The EU AI Act mandates impact assessments for high-risk AI systems. Colorado's AI Act requires deployers to complete impact assessments for high-risk AI used in consequential decisions. The NIST AI RMF recommends them as a core governance practice. An impact assessment is not a one-time checkbox. It is a living document updated when your AI system changes, when new data sources are added, or when regulations evolve. We include impact assessments as a standard deliverable in every Kaizen engagement. Take the ACRA at acra.kaizenailab.com to find out if your AI systems require one.
NYC Local Law 144 requires any employer or employment agency using an automated employment decision tool (AEDT) in New York City to conduct an independent bias audit before using the tool and to publish the results. If your AI screens resumes, scores candidates, or makes promotion recommendations for NYC-based roles, you almost certainly need a bias audit. The audit must be conducted by an independent auditor and the results must be publicly available on your website. Penalties run up to $1,500 per violation per day. Even if you are not in NYC, this law is becoming the template. Illinois, Maryland, and other states are following suit. Our AI Compliance Diagnostic identifies whether your hiring tools trigger LL144 or similar requirements.
Your IT team can handle the technical infrastructure. What they typically cannot handle is the regulatory analysis, governance framework design, bias testing methodology, and documentation that compliance requires. AI compliance sits at the intersection of law, operations, and technology. It requires someone who understands all three. Most mid-market companies do not have that profile on staff. A consultant fills the gap without the overhead of a full-time hire, gets you compliant faster, and transfers the knowledge so your team can maintain it going forward.
Yes, but only with proper guardrails. AI hiring tools are among the most regulated AI applications in the United States. NYC Local Law 144 requires bias audits. Illinois's Artificial Intelligence Video Interview Act requires disclosure and consent. Colorado's AI Act classifies AI hiring decisions as high-risk. The EEOC has issued guidance treating AI-driven adverse impact the same as human-driven discrimination. To use AI in hiring legally, you need bias testing, disclosure notices, human oversight procedures, and documentation of your validation process. We build these governance layers into AI hiring implementations from day one.
There is no single universal checklist because compliance requirements vary by jurisdiction, industry, and AI use case. But the core elements are consistent: inventory all AI systems in use, classify each by risk level, conduct impact assessments for high-risk systems, document data sources and decision logic, implement bias testing and monitoring, establish human oversight procedures, create incident response protocols, maintain audit trails and logging, publish required disclosures, and schedule regular reassessments. Our AI Compliance Diagnostic builds a customized checklist for your specific regulatory exposure, not a generic template. Book a diagnostic at cal.com/dhoesq/kaizen.
The Colorado AI Act (SB 205) was signed into law in 2024 and takes effect June 30, 2026 (delayed from its original February 1, 2026 date). It requires developers and deployers of high-risk AI systems to use reasonable care to avoid algorithmic discrimination. High-risk means AI that makes or substantially contributes to consequential decisions in employment, education, financial services, healthcare, housing, insurance, or legal services. Deployers must conduct impact assessments, provide notice to consumers, and implement risk management policies. Developers must provide technical documentation. The Colorado Attorney General has exclusive enforcement authority. If your business uses AI for any consequential decision affecting Colorado residents, you need a compliance plan. Our diagnostic identifies your specific exposure.
An AI governance framework is the set of policies, procedures, roles, and controls that determine how your organization develops, deploys, monitors, and retires AI systems. Think of it as the operating manual for responsible AI use. If you are using AI in any capacity beyond personal productivity tools, yes, you need one. The EU AI Act, Colorado AI Act, and NIST AI RMF all assume organizations have governance structures in place. Without a framework, you are making compliance decisions ad hoc, which means inconsistently, which means expensively when something goes wrong. We build governance frameworks as part of every implementation engagement.
An AI risk assessment evaluates the potential harms and benefits of an AI system across its lifecycle. It covers bias risk, privacy risk, security risk, transparency gaps, and operational failures. The NIST AI Risk Management Framework provides the most widely adopted methodology. You should conduct an initial assessment before deploying any AI system and reassess at minimum annually, or whenever the system is materially updated, the data inputs change, or new regulations take effect. For high-risk AI under the EU AI Act or Colorado AI Act, ongoing monitoring is not optional. Our diagnostic includes a risk assessment aligned to NIST AI RMF.
The Big Four will sell you a 200-page AI strategy document written by junior consultants, bill you $500K, and leave you to figure out implementation. We build the actual systems. Our engagements start at $2,500 for a diagnostic and $12,000 for a full implementation. You get a working AI system with governance baked in, not a PowerPoint deck. Our founder has 25 years of operating experience across multiple industries. We understand AI compliance because we have built compliant AI systems, not because we read about it in a framework document.
Penalties vary by regulation. EU AI Act: up to 35 million euros or 7% of global annual revenue, whichever is higher. Colorado AI Act: enforcement by the state Attorney General with civil penalties. NYC LL144: up to $1,500 per violation per day. Beyond fines, you face class action lawsuits from affected individuals, regulatory investigations, reputational damage, and forced system shutdowns. The cost of non-compliance almost always exceeds the cost of doing it right from the start. Take the ACRA at acra.kaizenailab.com to understand your current exposure.
The EU AI Act is the world's first comprehensive AI regulation. It entered into force in 2024, with obligations phasing in through 2027. It applies to any organization that places an AI system on the EU market or whose AI system's output is used in the EU, regardless of where the company is headquartered. If your AI touches EU citizens' data, makes decisions about EU residents, or your product is used by EU-based customers, you likely fall under its scope. The Act classifies AI into risk tiers: unacceptable (banned), high-risk (heavily regulated), limited risk (transparency obligations), and minimal risk (no requirements). Most business AI falls into the high-risk or limited-risk categories. Our compliance diagnostic maps your AI systems against EU AI Act requirements.
The EU AI Act defines high-risk AI systems in Annex III. The categories include AI used in employment and worker management, access to essential services (credit, insurance, housing), education and vocational training, law enforcement, migration and border control, and administration of justice. If your AI system scores job candidates, determines creditworthiness, sets insurance premiums, makes admissions decisions, or performs any consequential assessment of natural persons, it is likely high-risk. High-risk classification triggers mandatory requirements for risk management, data governance, transparency, human oversight, accuracy, robustness, and cybersecurity. We help you classify your systems and build the required compliance documentation.
AI compliance is meeting specific legal requirements. AI governance is the broader system of policies, roles, and processes that ensures your AI is developed and used responsibly. Compliance is the floor. Governance is the operating system. You can be technically compliant with a specific regulation while still having no governance framework, which means the next regulation that comes along requires starting from scratch. A strong governance framework makes compliance with any individual regulation faster, cheaper, and more sustainable. We build governance-first, which makes compliance a byproduct rather than a fire drill.
Lending is one of the most heavily regulated verticals for AI. The Equal Credit Opportunity Act (ECOA) and Fair Credit Reporting Act (FCRA) already prohibit discriminatory lending decisions, and regulators are explicitly applying these to AI-driven underwriting. The CFPB has issued guidance requiring adverse action notices that explain AI-driven credit denials in specific, understandable terms. The EU AI Act classifies AI used in creditworthiness assessment as high-risk. Colorado's AI Act covers AI in financial services. HUD's disparate impact rule applies to AI-driven mortgage decisions. If you are using AI for underwriting, fraud detection, pricing, or collections, you need a compliance framework that covers all applicable federal and state requirements. This is one of our deepest verticals.

Kaizen AI Lab

Ready to Deploy AI in Your Business?

Schedule a discovery call with our AI consulting team. We'll map your operations, identify leverage points, and show you exactly where AI moves the needle.

Book a Consulting Call

Adjacent Media by Kaizen Labs

Is Your Brand Visible to the Bots?

Get a free GEO audit and find out if your brand is being cited, found, or completely invisible in AI-generated answers. Then let's fix it.

Get a Free GEO Audit