AI Governance Framework: What It Is and How to Build One
By Don Ho · April 2026 · 20 min read
An AI governance framework is a three-layer decision-making architecture, spanning board oversight, operational policy, and technical controls, that defines who can approve AI deployments, how AI systems are monitored after launch, and what happens when something goes wrong.
I build AI governance frameworks for a living. I've been an entrepreneur for more than two decades and have advised companies across industries as an attorney; today I'm Founder & CEO of Kaizen AI Lab. And the phrase "AI governance framework" gets thrown around in board meetings, consultant pitch decks, and LinkedIn posts like it's a finished product you can buy off the shelf. It's not. Most of what gets sold as AI governance is a binder of policies that nobody follows, a risk matrix nobody updates, and a quarterly board report that says "AI risks are being managed" without explaining how.
That’s not governance. That’s documentation theater.
A real AI governance framework is a decision-making architecture. It defines who in your organization can approve AI deployments, what criteria they use, how AI systems are monitored after deployment, and what happens when something goes wrong. It connects board-level risk oversight to operational execution to technical controls. It has teeth. People follow it because the alternative is personal liability, regulatory enforcement, or both.
This guide breaks down what an AI governance framework actually contains, why most frameworks fail, and how to build one that works. If you’ve already read the AI compliance guide for mid-market companies, this is the structural companion. Compliance tells you what the rules are. Governance tells you how your organization follows them.
Why Most Governance Frameworks Fail
Before I walk through what a working framework looks like, you need to understand why the standard approach produces expensive garbage.
The consultant model is broken. A Big Four firm comes in, interviews 15 people, and produces a 200-page governance document that costs $300,000 and sits in SharePoint. It’s comprehensive. It’s well-organized. It’s completely disconnected from how the company actually makes decisions about AI. Nobody reads it. Nobody follows it. When the FTC sends a civil investigative demand two years later, the company produces the document and the FTC asks: “Great. Show us the evidence you followed it.” There is none.
Boards treat governance as a reporting exercise. The most common board-level AI governance I see is a slide in the quarterly risk committee deck. It says something like: “AI usage is governed by our Responsible AI Policy, adopted Q3 2025. No material incidents reported this quarter.” The board nods. The slide changes. That’s the entire AI governance conversation at the board level for 80% of the companies I work with. No one on the board has asked what AI systems are actually deployed. No one has asked what data they’re trained on. No one has asked what would happen if a model produced discriminatory outputs in a regulated domain. The board’s AI governance contribution is receiving a slide and not asking questions about it.
Companies confuse policies with governance. A policy is a document. Governance is a system. An AI acceptable use policy is not governance. An AI risk assessment template is not governance. A vendor due diligence checklist is not governance. These are artifacts within a governance framework, but without the surrounding structure (roles, authority, workflows, accountability, monitoring), they’re just paper.
Governance gets delegated to the wrong level. The CEO says “we need AI governance” and delegates it to the CISO, the GC, or the Chief Data Officer. That person writes some policies, gets them approved, and reports back that governance is in place. But governance that lives in a single functional silo doesn’t work. AI decisions cross every boundary in the organization. Engineering decides what models to use. Product decides what features to ship. Legal evaluates regulatory exposure. Procurement negotiates vendor contracts. HR deploys AI in hiring. Finance uses AI for forecasting. No single function has visibility across all of these. Governance needs a cross-functional structure or it’s just one department’s opinion.
The Three Layers of AI Governance
A working AI governance framework operates on three layers: board, operational, and technical. Each layer has a different scope, different participants, and different cadence. Most companies have fragments of one layer. Almost none have all three connected.
Layer 1: Board-Level Governance
Board-level governance sets the boundaries. It defines the organization’s risk appetite for AI, establishes oversight structures, and holds management accountable for execution.
Risk appetite. The board needs to answer a question most boards have never discussed: how much AI risk is this organization willing to accept? That answer varies. A financial services company deploying AI for credit decisions has a different risk appetite than a marketing agency using AI for content generation. But the question needs to be asked and answered explicitly, because without a defined risk appetite, every AI decision becomes ad hoc. The engineer who deploys a new model is making a risk decision. The product manager who adds an AI feature is making a risk decision. The employee who pastes customer data into ChatGPT is making a risk decision. None of them know what level of risk the board considers acceptable because the board never said.
Risk appetite should be documented and specific. Not “we accept moderate risk” (that means nothing). Specific: “We will deploy AI in customer-facing applications only after bias testing against [defined metrics]. We will not deploy AI for autonomous decision-making in [lending, hiring, insurance] without human review. We accept the risk of AI-generated content for [internal use cases] with [defined guardrails].”
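One way to give a risk appetite statement teeth is to encode it in a machine-readable form that approval tooling can check against. Here's a minimal sketch in Python; the use-case categories, prohibited domains, and precondition names are illustrative placeholders, not recommendations, and every value would come from your board's actual decisions:

```python
# A minimal, machine-readable risk appetite statement. The categories,
# domains, and preconditions below are hypothetical placeholders --
# each organization's board defines its own.
RISK_APPETITE = {
    "customer_facing": {
        "preconditions": ["bias_testing_passed", "security_review_passed"],
    },
    "autonomous_decisions": {
        # Consequential domains where autonomous AI decisions are off-limits
        # without human review (mirrors the example language above).
        "prohibited_domains": ["lending", "hiring", "insurance"],
    },
    "generated_content": {
        "preconditions": ["human_editor_signoff"],
    },
}

def deployment_permitted(use_case: str, domain: str, evidence: set[str]) -> bool:
    """Check a proposed deployment against the documented risk appetite."""
    policy = RISK_APPETITE.get(use_case)
    if policy is None:
        return False  # undefined use cases default to "not permitted"
    if domain in policy.get("prohibited_domains", []):
        return False
    return all(p in evidence for p in policy.get("preconditions", []))

print(deployment_permitted("customer_facing", "retail",
                           {"bias_testing_passed", "security_review_passed"}))  # True
print(deployment_permitted("autonomous_decisions", "lending", set()))           # False
```

The point isn't the code; it's that "we accept moderate risk" can't be evaluated by a program, and a statement that can't be evaluated can't be enforced.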
Oversight structure. Someone at the board level needs to own AI oversight. The three models I see are:
A dedicated AI committee of the board (rare, usually only at companies where AI is the core business).
An expanded mandate for the existing risk committee or audit committee (most common and usually sufficient).
A designated board member with AI expertise who serves as the board's AI liaison (practical for smaller boards).
The structure matters less than the commitment. Whichever model you pick, the body or individual needs regular reporting, the authority to ask hard questions, and the ability to block AI deployments that exceed the organization’s risk appetite.
Reporting cadence. Quarterly is the minimum. For companies in regulated industries or with high-risk AI deployments, monthly is better. The reports should include: the current AI system inventory (what changed since last period), incidents and near-misses, regulatory developments that affect the company’s AI operations, compliance status against the company’s own framework, and planned deployments with risk assessments.
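To keep that report consistent from period to period, some teams define it as a fixed structure rather than a free-form slide. A hedged sketch, with field names of my own choosing that mirror the list above:

```python
from dataclasses import dataclass, field

@dataclass
class BoardAIReport:
    """Skeleton for the recurring board report; fields mirror the list above."""
    period: str
    inventory_changes: list[str] = field(default_factory=list)   # systems added, retired, changed
    incidents: list[str] = field(default_factory=list)           # incidents and near-misses
    regulatory_updates: list[str] = field(default_factory=list)  # new laws, guidance, enforcement
    framework_compliance: dict[str, str] = field(default_factory=dict)  # control -> status
    planned_deployments: list[str] = field(default_factory=list) # with risk assessments attached

report = BoardAIReport(period="2026-Q2")
report.incidents.append(
    "Chatbot near-miss: unauthorized refund language, caught by monitoring")
```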
The board doesn’t need to understand the technical details of every model. They need to understand what AI systems are running, what decisions those systems influence, what went wrong (or almost went wrong), and whether management is following the governance framework that the board approved.
Personal liability is the accelerant. Directors and officers are starting to understand that AI governance failures create personal exposure. When a company deploys an AI system that discriminates in lending and the board never asked about bias testing, the resulting enforcement action won’t stop at the company level. The regulatory patchwork is creating obligations that attach to individuals, not just entities. Board members who can’t demonstrate they exercised oversight over AI risk are in the same position as board members who ignored cybersecurity risk in 2015. We know how that played out.
Layer 2: Operational Governance
Operational governance translates the board’s risk appetite into daily execution. This is where most of the work happens. It covers policies, roles, workflows, and vendor management.
AI governance roles. Every governance framework needs clearly assigned roles. At minimum:
An AI governance lead (or committee) with cross-functional authority. This person or group has the authority to approve, modify, or block AI deployments. They report to the board or its designated committee. In mid-market companies, this is often the GC or a senior leader with both business and technical literacy. In larger companies, it might be a dedicated AI governance function.
System owners for each AI deployment. The person accountable for a specific AI system’s compliance, performance, and risk profile. Not the vendor. Not IT in general. A named individual who owns the outcomes.
A designated privacy and data protection liaison for AI-related data flows. AI systems ingest, process, and generate data constantly. Someone needs to own the intersection of AI governance and data privacy. If your AI governance framework doesn’t talk to your data privacy program, you have two incomplete programs instead of one working one.
Policies. At a minimum, your governance framework should include:
An AI acceptable use policy that defines what employees can and can’t do with AI tools. This needs to address shadow AI directly. If you don’t give employees clear rules, they’ll make their own. A 2024 Microsoft survey found 78% of AI users at work brought their own tools. Your policy needs to account for the fact that employees are already using AI whether you’ve approved it or not.
An AI risk assessment policy that defines when risk assessments are required, who conducts them, what criteria they evaluate, and who approves the results. Not every AI tool needs the same level of assessment. A code completion tool used for internal development needs less scrutiny than a customer-facing chatbot that handles sensitive inquiries. The five-layer compliance stack provides a risk classification model that maps directly to your assessment requirements.
An AI vendor management policy that covers due diligence, contract requirements, ongoing monitoring, and exit planning. Your vendors’ AI practices are your AI practices. When Google shut down a lawyer’s NotebookLM account without warning, the lawyers who relied on that tool for client work had no fallback plan. Vendor management isn’t just about onboarding. It’s about understanding what happens when a vendor changes its terms, gets acquired, or simply disappears.
An AI incident response policy that defines what constitutes an AI-related incident, who gets notified, what the investigation and remediation process looks like, and how the organization communicates about it. The worst time to figure out your incident response process is during an incident.
Approval workflows. New AI deployments need a defined path from request to approval. That path should include a risk assessment, a data privacy review, a legal review (especially for customer-facing or decision-making systems), sign-off from the appropriate authority based on risk level, and documentation of the approval decision.
Low-risk deployments (Tier 3 in the compliance stack model) can follow a streamlined approval path. High-risk deployments should require AI governance committee approval and, for the most sensitive applications, board notification.
The approval workflow should also cover material changes to existing AI systems. A vendor model update, a change in data inputs, an expansion of the system’s scope. These aren’t new deployments, but they can change the risk profile of an existing system. Your governance framework needs to catch them.
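As a sketch of how tier-based routing and material-change triggers might look in code: the tier numbers follow the compliance stack model referenced above, but the approver assignments and the specific change triggers are illustrative assumptions, not prescriptions.

```python
# Illustrative approval routing. Tier 1 = highest risk, Tier 3 = lowest.
APPROVERS_BY_TIER = {
    1: "ai_governance_committee_plus_board_notification",
    2: "ai_governance_committee",
    3: "system_owner_with_governance_lead_signoff",  # streamlined path
}

# Changes to an existing system that re-enter the approval workflow.
MATERIAL_CHANGE_TRIGGERS = {
    "vendor_model_update",
    "new_data_inputs",
    "scope_expansion",
}

def route_approval(tier: int) -> str:
    """Map a risk tier to the authority that must sign off."""
    return APPROVERS_BY_TIER[tier]

def requires_reassessment(change_type: str) -> bool:
    """Material changes get a fresh risk assessment, not a rubber stamp."""
    return change_type in MATERIAL_CHANGE_TRIGGERS

print(route_approval(3))
print(requires_reassessment("vendor_model_update"))  # True
```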
Vendor management. I’ve written about platform dependency risk and government procurement standards for AI in separate articles, but vendor management deserves specific attention in the governance context.
Your vendor contracts need to address: model transparency (what model version is running, when it changes, and notification requirements), data handling (where your data goes, who can access it, how long it’s retained, and whether it’s used for training), security and access controls, compliance certifications and audit rights, incident notification and response obligations, service continuity and exit provisions.
Most companies sign the vendor’s standard terms and never negotiate AI-specific provisions. That’s governance failure at the procurement level. The GSA’s approach to AI procurement provides a model worth studying, because if the federal government is adding AI-specific terms to its contracts, your company should be too.
Layer 3: Technical Governance
Technical governance is the enforcement mechanism. Board-level governance sets expectations. Operational governance defines processes. Technical governance proves that those processes are actually working through logging, monitoring, testing, and access controls.
Logging. Every AI system in your inventory should produce logs that capture inputs, outputs, decision parameters, and user interactions at a level sufficient to reconstruct what the system did and why. This isn’t optional for companies subject to the EU AI Act. The August 2026 logging deadline requires automatic logging of AI system events over the system’s lifetime, documentation for deployers to interpret logs, and a minimum six-month retention period.
Even if you’re not subject to the EU AI Act, logging is the foundation of AI accountability. When a bias claim surfaces, when a regulatory inquiry arrives, when an output goes wrong, the first question is always: “What did the system do?” Without logs, you can’t answer it.
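Here's a minimal sketch of what decision-level logging can look like in application code, using only the Python standard library. The `call_model` parameter is a stand-in for whatever inference call you actually make, the field names are my own, and EU AI Act deployers should map them to the Act's actual documentation requirements rather than treating this as a compliance template:

```python
import json
import logging
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")

RETENTION_DAYS = 183  # at least six months, per the minimum described above

def logged_inference(call_model, system_id: str, user_id: str, prompt: str, **params):
    """Wrap a model call so every invocation leaves a reconstructable trail:
    inputs, outputs, decision parameters, and the user who triggered it."""
    event_id = str(uuid.uuid4())
    output = call_model(prompt, **params)
    logger.info(json.dumps({
        "event_id": event_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "user_id": user_id,
        "input": prompt,
        "parameters": params,   # decision parameters in effect for this call
        "output": output,
        "retention_days": RETENTION_DAYS,
    }))
    return output

# Usage with a stand-in model function:
result = logged_inference(lambda p, **kw: p.upper(), "chatbot-v2", "u-123", "hello")
```

In production you'd ship these events to durable, access-controlled storage rather than a local logger, but the shape of the record is the part that matters for reconstruction.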
Monitoring. Logging captures what happened. Monitoring catches what’s going wrong in real time (or close to it). Technical monitoring should track:
Output quality degradation. AI models drift. A model that performed well at deployment can degrade over months as input patterns shift, training data becomes stale, or the vendor makes updates. Without monitoring, you won’t know until a customer complains or a regulator notices.
Compliance drift. New regulations take effect. Existing requirements get updated. Your AI system’s compliance status is not static. The regulatory environment is evolving continuously, and a system that was compliant six months ago may not be compliant today.
Anomalous behavior. AI systems can produce unexpected outputs. A chatbot that suddenly starts making policy promises the company doesn’t support. A recommendation engine that starts showing patterns that correlate with protected characteristics. A coding assistant that introduces security vulnerabilities at higher rates after a model update. Monitoring is how you catch these problems before they become incidents.
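As a concrete starting point for output-quality monitoring, here's a minimal drift check. It assumes you log a numeric quality score per output (a rubric score, an evaluation metric, a confidence value) and that scores are comparable across periods; a rolling baseline comparison like this won't catch everything, but it turns "a customer complained" into "the monitor flagged it":

```python
from statistics import mean, stdev

def drift_alert(baseline: list[float], recent: list[float],
                z_threshold: float = 3.0) -> bool:
    """Flag drift when the recent mean deviates from the baseline mean
    by more than z_threshold baseline standard deviations."""
    if len(baseline) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return mean(recent) != mu
    return abs(mean(recent) - mu) / sigma > z_threshold

# Quality scores from the first month of deployment vs. last week:
print(drift_alert([0.91, 0.93, 0.90, 0.92, 0.94], [0.71, 0.68, 0.73]))  # True
```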
Testing. Three categories matter:
Pre-deployment testing (bias testing, security testing, performance testing, adversarial testing). This is table stakes. Don’t deploy an AI system without testing it against the criteria your governance framework defines.
Ongoing testing (periodic re-evaluation against the same criteria). Models change. Data changes. The regulatory environment changes. Pre-deployment testing tells you the system was acceptable when you launched it. Ongoing testing tells you whether it still is.
Red-teaming (adversarial testing by people trying to break the system). Autonomous AI systems are especially important to red-team, because their failure modes can be more severe and less predictable than simple input-output systems. If your AI agent can take actions in the real world (sending emails, executing transactions, modifying records), you need to understand what it does when it encounters edge cases, adversarial inputs, or conflicting instructions.
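For the bias-testing piece of pre-deployment and ongoing testing, one widely used screening metric is the disparate impact ratio (the four-fifths rule of thumb from U.S. employment-selection guidance). A minimal sketch, with the caveat that a single ratio is a screen rather than a full fairness evaluation, and the 0.8 threshold is a convention, not a legal safe harbor:

```python
def disparate_impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (favorable_outcomes, total_evaluated).
    Returns each group's selection rate divided by the highest group's rate.
    A ratio below ~0.8 is a conventional flag for further investigation."""
    rates = {g: fav / total for g, (fav, total) in outcomes.items() if total > 0}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

ratios = disparate_impact_ratios({
    "group_a": (45, 100),   # 45% favorable outcomes
    "group_b": (30, 100),   # 30% favorable outcomes
})
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios)   # group_a: 1.0, group_b: ~0.67
print(flagged)  # group_b warrants deeper analysis
```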
Access controls. Who can deploy AI systems, who can modify them, who can access their outputs, and who can override their decisions. Access controls for AI systems should follow the same principles as access controls for any other sensitive system: least privilege, separation of duties, and audit trails. But AI adds specific considerations. Who can change model parameters? Who can update training data? Who can modify guardrails? Who can promote a model from testing to production? These are governance-relevant access decisions that most companies don’t track.
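A minimal sketch of what role-gated model promotion with an audit trail might look like; the roles and action names are hypothetical, and the point is that "who can promote a model to production" becomes an explicit, logged check rather than an implicit capability:

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai_access")

# Hypothetical role-to-action mapping. Separation of duties: the person
# who modifies guardrails is not the person who promotes to production.
PERMISSIONS = {
    "ml_engineer":     {"update_training_data", "modify_parameters"},
    "governance_lead": {"modify_guardrails", "approve_promotion"},
    "release_manager": {"promote_to_production"},
}

def authorize(user: str, role: str, action: str) -> bool:
    """Check least-privilege permissions and leave an audit record either way."""
    allowed = action in PERMISSIONS.get(role, set())
    audit.info("user=%s role=%s action=%s allowed=%s", user, role, action, allowed)
    return allowed

authorize("dana", "ml_engineer", "promote_to_production")  # False, and logged
```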
Model management. If you’re developing or fine-tuning models internally, you need version control, change management, and rollback capability. If you’re using third-party models, you need to track vendor model versions and understand when they change. A vendor model update can change your AI system’s behavior in ways that affect compliance, performance, and risk. Your governance framework needs a process for evaluating model changes before they take effect in production.
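Here's a sketch of catching a vendor model change before it silently takes effect, assuming the vendor exposes a model or version identifier in its API responses (many do, though the field name varies by vendor); the pinning scheme is illustrative:

```python
# Last versions that passed your risk assessment, per system.
PINNED_VERSIONS = {"summarizer": "vendor-model-2026-01-15"}

def check_model_version(system_id: str, reported_version: str) -> None:
    """Raise when the vendor reports a version we haven't evaluated,
    so the change routes through governance review before production use."""
    expected = PINNED_VERSIONS.get(system_id)
    if expected is not None and reported_version != expected:
        raise RuntimeError(
            f"{system_id}: vendor model changed {expected} -> {reported_version}; "
            "re-run the risk assessment before accepting the update"
        )

check_model_version("summarizer", "vendor-model-2026-01-15")  # OK, matches pin
```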
The Governance Maturity Model
I use a four-stage maturity model when I assess clients. Most companies think they’re at Stage 3. Most companies are at Stage 1.
Stage 0: Absent. No AI governance exists. Employees use AI tools without policies, guidelines, or oversight. The company has no inventory of its AI systems. No one at the leadership level is thinking about AI risk. This was every company two years ago. Some are still here, and their exposure is growing every quarter.
Stage 1: Reactive. The company has an AI policy. It was probably written after a board member read a news article about AI-generated legal briefs getting lawyers sanctioned or after a vendor asked whether the company had an AI governance program. The policy exists, but implementation is thin. There’s no systematic inventory. Risk assessments are ad hoc. Monitoring doesn’t exist. The company can produce a document that says “we govern AI.” It can’t produce evidence that the governance is working.
Most mid-market companies are here. The DIE Progress Unit framework would classify this as “Document without Implement.” The document exists. The practice doesn’t match.
Stage 2: Structured. The company has defined governance roles, documented policies, a working AI inventory, and risk assessment procedures that are actually being followed. Approval workflows exist and are enforced. The board receives regular reports. Some technical controls (logging, basic monitoring) are in place. Incident response has been defined and tested at least once.
This is where governance starts having real value. The company can demonstrate to regulators, auditors, and insurance carriers that it has a functioning governance program. Gaps remain (monitoring is inconsistent, vendor management is incomplete, testing is pre-deployment only), but the foundation is solid.
Stage 3: Integrated. Governance is embedded in the organization’s operating model. AI risk decisions are part of the standard business decision-making process, not a separate governance exercise bolted on afterward. Technical controls are comprehensive: continuous monitoring, automated compliance checking, regular testing cycles. The governance framework covers the full AI lifecycle from evaluation through deployment to decommissioning. Cross-functional coordination is working. The governance lead has real authority. The board engages substantively with AI risk.
Few companies are here. The ones that are have usually made governance a competitive advantage, either because they operate in a heavily regulated industry where it’s mandatory or because they sell AI-related products and governance credibility is part of their value proposition.
Stage 4: Adaptive. The governance framework evolves automatically in response to new regulations, new technology, new risk patterns, and organizational changes. Regulatory intelligence feeds directly into governance updates. Monitoring systems flag compliance drift and trigger reassessments. The organization doesn’t just follow its framework. It continuously improves it based on data, incidents, and external developments.
I haven’t seen a company at Stage 4 yet. This is the target state. Companies that get here will be the ones that treat governance as a system, not a project.
Where are you? Be honest. If you wrote an AI policy and haven’t looked at it since, you’re Stage 1. If you have a CISO who runs quarterly AI risk reviews but no one else is involved, you’re Stage 1 with better reporting. The gap between where companies think they are and where they actually are is the single biggest governance risk I see.
Building From Zero: A Practical Roadmap
If you’re starting from nothing (or from a policy that nobody follows, which is functionally the same thing), here’s how to build a governance framework that works. This isn’t a 12-month consulting engagement. It’s a 90-day sprint to get the foundation in place, followed by continuous improvement.
Weeks 1-2: Inventory and assess.
Catalog every AI system in your organization. Every vendor tool, every embedded AI feature, every internal model, every employee using ChatGPT on their phone. Don’t rely on IT to know what’s out there. Survey department heads. Check expense reports for AI subscriptions. Review browser extension policies. Shadow AI is real. You have more AI systems than you think.
For each system, capture: what it does, what data it processes, what decisions it influences, who uses it, and what vendor provides it.
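To keep the inventory from degrading into an untended spreadsheet, capture those fields in a consistent schema from day one. A minimal sketch, with field names of my own:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row in the AI system inventory; fields mirror the list above."""
    name: str
    purpose: str               # what it does
    data_processed: str        # what data it touches
    decisions_influenced: str  # what decisions it feeds
    users: str                 # who uses it
    vendor: str                # or "internal"
    risk_tier: int | None = None  # assigned in the weeks 3-4 classification pass

inventory = [
    AISystemRecord("support-chatbot", "answers customer billing questions",
                   "customer PII, account history", "refund eligibility guidance",
                   "support team, customers", "VendorCo"),
]
```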
Weeks 3-4: Classify and prioritize.
Assign risk tiers to every system in your inventory. High-risk systems (decisions about people, regulated domains, sensitive data, customer-facing) get governance first. Low-risk systems (internal productivity, non-sensitive data) get policies but lighter oversight.
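Classification can start as simple rules over the inventory fields. A minimal sketch using the high-risk criteria named above; the rule set is illustrative and should encode your own board-approved risk appetite:

```python
def assign_risk_tier(decisions_about_people: bool, regulated_domain: bool,
                     sensitive_data: bool, customer_facing: bool) -> int:
    """Tier 1 = highest risk, Tier 3 = lowest; thresholds are illustrative."""
    if decisions_about_people or regulated_domain:
        return 1
    if sensitive_data or customer_facing:
        return 2
    return 3  # internal productivity, non-sensitive data

print(assign_risk_tier(decisions_about_people=False, regulated_domain=False,
                       sensitive_data=False, customer_facing=True))  # 2
```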
Identify your top five AI governance gaps. These are the areas where your highest-risk systems have the least governance. This is your priority list for the next phase.
Weeks 5-8: Build the framework.
Draft the core policies: acceptable use, risk assessment, vendor management, incident response. Don’t try to write perfect policies. Write working policies. You’ll revise them after you’ve operated under them for a quarter.
Assign governance roles. Name the AI governance lead. Name system owners for high-risk deployments. Define reporting lines and approval authorities.
Design approval workflows for new AI deployments and material changes to existing ones. Keep them proportional to risk. A new internal productivity tool shouldn’t require the same approval process as a customer-facing AI system in a regulated domain.
Establish board-level reporting. Define what the board will receive, how often, and from whom. Get the first report on the calendar.
Weeks 9-12: Implement technical controls.
Enable logging on high-risk AI systems. If your vendor doesn’t support adequate logging, that’s a finding that goes into your vendor management process.
Set up baseline monitoring for output quality and anomalous behavior on your highest-risk systems.
Conduct pre-deployment testing (or retroactive testing, since many of your systems are already deployed) against defined criteria: bias, security, accuracy, compliance.
Document everything. Every decision, every assessment, every approval. The documentation is what makes the framework verifiable. Without it, you have governance in theory. With it, you have governance in practice.
Ongoing: Operate, measure, improve.
Run the framework. Use the DIE Progress Unit to measure maturity across each component. Identify where you’re stuck at “Document” and push to “Implement.” Identify where you’re at “Implement” and push to “Evaluate.”
Review and update policies quarterly. Review the AI inventory monthly. Report to the board on schedule. When incidents happen (and they will), use them to improve the framework.
Governance is a system, not a project. It doesn’t have an end date. The companies that treat it as a one-time initiative will be the ones explaining to regulators why their “governance framework” didn’t prevent the thing that just went wrong.
What Happens Without Governance
If the argument for governance sounds theoretical, these enforcement examples make it concrete.
Air Canada’s chatbot made refund promises the company didn’t authorize. The chatbot told a customer he could book a full-fare flight and get a retroactive bereavement discount. That wasn’t Air Canada’s policy. The company argued that the chatbot was a “separate legal entity” responsible for its own words. The tribunal rejected that argument completely. Air Canada was liable for its AI’s representations. The governance failure: no guardrails defining what the chatbot could and couldn’t promise, no monitoring catching the unauthorized representations, and no incident response when the problem surfaced.
Lawyers filed AI-generated briefs without verification. The sanctions record grows every month. Attorneys who used ChatGPT, Gemini, or other tools to draft court filings without checking whether the cited cases actually existed. The courts imposed sanctions ranging from fines to referrals for disciplinary proceedings. The governance failure: no acceptable use policy, no verification requirements, no quality controls. The Heppner privilege ruling adds another dimension: using AI for legal work can waive attorney-client privilege if the AI tool’s terms of service allow the vendor to access the content. That’s a governance failure that compounds. You’re not just getting the law wrong. You’re potentially destroying the privilege that protects the entire engagement.
The EU AI Act carries penalties up to €35 million or 7% of worldwide annual turnover. For prohibited AI practices, the maximum penalty is €35 million or 7% of global revenue, whichever is higher. For non-compliance with other obligations (including the logging requirements that take effect in August 2026), the penalty is up to €15 million or 3% of worldwide annual turnover. These aren’t theoretical. The EU enforces its technology regulations. GDPR fines have exceeded €4 billion since 2018. The AI Act will follow the same enforcement pattern.
The FTC is building AI enforcement precedent. The FTC forced Rite Aid to stop using AI-powered facial recognition after the system falsely flagged customers (disproportionately women and people of color) as potential shoplifters. The FTC’s order didn’t just stop the program. It required the company to delete the data and submit to 20 years of compliance monitoring. The FTC has stated publicly that it will pursue AI enforcement case by case, without publishing binding rules first. You find out the standard when you violate it.
State attorneys general are active. California is building a dedicated AI enforcement unit. The regulatory patchwork across states means that a single AI deployment can trigger obligations in multiple jurisdictions simultaneously. New York City has been enforcing Local Law 144 (automated employment decision tools) since 2023. Colorado’s AI Act targets algorithmic discrimination in consequential decisions. The enforcement surface is expanding faster than most companies’ governance programs.
The pattern across all of these cases is the same: organizations deployed AI without the governance infrastructure to manage it. The technology worked (mostly). The oversight didn’t exist. When something went wrong, there was no framework to detect it, respond to it, or demonstrate to regulators that the organization was trying.
The Alignment Problem Is a Governance Problem
There’s a deeper issue underneath all of this. The AI alignment problem is typically discussed in the context of superintelligent AI and existential risk. But alignment is a governance problem right now, at the level of the AI systems running in your company today.
Your AI systems are doing what they’re optimized to do. A chatbot optimized for engagement will say whatever keeps the conversation going, including things that aren’t true. A hiring tool optimized for pattern-matching will replicate the biases in its training data. A content generation tool optimized for fluency will produce confident, well-structured text regardless of whether the underlying claims are accurate.
Governance is how you align these systems with your organization’s actual objectives, values, and risk tolerance. Without governance, your AI systems are aligned with their training objectives. Those objectives may not match yours. The AI productivity paradox shows that companies adopting AI without governance structures often see less productivity gain than expected, because the AI is optimizing for the wrong things or creating problems that require human intervention to fix.
The companies that separate “AI hype from operational reality” are the ones with governance frameworks that force honest assessments of what their AI systems actually do, what risks they actually create, and what value they actually deliver.
Building Governance for the Regulatory Reality
The regulatory environment for AI is moving fast and in different directions simultaneously. The EU is implementing comprehensive regulation. The U.S. federal government has no binding AI law. States are passing conflicting requirements. Federal judges are adopting AI in their own work while courts impose sanctions on lawyers who use it poorly.
A governance framework built for today’s rules will be out of date in six months. That’s why adaptive governance (Stage 4 in the maturity model) is the target state. Your framework needs to absorb new requirements without being rebuilt from scratch.
Practically, that means:
Building your governance framework on principles rather than specific regulatory text. Principles like “we test AI systems for bias before deployment” survive regulatory changes. Specific compliance checklists tied to a single law’s requirements don’t.
Maintaining a regulatory watch function. Someone (or something) needs to track new AI legislation, enforcement actions, and guidance across every jurisdiction where you operate. The regulatory environment is too fragmented for a quarterly legal update to catch everything.
Designing your technical controls for extensibility. When a new regulation requires a new type of logging, your logging infrastructure should be able to accommodate it without a rebuild. When a new testing requirement emerges, your testing framework should support it.
Treating every enforcement action as intelligence. When the FTC settles with a company over AI practices, that settlement defines the FTC’s current expectations. When a state AG brings an AI enforcement case, the complaint reveals what that state considers a violation. These actions are the closest thing to published standards that exist in the U.S. regulatory environment.
Where to Start
You’ve read 4,000 words about AI governance frameworks. Here’s what to do with them.
If you have nothing, start with the 90-day roadmap in the “Building From Zero” section. The inventory comes first. You can’t govern what you can’t see.
If you have policies but no implementation, use the DIE Progress Unit to measure the gap between what you’ve documented and what you’re actually doing. Close the highest-risk gaps first.
If you have a working program but suspect it’s not mature enough, assess yourself against the four-stage maturity model. Be honest. Then build toward the next stage.
If you’re a board member or executive wondering whether your company’s AI governance is real or theater, ask three questions: What AI systems are we running right now? What would happen if one of them produced discriminatory outputs tomorrow? Can we demonstrate to a regulator that our governance framework is working, with evidence, not just documents? If your management team can’t answer those questions with specifics, your governance framework needs work.
AI governance is not a compliance exercise you finish. It’s an operating discipline you build and maintain. The companies that build it now, while the regulatory environment is still forming, will have a structural advantage over the ones that wait for a crisis to force the issue.
The regulatory window is closing. The enforcement apparatus is building. The question for every company deploying AI is not whether they need governance, but whether they’ll build it on their own timeline or on a regulator’s.
Don Ho is Founder & CEO of Kaizen AI Lab, where he builds AI governance and compliance programs for mid-market companies. He has been an entrepreneur for over two decades, advising companies across all industries as an attorney and AI legal consultant.