The EU AI Act's Logging Deadline Is Four Months Away. Most Companies Aren't Ready.
By Don Ho, Esq. | April 17, 2026
August 2, 2026. That is the date the EU AI Act’s mandatory logging requirements for high-risk AI systems take effect, and most companies deploying AI agents have not even started building the tamper-evident, automatically generated log infrastructure the regulation requires.
The penalty for missing this deadline: up to €15 million or 3% of worldwide annual turnover, whichever is higher. The regulation does not distinguish between a Fortune 500 company and a 40-person startup with a European customer base. If your AI system operates in a high-risk context and serves EU users, this applies to you.
The Commission proposed a potential delay through the Digital Omnibus package last November, possibly pushing enforcement to December 2027. Both the Council and Parliament adopted negotiating positions in March 2026 with trilogues underway. Nothing has passed into law. August 2026 remains the enforceable date. Planning around a delay that has not been enacted is gambling with eight-figure penalties.
What the Act Actually Requires
The EU AI Act is 144 pages long. It is also just one piece of the global AI regulatory patchwork: in the U.S., states are enforcing their own AI compliance frameworks, so companies deploying in both markets face compounding obligations. The logging requirements that matter for companies deploying AI sit across four articles that cross-reference each other in ways that seem designed to confuse anyone without a compliance team.
Here is what they actually say.
Article 12 requires high-risk AI systems to “technically allow for the automatic recording of events (logs) over the lifetime of the system.” Two words in that sentence matter more than the rest. “Automatic” means the system generates logs on its own. Manual documentation does not satisfy this requirement. “Lifetime” means from deployment to decommissioning, not just the current software release.
Article 12(2) defines three categories your logs must cover: situations where the system might present a risk or undergo a substantial modification, data for post-market monitoring, and data for operational monitoring by deployers. The regulation does not prescribe a format. It does not require specific fields. It requires those three purposes to be served.
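Because the Act prescribes purposes rather than fields, any concrete schema is a design choice. As a purely illustrative sketch (the field names are assumptions, not anything the regulation mandates), a single log entry might group its fields by the three Article 12(2) purposes:

```python
from dataclasses import dataclass

@dataclass
class AgentLogEntry:
    """One automatically recorded event. Groupings map to the three
    Article 12(2) purposes; all field names are illustrative."""
    entry_id: str
    timestamp: str            # ISO 8601, UTC
    system_version: str       # persists across releases: "lifetime" coverage

    # (a) Situations presenting risk or substantial modification
    risk_flags: list[str]     # e.g. ["confidence_below_threshold"]
    config_hash: str          # a changed hash signals a modified system

    # (b) Post-market monitoring
    model_id: str
    input_summary: str
    output_summary: str

    # (c) Operational monitoring by deployers
    deployer_id: str
    human_override: bool
```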
Article 13 requires documentation that tells deployers how to collect and interpret the logs. Think of it as a technical integration guide for your logging layer. If you build the logs but don’t tell your customers how to read them, you have failed this requirement.
Articles 19 and 26 set a six-month minimum retention period. Financial services firms can fold AI logs into the records they already keep under financial services law. Everyone else holds logs for at least six months, possibly longer depending on sector-specific rules.
Your AI Agent Is Probably High-Risk
The Act does not use the phrase “AI agent.” What matters is what the system does. If your AI agent scores credit applications, filters resumes, decides who gets healthcare benefits, prices insurance, or triages emergency calls, it falls under Annex III and is classified as high-risk.
Article 6(3) offers a theoretical exit. If the system does not materially influence decision outcomes, it may not qualify as high-risk. In practice, that argument is difficult to make for an agent that calls tools, chains actions, and produces outputs that humans act on. If your agent is doing anything more than returning search results, assume it’s high-risk until a regulator tells you otherwise.
General-purpose AI models have separate obligations under Chapter V. The model itself does not become high-risk. The system built on top of it does, once deployed in a high-risk context. The model provider keeps its Chapter V obligations. The company that integrates the model into a product picks up the high-risk provider obligations under Article 25. This is where most companies get caught. They assume the model provider (OpenAI, Anthropic, Google) handles compliance. The Act says the integrator is the one on the hook.
Why Your Current Logs Won’t Pass
Your AI agent calls tools, delegates to sub-agents, gets LLM responses, and produces a final answer. Standard application logging captures all of that. You probably already have request logs, response logs, error logs, and maybe even trace IDs.
The problem shows up six months later when a regulator asks you to prove the logs were not modified.
Application logs live on infrastructure someone controls. They sit in a database or a log aggregation service where they can be edited or replaced without anyone noticing. Article 12 does not explicitly say “tamper-proof.” But if your logs can be silently altered and you cannot demonstrate otherwise, their evidentiary value to a regulator is zero. For high-risk systems, that is a problem you cannot paper over with a compliance memo.
The practical solution involves cryptographic signing. Sign each agent action with a key the agent itself does not hold. Chain each signature to the previous one. Store the receipt where the agent cannot access it. If someone changes one entry, the chain breaks visibly. The specific cryptographic scheme matters less than the architecture: the signing key lives outside the agent’s trust boundary, every action gets a receipt, and the receipts form a verifiable chain.
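Here is a minimal sketch of that pattern in Python, using Ed25519 signatures from the `cryptography` package. It is an illustration of the architecture, not a production design: key issuance, rotation, and receipt storage are assumed to live in a separate service the agent cannot reach.

```python
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class ReceiptChain:
    """Hash-chained, signed audit log. Editing any entry changes its
    hash and visibly breaks every link after it."""

    def __init__(self, signing_key: Ed25519PrivateKey):
        self._key = signing_key          # held outside the agent's trust boundary
        self._prev_hash = b"\x00" * 32   # genesis marker

    def append(self, event: dict) -> dict:
        payload = json.dumps(event, sort_keys=True).encode()
        entry_hash = hashlib.sha256(self._prev_hash + payload).digest()
        receipt = {
            "event": event,
            "prev_hash": self._prev_hash.hex(),
            "entry_hash": entry_hash.hex(),
            "signature": self._key.sign(entry_hash).hex(),
        }
        self._prev_hash = entry_hash
        return receipt

# Usage: the agent reports events; receipts go to append-only storage
# the agent cannot write to.
chain = ReceiptChain(Ed25519PrivateKey.generate())
receipt = chain.append({"tool": "credit_score", "decision": "deny"})
```

Anyone holding the public key can later re-derive the hashes and check the signatures without trusting the infrastructure the logs lived on.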
No Standard Exists Yet
Here is the part that makes compliance teams nervous. There is no finalized technical standard for Article 12 logging. Two drafts are in progress: prEN 18229-1 covering AI logging and human oversight, and ISO/IEC DIS 24970 focused on AI system logging. Neither has been completed.
You are building to a regulation that defines outcomes without specifying implementation. The regulation says what your logs must accomplish, not how to build them. Companies that get logging right now will be ahead when the standards land. Companies that wait will be retrofitting under deadline pressure with a regulator watching.
The Integrator Problem
This is the structural issue that most companies are not thinking about. You built an AI product using OpenAI’s API or Anthropic’s Claude or Google’s Gemini. Your product makes decisions that affect people in the EU. Under the Act, you are the provider of a high-risk AI system. The model vendor is not going to handle your Article 12 compliance for you.
OpenAI’s API returns a response and logs what it logged. That is OpenAI’s Chapter V obligation. Your obligation under Article 25 is to log what your system did with that response, what tools it called, what decisions it made, and what outputs it delivered to users. Those are two different compliance surfaces.
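In code, the integrator’s surface looks roughly like this. It is a sketch under stated assumptions: `call_model` and `run_tools` are hypothetical stand-ins for your real model client and tool dispatch, and `log_event` would feed a tamper-evident sink rather than stdout.

```python
import json
import time
import uuid

def log_event(event: dict) -> None:
    # Stand-in for a tamper-evident sink such as the receipt chain above.
    print(json.dumps(event))

def handle_request(user_input: str, call_model, run_tools) -> str:
    """Record the integrator's side of one request: the response your
    system received, the tools it chose to call, and the output it shipped."""
    trace_id = str(uuid.uuid4())
    response = call_model(user_input)  # the provider logs its own side
    log_event({"trace": trace_id, "ts": time.time(),
               "type": "model_response", "chars": len(response)})
    for tool_name, result in run_tools(response):
        log_event({"trace": trace_id, "ts": time.time(),
                   "type": "tool_call", "tool": tool_name,
                   "result_chars": len(str(result))})
    log_event({"trace": trace_id, "ts": time.time(),
               "type": "final_output", "chars": len(response)})
    return response
```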
Companies that built their products assuming the model provider’s existing logging was sufficient are going to discover in August that they have a six-month retention requirement for data they never collected. Starting that data collection pipeline four months before the deadline is late. Starting it today is barely sufficient.
What to Do Now
Step 1: Classify your AI systems. Go through every AI-powered feature in your product. If it touches credit, employment, healthcare, insurance, education, or law enforcement decisions for EU users, it is probably high-risk under Annex III. Do not assume the model provider’s classification applies to your system.
Step 2: Audit your current logging against Article 12. Are your logs automatic? Do they cover the system’s lifetime? Do they address risk situations, post-market monitoring, and operational monitoring? If you are missing any of these, you have a gap. Working from an explicit checklist makes this audit systematic instead of ad hoc; a sketch of one follows.
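One way to make that audit concrete is to phrase each Article 12 obligation as a yes/no question and track the failures. The checklist below is an illustration distilled from the articles discussed above, not a regulatory standard:

```python
# Illustrative audit checklist for Article 12; not a legal standard.
ARTICLE_12_AUDIT = {
    "automatic":   "Are logs generated by the system itself, with no manual step?",
    "lifetime":    "Do logs persist from deployment to decommissioning, across releases?",
    "risk_events": "Are situations indicating risk or substantial modification recorded?",
    "post_market": "Is the data needed for post-market monitoring captured?",
    "operational": "Can deployers monitor day-to-day operation from the logs?",
}

def audit(answers: dict[str, bool]) -> list[str]:
    """Return the checks that fail; any non-empty result is a gap."""
    return [key for key in ARTICLE_12_AUDIT if not answers.get(key, False)]
```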
Step 3: Implement tamper-evidence. Sign your logs cryptographically. Store signatures outside the agent’s access boundary. Build a chain that breaks if entries are modified. This does not need to be complex. It needs to be auditable.
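Auditability is what the chain from the earlier sketch buys you: verification is a single pass over the receipts, and any edited, dropped, or reordered entry surfaces immediately. (Checking signatures against the public key is omitted here for brevity.)

```python
import hashlib
import json

def verify_chain(receipts: list[dict]) -> bool:
    """Walk receipts produced by the ReceiptChain sketch above and
    re-derive every hash; one altered entry breaks all later links."""
    prev = "00" * 32  # genesis marker, hex-encoded
    for r in receipts:
        payload = json.dumps(r["event"], sort_keys=True).encode()
        expected = hashlib.sha256(bytes.fromhex(prev) + payload).hexdigest()
        if r["prev_hash"] != prev or r["entry_hash"] != expected:
            return False
        prev = r["entry_hash"]
    return True
```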
Step 4: Build the deployer documentation. Article 13 requires you to tell your customers how to collect and interpret the logs your system generates. If you ship an AI product to an EU customer without this documentation, you have failed Article 13 regardless of how good your logs are.
Step 5: Set your retention policy. Six months minimum. Longer if your sector has additional requirements. If you are in financial services, insurance, or healthcare, your sector regulator probably already has retention rules that exceed the AI Act minimum.
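Encoding the floor is trivial; the hard part is knowing your sector’s actual figure. In this sketch the six-month minimum reflects Articles 19 and 26, while the financial-services override is a hypothetical placeholder you would replace with your own regulator’s number:

```python
from datetime import date, timedelta

AI_ACT_FLOOR = timedelta(days=183)  # at least six months (Articles 19 and 26)

# Hypothetical sector overrides; substitute your regulator's real figures.
SECTOR_MINIMUMS = {
    "financial_services": timedelta(days=5 * 365),
}

def earliest_deletion(log_created: date, sector: str = "default") -> date:
    """Logs may not be deleted before this date."""
    return log_created + max(AI_ACT_FLOOR, SECTOR_MINIMUMS.get(sector, AI_ACT_FLOOR))
```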
The EU AI Act is the most consequential AI regulation in the world. It applies to any company whose AI system serves EU users, regardless of where the company is headquartered. American companies that assumed this was a European problem are about to find out that their European customers, partners, and regulators disagree. And with Colorado’s AI anti-discrimination law already drawing constitutional challenges, the compliance burden is only going to get heavier on both sides of the Atlantic.
Four months is not a lot of time to build a compliant logging infrastructure from scratch. But it is enough time if you start this week. If you wait until June to begin, you are going to be explaining to your board why you missed an enforceable deadline with eight-figure penalties.
August 2 doesn’t care about your roadmap. Kaizen AI Lab builds compliant logging infrastructure for AI systems that need to pass regulator scrutiny, not just internal audits. Talk to us before the deadline talks to you.