EU AI Act Compliance Guide: Requirements, Deadlines & Penalties

The EU AI Act is the world's first comprehensive legislation regulating artificial intelligence. It entered into force on August 1, 2024, and its most consequential provisions — the requirements for high-risk AI systems — take effect on August 2, 2026. That is five months from today.

If your organization develops or deploys AI systems that operate in the European Union, this guide covers what you need to know: how the risk classification system works, which systems qualify as high-risk, what the requirements actually demand in practice, and what the penalties look like if you get it wrong.

This is not a surface-level overview. We will walk through the specific articles, explain the derogation that can exempt your system from high-risk classification, and cover the implementation requirements in enough detail for you to start planning your compliance architecture.


What Is the EU AI Act?

The EU AI Act (Regulation (EU) 2024/1689) is a horizontal regulation that applies across all sectors and industries. Unlike sector-specific regulations such as the FDA's clinical decision support guidance or NYC's Local Law 144 for hiring tools, the AI Act covers every AI system placed on the EU market or whose output is used in the EU — regardless of where the provider is based.

The regulation takes a risk-based approach. Rather than regulating all AI systems equally, it classifies systems into risk tiers and applies requirements proportional to the potential harm. An AI system that recommends movies faces no meaningful obligations. An AI system that screens job applicants faces extensive requirements for risk management, data governance, transparency, and human oversight.

Three foundational concepts shape the entire regulation:

  1. Provider vs. Deployer distinction. The "provider" is the entity that develops or places the AI system on the market. The "deployer" is the entity that uses it. Both have obligations, but the provider bears the heavier burden — including conformity assessment, technical documentation, and post-market monitoring.

  2. Extraterritorial scope. Like GDPR, the AI Act applies to providers outside the EU if their systems are placed on the EU market, and to deployers outside the EU if their system's output is used within the EU (Article 2). If your AI system makes decisions about EU residents, the Act likely applies to you.

  3. Risk proportionality. The Act does not ban AI. It creates escalating obligations tied to risk levels. For most AI systems, the obligations are minimal or nonexistent. The real compliance burden falls on systems classified as high-risk.


Timeline: Phased Enforcement

The EU AI Act does not arrive all at once. It follows a phased enforcement timeline, with different provisions taking effect on different dates:

| Date | What Takes Effect |
|------|-------------------|
| Aug 1, 2024 | AI Act enters into force |
| Feb 2, 2025 | Prohibited AI practices banned; AI literacy obligations begin |
| Aug 2, 2025 | Governance rules and obligations for general-purpose AI (GPAI) models apply |
| Aug 2, 2026 | High-risk AI system requirements (Annex III) become enforceable |
| Aug 2, 2027 | High-risk AI systems embedded in regulated products (Annex I) — extended transition |

The February 2025 milestone has already passed. Organizations using prohibited AI practices — social scoring, real-time biometric identification in public spaces (with narrow exceptions), emotion recognition in workplaces and schools, and manipulation of vulnerable groups — are already in violation.

The August 2025 milestone is also behind us. Providers of general-purpose AI models (think foundation models like GPT-4, Claude, Gemini) must now comply with transparency obligations, provide technical documentation, and implement copyright compliance policies.

The critical date for most organizations is August 2, 2026. This is when the full requirements for high-risk AI systems in Annex III categories — employment, education, credit, healthcare, law enforcement, and more — become enforceable. If your AI system falls into one of these categories and does not qualify for the Article 6(3) derogation, you have five months to comply.


Risk Classification: Four Tiers

The AI Act organizes AI systems into four risk tiers. Understanding which tier your system falls into determines your entire compliance obligation.

Tier 1: Unacceptable Risk (Prohibited)

These AI practices are banned outright, with no compliance path — they simply cannot be deployed in the EU. Since February 2, 2025, the following are prohibited:

  • Social scoring by public authorities or on their behalf
  • Subliminal, manipulative, or deceptive techniques that distort behavior and cause significant harm
  • Exploitation of vulnerabilities (age, disability, social or economic situation) to distort behavior
  • Real-time remote biometric identification in publicly accessible spaces for law enforcement (with narrow exceptions for missing children, imminent threats, and serious crime)
  • Emotion recognition in workplaces and educational institutions
  • Untargeted scraping of facial images from the internet or CCTV to build facial recognition databases
  • Biometric categorization based on sensitive attributes (race, political opinions, sexual orientation)
  • Individual predictive policing based solely on profiling or personality traits

Tier 2: High-Risk

This is where the real compliance burden lives. High-risk AI systems are subject to extensive requirements covering risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy, robustness, and cybersecurity.

A system can be classified as high-risk through one of two paths:

  1. Annex I path: The AI system is a safety component of a product already covered by EU harmonization legislation (medical devices, vehicles, machinery, toys, lifts, etc.), or the AI system is itself such a product.

  2. Annex III path: The AI system falls into one of eight domain categories listed in Annex III (detailed in the next section).

Tier 3: Limited Risk

Systems with limited risk face transparency obligations only. The primary requirement: users must be informed they are interacting with an AI system. This applies to:

  • Chatbots and conversational AI (users must know they are talking to a machine)
  • Deepfakes and AI-generated content (must be labeled as artificially generated or manipulated)
  • Emotion recognition systems (where not prohibited — users must be informed)

Tier 4: Minimal Risk

The vast majority of AI systems — spam filters, AI-enabled video games, inventory management, recommendation engines for non-consequential decisions — fall here. No specific obligations apply beyond existing law. The Act explicitly encourages voluntary codes of conduct for these systems, but compliance is not mandatory.


Annex III: The Eight High-Risk Categories

Annex III is the list that determines whether your AI system is high-risk under the domain-based classification path. If your system falls into any of these eight categories, it is presumed high-risk unless the Article 6(3) derogation applies.

1. Biometrics

AI systems intended for:

  • Remote biometric identification (not including verification — unlocking your phone is not covered)
  • Biometric categorization based on sensitive or protected attributes
  • Emotion recognition (in contexts where not outright prohibited)

Practical impact: Facial recognition for access control, identity verification systems that go beyond 1:1 matching, and any system that infers emotional states from biometric data.

2. Critical Infrastructure

AI systems used as safety components in the management and operation of:

  • Digital infrastructure
  • Road traffic
  • Supply of water, gas, heating, and electricity

Practical impact: AI systems managing power grid load balancing, traffic signal optimization, water treatment monitoring, or network infrastructure routing — but only when they serve as safety components, not general optimization tools.

3. Education and Vocational Training

AI systems used to:

  • Determine access to or admission into educational institutions
  • Evaluate learning outcomes, including systems that influence the level of education a person can receive
  • Monitor prohibited behavior of students during tests (proctoring)

Practical impact: Automated admissions scoring, AI-powered grading systems, exam proctoring software that flags suspicious behavior, and systems that determine student placement or tracking.

4. Employment, Workers Management, and Access to Self-Employment

AI systems used for:

  • Recruitment and selection — particularly screening, filtering, or evaluating candidates
  • Decisions on promotion, termination, or task allocation
  • Monitoring and evaluation of worker performance and behavior

Practical impact: Resume screening tools, AI interview analysis, performance evaluation systems, workforce scheduling optimization that affects individual workers, and productivity monitoring tools. This is one of the broadest categories and affects nearly every HR technology vendor.

5. Access to and Enjoyment of Essential Private Services and Public Services

AI systems used to:

  • Evaluate creditworthiness or establish credit scores (except for detecting financial fraud)
  • Assess risk and set pricing for life and health insurance
  • Evaluate and classify emergency calls (dispatch prioritization)
  • Assess eligibility for public assistance benefits, including granting, reducing, revoking, or reclaiming benefits
  • Evaluate applications for asylum, visas, and residence permits

Practical impact: Credit scoring models, insurance underwriting AI, emergency 911/112 call triage, benefits eligibility determination, and immigration processing systems.

6. Law Enforcement

AI systems used for:

  • Assessing the risk of a person becoming a victim of criminal offenses
  • Polygraphs and similar tools during interrogation
  • Evaluating the reliability of evidence
  • Assessing the risk of offending or reoffending (recidivism)
  • Profiling in the course of crime detection or investigation

Practical impact: Predictive policing tools, criminal risk assessment instruments, evidence analysis systems, and any AI used in criminal investigation profiling.

7. Migration, Asylum, and Border Control Management

AI systems used for:

  • Polygraphs and similar tools
  • Assessing irregular migration risk
  • Examining applications for asylum, visa, and residence permits and associated complaints
  • Detecting, recognizing, or identifying persons (except document verification)

Practical impact: Border screening AI, asylum application processing, and migration pattern analysis systems.

8. Administration of Justice and Democratic Processes

AI systems used to:

  • Assist judicial authorities in researching and interpreting facts and the law and applying the law to concrete facts
  • Influence the outcome of elections or referendums, or the voting behavior of persons (not including organizational or logistical tools)

Practical impact: Legal research AI used by judges, AI-generated case summaries for judicial decision-making, and any system that could influence democratic processes.


The Article 6(3) Derogation: When High-Risk AI Is Not High-Risk

This is one of the most important provisions in the AI Act for organizations building AI in Annex III domains, and it is widely misunderstood. Article 6(3) creates a derogation — an exemption — from high-risk classification for Annex III systems that do not pose a significant risk of harm.

The key test: the AI system must not pose "a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making."

A system qualifies for the derogation if it meets any one of four conditions:

Condition 1: Narrow Procedural Task

The AI system performs a narrow procedural task such as transforming unstructured data into structured data, classifying incoming documents into categories, or detecting duplicates among a large number of applications. The task must not materially influence the outcome of a subsequent decision.

Example: An AI system that converts handwritten insurance claim forms into structured database entries. It is processing data, not evaluating claims.

Condition 2: Improving a Prior Human Activity

The AI system improves the result of a previously completed human activity. The human has already done the substantive work; the AI enhances it without replacing the judgment involved.

Example: A grammar-checking or style-polishing tool applied to a legal brief that a lawyer has already written. The AI improves the result but did not generate the substantive analysis.

Condition 3: Pattern Detection Without Replacing Human Assessment

The AI system detects decision-making patterns or deviations from prior decision-making patterns. It is intended to flag potential inconsistencies or anomalies, not to replace or influence the human assessment of those patterns without proper human review.

Example: An AI system that analyzes a loan officer's historical decisions and flags cases where the current decision deviates significantly from past patterns. The flagging is informational; the loan officer still makes the decision independently.

Condition 4: Preparatory Task

The AI system performs a task that is preparatory to an assessment relevant for the purposes of the use cases listed in Annex III. It gathers, organizes, or summarizes information before a human evaluates it.

Example: An AI system that aggregates a job candidate's publicly available information into a summary dossier for a human recruiter to review. The AI does not score, rank, or recommend — it prepares information.

The Process for Claiming the Derogation

The derogation is not automatic. The provider must:

  1. Document the assessment explaining why the system meets at least one of the four conditions
  2. Register the system in the EU database per Article 49(2), noting the derogation claim
  3. Maintain the documentation and make it available to national authorities upon request
  4. Reassess if the system's functionality changes in a way that could affect the classification

The burden of proof is on the provider. If a national competent authority challenges the classification, the provider must demonstrate that the conditions were met.

The Profiling Exception: When the Derogation Cannot Apply

This is the single most critical caveat in Article 6(3), and missing it can be catastrophic.

The derogation never applies to AI systems that perform profiling of natural persons.

Profiling, as defined in GDPR Article 4(4), means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person — in particular to analyze or predict aspects concerning that person's performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.

If your AI system evaluates personal characteristics of individuals — even as a preparatory or pattern-detection step — the profiling exception kills the derogation. The system is high-risk, full stop.

Examples where the derogation dies:

  • A credit pre-screening tool that scores individuals based on financial behavior patterns (profiling, even if a human makes the final decision)
  • An employee performance analysis system that categorizes workers by predicted attrition risk (profiling of personal aspects)
  • A healthcare triage tool that risk-scores patients based on demographic and clinical data (profiling of health aspects)

This exception is absolute. There is no materiality threshold, no de minimis carve-out. If the system profiles natural persons within an Annex III domain, it is high-risk regardless of how minimal its influence on the final decision might be.
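
To make the classification flow concrete, here is a minimal sketch in Python of the decision logic described above. Every name in it (the condition labels, the dataclass fields) is our own illustration; the regulation defines the conditions in prose, not code.

```python
from dataclasses import dataclass

# The four Article 6(3) conditions, as summarized above.
# Labels are illustrative, not taken from the regulation's text.
DEROGATION_CONDITIONS = (
    "narrow_procedural_task",
    "improves_prior_human_activity",
    "pattern_detection_without_replacing_human_assessment",
    "preparatory_task",
)

@dataclass
class AnnexIIISystem:
    name: str
    annex_iii_category: str          # e.g. "employment"
    profiles_natural_persons: bool   # profiling per GDPR Art. 4(4)
    conditions_met: set[str]         # subset of DEROGATION_CONDITIONS

def is_high_risk(system: AnnexIIISystem) -> bool:
    """Sketch of the Annex III classification flow described above."""
    # The profiling exception is absolute: profiling within an Annex III
    # domain means high-risk, regardless of any condition being met.
    if system.profiles_natural_persons:
        return True
    # Otherwise, meeting any ONE of the four conditions supports a
    # derogation claim, which must still be documented and registered.
    return not (system.conditions_met & set(DEROGATION_CONDITIONS))

resume_parser = AnnexIIISystem(
    name="resume-to-structured-data parser",
    annex_iii_category="employment",
    profiles_natural_persons=False,
    conditions_met={"narrow_procedural_task"},
)
print(is_high_risk(resume_parser))  # False: derogation may be claimed
```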

Commission Guidelines: Overdue and Creating Uncertainty

Article 6(5) required the European Commission to publish guidelines by February 2, 2026, providing practical examples of AI systems that are high-risk and not high-risk under the derogation framework. Those guidelines have not been published.

The Commission missed the deadline. Reports indicate the Commission is integrating stakeholder feedback and plans to publish a draft for further consultation, but as of March 2026, organizations are classifying their systems without the benefit of official examples.

This creates real uncertainty. The four conditions in Article 6(3) are principles-based, and reasonable people can disagree about whether a specific system meets them. Until the guidelines are published with concrete examples, providers must document their reasoning carefully and be prepared to defend their classification to national authorities.

The practical implication: err on the side of caution. If there is genuine ambiguity about whether your system qualifies for the derogation, the safer path is to comply with the high-risk requirements. The penalties for misclassification are severe, and the burden of proof rests entirely on the provider.


High-Risk Requirements: What the Law Actually Demands

If your AI system is classified as high-risk — either because it does not qualify for the Article 6(3) derogation or because it is a safety component of a regulated product — the following requirements apply. These are the obligations that take effect on August 2, 2026 for Annex III systems.

Article 9: Risk Management System

Providers must establish, implement, document, and maintain a continuous, iterative risk management process throughout the entire lifecycle of the AI system. This is not a one-time assessment — it must be regularly reviewed and updated.

The risk management system must:

  • Identify and analyze known and reasonably foreseeable risks to health, safety, and fundamental rights
  • Estimate and evaluate risks that may emerge when the system is used as intended and under conditions of reasonably foreseeable misuse
  • Evaluate risks based on post-market monitoring data — once the system is deployed, ongoing risk evaluation is required
  • Adopt appropriate and targeted risk management measures addressing identified risks through design, mitigation controls, or information and training for deployers

Risk management measures must give due consideration to the effects and possible interactions resulting from the combined application of the requirements in this section. They must ensure that the overall residual risk is judged acceptable.

Practical implementation: This maps closely to ISO 14971 (medical device risk management) and ISO 42001 Annex A risk controls. Organizations with existing risk management frameworks can extend them rather than building from scratch.

Article 10: Data and Data Governance

Training, validation, and testing datasets must meet specific quality requirements:

  • Relevant, sufficiently representative, and as free of errors as possible in view of the intended purpose
  • Appropriate statistical properties, including with regard to the persons or groups on whom the system is intended to be used
  • Data governance and management practices covering design choices, data collection, data preparation, labeling, cleaning, enrichment, and aggregation
  • Bias examination and mitigation — specifically, examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights, or lead to discrimination

For systems that use techniques involving training with data, the datasets must also undergo validation and testing procedures, including as regards the data quality criteria and the performance metrics.

Practical implementation: This requires documented data lineage, bias testing reports, and representative sampling analysis. If your training data underrepresents a protected group that the system will serve, Article 10 creates an obligation to identify and address that gap.
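
As a rough illustration of what a representativeness check can look like, the sketch below compares subgroup shares in a training set against reference population shares and flags large gaps. The tolerance value and group labels are arbitrary assumptions; Article 10 does not prescribe any particular metric.

```python
from collections import Counter

def representation_gaps(train_groups, reference_shares, tolerance=0.05):
    """Flag groups whose share of the training data deviates from a
    reference population share by more than `tolerance`.

    `train_groups` is a list of group labels, one per training record;
    `reference_shares` maps group label -> expected share (0..1).
    Both the tolerance and the notion of a "reference population" are
    illustrative choices, not requirements of Article 10.
    """
    counts = Counter(train_groups)
    total = len(train_groups)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3),
                           "expected": expected}
    return gaps

# Hypothetical example: training data over-represents group A.
labels = ["A"] * 800 + ["B"] * 150 + ["C"] * 50
print(representation_gaps(labels, {"A": 0.6, "B": 0.25, "C": 0.15}))
# All three groups are flagged -> document the gap and address it
```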

Articles 11-12: Technical Documentation and Record-Keeping

Article 11 requires providers to draw up technical documentation before the system is placed on the market or put into service and to keep it up to date. The documentation must demonstrate that the system complies with the requirements and provide national authorities with the information necessary to assess compliance.

The technical documentation must include, at minimum:

  • A general description of the AI system (intended purpose, versions, hardware/software dependencies)
  • A detailed description of the elements of the system and its development process
  • Monitoring, functioning, and control information
  • A description of the risk management system
  • A description of changes made throughout the lifecycle

Article 12 requires that high-risk AI systems be designed to allow for the automatic recording of events (logs) over the system's lifetime. For remote biometric identification systems (Annex III point 1(a)), logging capabilities must include at minimum:

  • Recording the period of each use of the system (start and end date/time)
  • The reference database against which input data is checked
  • The input data for which the search leads to a match
  • The identification of natural persons involved in verification of results

Logs must be kept for a period appropriate to the intended purpose of the high-risk AI system — at least six months unless otherwise provided by applicable law. Deployers of high-risk AI systems have an obligation to retain the logs generated automatically by the system.

Practical implementation: This is where many organizations will face the largest gap. Logging requirements demand append-only, tamper-evident audit trails. Retrofitting existing AI systems with comprehensive event logging is a significant engineering effort. The six-month minimum retention period applies to providers (Article 19) and deployers (Article 26) alike, and longer retention may be required by the system's intended purpose or other applicable law.
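
One common way to achieve append-only, tamper-evident logging is a hash chain: each entry commits to the hash of the previous entry, so any retroactive edit breaks every later link. The sketch below illustrates that pattern under field names of our own choosing; it is not a reference implementation of Article 12.

```python
import hashlib
import json
import time

def append_event(log: list[dict], event: dict) -> None:
    """Append `event` with a hash linking it to the previous entry."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "timestamp": time.time(),  # when the event occurred
        "event": event,
        "prev_hash": prev_hash,    # commitment to the prior entry
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any in-place edit breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev_hash"] != prev_hash:
            return False
        payload = {k: v for k, v in entry.items() if k != "entry_hash"}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        if digest != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True

log: list[dict] = []
append_event(log, {"type": "session_start", "system": "screening-v2"})
append_event(log, {"type": "output", "match": True, "reviewer": "j.doe"})
print(verify_chain(log))           # True
log[1]["event"]["match"] = False   # simulate retroactive tampering
print(verify_chain(log))           # False: the edit is detectable
```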

Article 13: Transparency

High-risk AI systems must be designed and developed in a way that ensures their operation is sufficiently transparent to enable deployers to interpret the system's output and use it appropriately.

The system must be accompanied by instructions for use that include:

  • The identity and contact details of the provider
  • The characteristics, capabilities, and limitations of performance (intended purpose, accuracy levels, known risks)
  • Human oversight measures, including technical measures to facilitate interpretation
  • The expected lifetime of the system and maintenance measures
  • The computational and hardware resources needed
  • Where applicable, a description of the mechanisms for logging

Practical implementation: This goes beyond a README. The instructions for use must be comprehensive enough for a deployer — who may have limited AI expertise — to understand the system's capabilities, limitations, and failure modes. Think product safety documentation, not developer docs.

Article 14: Human Oversight

This is one of the most architecturally significant requirements. High-risk AI systems must be designed to allow effective oversight by natural persons during the period they are in use. Human oversight must aim to prevent or minimize risks to health, safety, or fundamental rights.

Article 14 specifies five capabilities that human overseers must be able to exercise:

  1. Understand the system's capabilities and limitations — the overseer must be able to properly understand the relevant capabilities and limitations of the AI system and monitor its operation, including in view of detecting and addressing anomalies, dysfunctions, and unexpected performance
  2. Remain aware of automation bias — the overseer must understand the possible tendency to automatically rely on or over-rely on the output produced by the system (automation bias), particularly for systems used to provide information or recommendations for decisions
  3. Correctly interpret the system's output — the overseer must be able to correctly interpret the output, taking into account the characteristics of the system and the interpretation tools and methods available
  4. Decide not to use the system or disregard its output — the overseer must be able to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override, or reverse the output of the system
  5. Intervene or interrupt the system — the overseer must be able to intervene in the operation of the system or interrupt it through a "stop" button or similar procedure that allows the system to come to a halt in a safe state

For certain high-risk AI systems — particularly those in biometric identification — any action or decision taken on the basis of the system's identification must be verified and confirmed by at least two natural persons.

Practical implementation: These five requirements map directly to what we call advisory-only architecture. A system where AI outputs flow to a review queue, where a trained human reviewer can see the AI's reasoning, where the reviewer can override or reject the AI's output, and where the system can be interrupted — that system satisfies Article 14 by design. The architecture itself is the compliance control.

This is one of the strongest arguments for building AI systems as advisory tools with human-in-the-loop review rather than autonomous decision-makers. It is not merely good practice — it is a legal requirement for high-risk systems.
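
To illustrate how those five capabilities can be properties of the architecture itself, here is a deliberately simplified sketch of an advisory-only review queue. All class and field names are invented; a real system would add authentication, persistence, and the tamper-evident logging discussed under Article 12.

```python
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"      # human adopts the AI recommendation
    OVERRIDE = "override"  # human substitutes their own outcome (capability 4)
    REJECT = "reject"      # human disregards the output entirely (capability 4)

@dataclass
class Recommendation:
    subject_id: str
    ai_output: str
    rationale: str          # supports correct interpretation (capability 3)
    known_limitations: str  # supports capability/limitation awareness (capability 1)

@dataclass
class ReviewQueue:
    pending: list[Recommendation] = field(default_factory=list)
    halted: bool = False    # the "stop button" (capability 5)

    def submit(self, rec: Recommendation) -> None:
        if self.halted:
            raise RuntimeError("system halted: no new AI output accepted")
        self.pending.append(rec)  # AI output never acts directly

    def review(self, rec: Recommendation, decision: Decision,
               final_outcome: str, reviewer: str) -> dict:
        # The human decision is the only path to an effect on the subject.
        self.pending.remove(rec)
        return {"subject": rec.subject_id, "decision": decision.value,
                "outcome": final_outcome, "reviewer": reviewer}

queue = ReviewQueue()
queue.submit(Recommendation("cand-42", "advance to interview",
                            rationale="skills match 8/10 requirements",
                            known_limitations="weak on non-EU degree formats"))
rec = queue.pending[0]
print(queue.review(rec, Decision.OVERRIDE, "hold for manual screen", "a.kim"))
```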

Article 15: Accuracy, Robustness, and Cybersecurity

High-risk AI systems must be designed to achieve an appropriate level of:

  • Accuracy — including, where appropriate, accuracy metrics declared in the accompanying instructions of use
  • Robustness — resilient to errors, faults, or inconsistencies within the system or its environment, including through technical redundancy solutions such as backup or fail-safe plans
  • Cybersecurity — resilient against unauthorized third-party attempts to alter use, outputs, or performance by exploiting system vulnerabilities, including AI-specific vulnerabilities such as data poisoning, adversarial examples, model inversion, and model extraction

Practical implementation: This requires documented accuracy benchmarks (tested against representative data), adversarial robustness testing, and a cybersecurity program that specifically addresses AI-specific attack vectors. Standard application security is necessary but not sufficient.
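
One way to operationalize the declared-accuracy obligation is to treat the metrics published in the instructions for use as a machine-checked release gate: measured performance on a representative test set is compared against the declared floor before each release. The sketch below illustrates the idea with invented metric names and thresholds.

```python
# Declared in the instructions for use (Article 13); values are invented.
DECLARED_METRICS = {
    "accuracy": 0.90,            # minimum overall accuracy
    "false_positive_rate": 0.05  # maximum acceptable FPR
}

def release_gate(measured: dict) -> list[str]:
    """Return a list of violations of the declared performance floor."""
    violations = []
    if measured["accuracy"] < DECLARED_METRICS["accuracy"]:
        violations.append(f"accuracy {measured['accuracy']:.3f} "
                          f"< declared {DECLARED_METRICS['accuracy']}")
    if measured["false_positive_rate"] > DECLARED_METRICS["false_positive_rate"]:
        violations.append(f"FPR {measured['false_positive_rate']:.3f} "
                          f"> declared {DECLARED_METRICS['false_positive_rate']}")
    return violations

print(release_gate({"accuracy": 0.88, "false_positive_rate": 0.04}))
# ['accuracy 0.880 < declared 0.9'] -> block the release
```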

Article 17: Quality Management System

Providers must establish and maintain a quality management system documented in the form of written policies, procedures, and instructions. The QMS must include:

  • A strategy for regulatory compliance
  • Techniques, procedures, and systematic actions for design, development, quality control, and quality assurance
  • Examination, test, and validation procedures before, during, and after development
  • Technical specifications, including standards to be applied
  • Systems and procedures for data management (collection, analysis, labeling, storage, filtering, mining, aggregation, retention, and more)
  • A risk management system per Article 9
  • A post-market monitoring system per Article 72
  • Procedures for reporting serious incidents per Article 73
  • Communication management with national competent authorities and notified bodies
  • Systems and procedures for record-keeping
  • Resource management, including supply chain-related measures
  • An accountability framework

Practical implementation: Organizations already certified to ISO 9001 or ISO 13485 will recognize this structure. ISO 42001 (AI management systems) provides the most direct mapping to these requirements and is increasingly referenced as a framework for demonstrating compliance. While ISO 42001 certification is not a formal safe harbor under the AI Act, it provides strong evidence of a systematic approach to meeting these obligations.

Article 26: Deployer Obligations

Deployers — the organizations that use high-risk AI systems rather than building them — also have specific obligations:

  • Implement appropriate technical and organizational measures to ensure use in accordance with the instructions
  • Assign human oversight to natural persons with the necessary competence, training, and authority
  • Ensure input data is relevant and sufficiently representative in view of the intended purpose
  • Monitor the system's operation on the basis of the instructions of use and inform the provider or distributor of serious incidents
  • Retain logs automatically generated by the system for at least six months
  • Conduct a Fundamental Rights Impact Assessment (FRIA) before putting the system into use, for certain deployer categories
  • Inform natural persons that they are subject to the use of a high-risk AI system (unless obvious from the circumstances)
  • Use information from the provider to comply with the deployer's own obligation to carry out a data protection impact assessment under GDPR

Practical implementation: Deployers cannot simply purchase a high-risk AI system and assume the provider handles all compliance. The deployer must actively monitor the system, retain logs, ensure competent human oversight, and — critically — conduct a Fundamental Rights Impact Assessment. This FRIA obligation applies to deployers that are public bodies, private entities providing public services, and deployers of systems in credit scoring, life and health insurance pricing, and certain other categories.

Article 43: Conformity Assessment

Before placing a high-risk AI system on the market, the provider must submit it to a conformity assessment procedure. The specific procedure depends on the system type:

  • For Annex III systems (categories 2-8): Providers follow the conformity assessment based on internal control (Annex VI). This does not require a notified body — the provider self-assesses compliance and maintains documentation.
  • For Annex III biometric identification systems (category 1): Providers must choose between internal control and assessment involving a notified body.
  • For Annex I systems (safety components of regulated products): The conformity assessment follows the procedures under the relevant harmonization legislation (e.g., Medical Device Regulation).

After successful conformity assessment, the provider draws up an EU Declaration of Conformity and affixes the CE marking to the system. The system must also be registered in the EU database per Article 49.

Critical note: Conformity assessment must be repeated when a substantial modification is made to the system. This includes changes to the intended purpose, modifications to training data that materially affect accuracy, or changes to the system architecture. Continuous learning systems that update their parameters in deployment may trigger reassessment obligations.


Penalties: The Cost of Non-Compliance

The AI Act establishes a tiered penalty structure that scales with the severity of the violation. For large companies, the potential fines are of the same order of magnitude as GDPR's.

| Violation Type | Maximum Fine | Revenue-Based Cap |
|----------------|--------------|-------------------|
| Prohibited AI practices (Article 5) | EUR 35,000,000 | 7% of total worldwide annual turnover |
| High-risk system requirements (Articles 9-17, 26, 43, etc.) | EUR 15,000,000 | 3% of total worldwide annual turnover |
| Supplying incorrect information to authorities | EUR 7,500,000 | 1% of total worldwide annual turnover |

The fine is whichever is higher — the fixed amount or the revenue-based cap. For a company with EUR 10 billion in annual revenue, a prohibited practices violation could reach EUR 700 million.

SME and Startup Reduction

The Act provides a meaningful concession for small and medium-sized enterprises, including startups. For SMEs, each fine is capped at whichever of the two amounts (the fixed sum or the revenue percentage) is lower, not higher. This inverts the standard formula: a startup with EUR 5 million in annual revenue would face the lower of EUR 35 million and 7% of revenue (EUR 350,000), so its penalty cap is EUR 350,000 rather than EUR 35 million.
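
The two cap rules reduce to simple arithmetic: the higher of the two amounts for standard enterprises, the lower for SMEs. A worked sketch (the function name and structure are our own):

```python
def fine_cap(fixed_eur: float, pct: float, annual_revenue_eur: float,
             is_sme: bool) -> float:
    """Maximum fine for a given tier: higher of the two amounts for
    standard enterprises, lower of the two for SMEs and startups."""
    revenue_based = pct * annual_revenue_eur
    if is_sme:
        return min(fixed_eur, revenue_based)
    return max(fixed_eur, revenue_based)

# Prohibited-practices tier: EUR 35M or 7% of worldwide annual turnover.
print(fine_cap(35e6, 0.07, 10e9, is_sme=False))  # 700000000.0
print(fine_cap(35e6, 0.07, 5e6, is_sme=True))    # 350000.0
```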

Enforcement Mechanisms

Enforcement is decentralized to national competent authorities designated by each EU member state. In addition:

  • The European AI Office oversees GPAI model enforcement and coordinates cross-border cases
  • National authorities have powers to request access to documentation, conduct market surveillance, and order withdrawal of non-compliant systems from the market
  • Complainants can submit complaints to national authorities, who must investigate
  • There is no private right of action under the AI Act itself, but national authorities can refer cases to existing enforcement bodies

Beyond Fines: Market Access

The penalty that matters most may not be the fine. Non-compliant high-risk AI systems cannot be placed on the EU market — they cannot receive CE marking, and deployers within the EU should not use them. For global AI companies, losing access to the EU market may be a more significant consequence than any fine.


EU AI Act vs. CPRA ADMT: Key Differences

For organizations navigating both EU and US compliance, understanding the structural differences between the EU AI Act and California's CPRA ADMT regulations is essential.

| Dimension | EU AI Act | CPRA ADMT (California) |
|-----------|-----------|------------------------|
| Scope | All AI systems (risk-based) | Technology that "replaces or substantially replaces" human decision-making for significant decisions |
| Classification | Four risk tiers; high-risk defined by domain (Annex III) and function | Binary: either ADMT or not, based on degree of human involvement |
| Escape mechanism | Article 6(3) derogation (four conditions, provider burden of proof) | Three-prong test for human involvement (know, analyze, authority) |
| Profiling | Always high-risk, no derogation | Covered as a form of ADMT; risk assessment required |
| Human oversight | Article 14 — five specific requirements for system design | Three-prong test — focused on decision-maker capabilities, not system design |
| Penalties | Up to EUR 35M or 7% global revenue | $7,500 per violation, no cap |
| Consent model | No consumer opt-out for high-risk; regulatory compliance required regardless | Consumer opt-out rights for ADMT processing |
| Enforcement timeline | Aug 2, 2026 (high-risk Annex III) | Jan 1, 2026 (risk assessments); Apr 1, 2027 (significant decision obligations) |
| Documentation | Technical documentation, conformity assessment, CE marking, EU database registration | Risk assessments submitted to CPPA; pre-use notices |

The most significant architectural difference: the EU AI Act regulates the system itself (design requirements, technical standards, conformity assessment), while CPRA ADMT regulates the decision process (how decisions are made, whether humans are meaningfully involved, and what rights consumers have).

An advisory-only architecture — where AI generates recommendations that trained humans review before any decision is made — is a strong compliance strategy under both frameworks. Under CPRA, it can remove the system from ADMT classification entirely. Under the EU AI Act, it satisfies Article 14 human oversight requirements and may support an Article 6(3) derogation claim.


How to Prepare: A Practical Roadmap

With the August 2, 2026 deadline five months away, here is a structured approach to EU AI Act readiness for high-risk systems.

Step 1: System Inventory and Classification

Before you can comply, you must know what you have. Conduct a comprehensive inventory of all AI systems your organization develops or deploys, and classify each one:

  • Does it fall within an Annex III category? Map each system against the eight domain categories.
  • Does the Article 6(3) derogation apply? For each Annex III system, evaluate whether it meets one of the four conditions. Document the reasoning in detail. Remember: if the system profiles natural persons, the derogation cannot apply.
  • Is it a safety component of a regulated product? Check against Annex I harmonization legislation.
  • What is your role? Are you the provider (developer) or the deployer (user)? Both have obligations, but they differ.
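
A minimal inventory record might capture exactly these four questions so that classification follows mechanically from the answers. The sketch below is one possible shape, with field names of our own invention:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One row of the Step 1 inventory. All field names are illustrative."""
    name: str
    role: str                                  # "provider" or "deployer"
    annex_iii_category: Optional[str] = None   # None if outside Annex III
    annex_i_safety_component: bool = False
    derogation_claimed: bool = False
    derogation_reasoning: str = ""             # document this in detail

    def classification(self) -> str:
        if self.annex_i_safety_component:
            return "high-risk (Annex I)"
        if self.annex_iii_category and not self.derogation_claimed:
            return "high-risk (Annex III)"
        if self.annex_iii_category:
            return "Annex III, derogation claimed (document and register)"
        return "not high-risk (check limited-risk transparency duties)"

screening = AISystemRecord(name="candidate screener", role="provider",
                           annex_iii_category="employment")
print(screening.classification())  # high-risk (Annex III)
```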

Step 2: Documentation Gap Analysis

For each high-risk system, compare your current documentation against the Article 11 requirements:

  • Do you have documented risk management procedures (Article 9)?
  • Can you demonstrate data governance practices, including bias testing (Article 10)?
  • Do you have technical documentation covering system design, development process, and performance metrics (Article 11)?
  • Do you have append-only, tamper-evident logging that meets the Article 12 requirements?
  • Do you have instructions for use that meet Article 13 transparency requirements?

Most organizations will find significant gaps in logging and technical documentation. Start here — these are the hardest to retrofit.

Step 3: Human Oversight Architecture

Evaluate your current human oversight mechanisms against the five Article 14 requirements. For each high-risk system, ask:

  • Can overseers understand the system's capabilities and limitations?
  • Are overseers trained on automation bias risks?
  • Can overseers correctly interpret the system's output?
  • Can overseers override, reverse, or disregard the system's output?
  • Can overseers interrupt or stop the system?

If the answer to any of these is no, you have an architectural problem — not just a documentation problem. Implementing effective human oversight may require changes to your system's UI, workflow integration, and operational procedures.

Step 4: Quality Management System

Establish or extend your QMS to cover AI-specific requirements per Article 17. If you have existing ISO 9001 or ISO 13485 certification, use that as a foundation. If you are starting from scratch, ISO 42001 provides the most direct path to a QMS that addresses AI Act requirements.

ISO 42001 is not a formal safe harbor under the AI Act — certification does not create a presumption of conformity. But it provides a systematic framework for AI governance that maps closely to the Act's requirements, and it demonstrates a good-faith effort toward compliance that national authorities are likely to credit.

Step 5: Conformity Assessment Preparation

For Annex III systems (categories 2-8), conformity assessment is based on internal control. This means:

  • You must prepare the technical documentation (Article 11)
  • You must demonstrate that your QMS is in place (Article 17)
  • You must verify that the system meets all applicable requirements
  • You must draw up an EU Declaration of Conformity
  • You must register the system in the EU database (Article 49)
  • You must affix the CE marking

Start preparing your conformity assessment documentation now. The process is internal, but the documentation burden is substantial.

Step 6: Post-Market Monitoring

Plan your post-market monitoring system before deployment. Article 72 requires providers to establish and document a post-market monitoring system proportionate to the nature of the AI system and the level of risk. The monitoring system must actively and systematically collect, document, and analyze relevant data that deployers or other parties may provide throughout the system's lifetime.


Frequently Asked Questions

Does the EU AI Act apply to companies outside the EU?

Yes. The AI Act applies to providers who place AI systems on the EU market or put them into service in the EU, regardless of where the provider is established. It also applies to providers and deployers located outside the EU if the output produced by the AI system is used in the EU (Article 2(1)). This extraterritorial scope mirrors GDPR.

What counts as an "AI system" under the Act?

The definition is broad: a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments (Article 3(1)). This covers machine learning models, expert systems, and statistical approaches.

Can I use the Article 6(3) derogation for a recommendation engine in HR?

It depends entirely on whether the system profiles natural persons. If your recommendation engine evaluates candidates based on personal characteristics — skills matching, experience scoring, cultural fit prediction — that constitutes profiling under GDPR Article 4(4), and the derogation cannot apply. If the system performs a purely procedural task (e.g., parsing resumes into structured data without scoring or ranking), the derogation may be available, but you bear the burden of documenting and defending that classification.

What is the relationship between the EU AI Act and GDPR?

The AI Act complements GDPR — it does not replace it. If your AI system processes personal data, you must comply with both. The AI Act adds AI-specific requirements (risk management, human oversight, conformity assessment) on top of GDPR's data protection obligations. Notably, the AI Act explicitly requires deployers to use information from providers to fulfill their GDPR obligations, including Data Protection Impact Assessments.

How does the AI Act treat general-purpose AI models like GPT or Claude?

General-purpose AI (GPAI) models have their own obligation framework (Articles 51-56), which took effect on August 2, 2025. GPAI providers must provide technical documentation, comply with EU copyright law, and publish a sufficiently detailed summary of training data content. GPAI models with "systemic risk" (generally, models trained with more than 10^25 FLOPs) face additional requirements including adversarial testing, cybersecurity measures, and serious incident reporting. Downstream providers who build high-risk applications on top of GPAI models remain responsible for ensuring the full application meets the high-risk requirements.
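
For a sense of scale on the 10^25 FLOPs threshold, a widely used back-of-the-envelope estimate for dense transformer training compute is roughly 6 x parameters x training tokens. The sketch below applies that heuristic; the approximation comes from the scaling-laws literature, not from the AI Act itself.

```python
THRESHOLD_FLOPS = 1e25  # Article 51 presumption of systemic risk

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Common ~6*N*D approximation for dense transformer training."""
    return 6 * n_params * n_tokens

# Hypothetical model: 400B parameters trained on 10T tokens.
flops = estimated_training_flops(400e9, 10e12)
print(f"{flops:.2e}", flops > THRESHOLD_FLOPS)  # 2.40e+25 True
```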

Is ISO 42001 certification sufficient for EU AI Act compliance?

No, but it is a strong foundation. ISO 42001 provides a management system framework for responsible AI governance that maps to many AI Act requirements — risk management, data governance, human oversight, and quality management. However, it does not cover AI Act-specific obligations like conformity assessment, CE marking, registration in the EU database, or the prohibition rules. Think of ISO 42001 as the operating framework and the AI Act as the specific legal rulebook — you need both, but ISO 42001 gets you a significant part of the way there.


The Clock Is Ticking

The EU AI Act's high-risk requirements take effect on August 2, 2026. The Commission guidelines that were supposed to clarify the practical application of these requirements are overdue. The burden of proof for classification decisions rests on providers. And the penalties — up to EUR 35 million or 7% of global revenue — are designed to ensure that non-compliance is not a viable business strategy.

The organizations that will navigate this well are the ones that start now: inventorying their AI systems, classifying them against Annex III, documenting their derogation arguments where applicable, and building the technical infrastructure — logging, human oversight, risk management — that the law demands.

The good news: the architectural patterns that satisfy the EU AI Act are the same patterns that satisfy CPRA ADMT requirements, Colorado's AI Act, and emerging regulations worldwide. Advisory-only architecture, meaningful human oversight, tamper-evident audit trails, and systematic risk management are not just compliance controls. They are good engineering practice for AI systems that make decisions affecting people's lives.

Ready to understand where your AI systems stand? Take the free ADMT compliance assessment at admt.ai — our AI-powered gap analysis maps your systems against the EU AI Act, CPRA, Colorado, and ISO 42001 requirements in minutes, not months.
