
AI Compliance Deadlines 2026-2028: The Complete Regulatory Calendar

If you deploy AI in consumer-facing decisions, you are now operating under active enforcement from at least one jurisdiction. Probably several.

The challenge is not that regulations exist. The challenge is that seven different regulatory frameworks across six jurisdictions have staggered deadlines between now and April 2028, each with different triggers, different obligations, and different penalties. Miss one, and the consequences range from $500/day fines to 7% of your global revenue.

This article is the reference calendar. Every deadline. Every penalty. Every requirement. Bookmark it, share it with your legal team, and use it to plan your compliance roadmap.

Last updated: March 22, 2026. We update this article as regulations change. The EU AI Act high-risk deadlines are currently subject to a proposed delay under the Digital Omnibus package — we cover both the current law and the proposed timeline below.


What Is Already In Effect (Do Not Skip This Section)

Before looking ahead, check whether you are already non-compliant. Five major AI regulations are actively being enforced right now.

CPRA ADMT Risk Assessments — In Effect Since January 1, 2026

Jurisdiction: California
Penalty: $7,500 per violation, no cap
Enforced by: California Privacy Protection Agency (CPPA)

California's automated decision-making technology (ADMT) regulations require businesses to conduct risk assessments before deploying AI systems that "replace or substantially replace human decision-making" for significant decisions — employment, credit, insurance, housing, healthcare, and education.

The CPPA is not waiting around. In September 2025, the agency issued a record $1.35 million fine against Tractor Supply Company for CCPA violations related to inadequate privacy notices and failed opt-out mechanisms. That enforcement action signals the agency's willingness to impose meaningful penalties, and the new ADMT regulations give them an even broader mandate.

What you need now:

  • A documented risk assessment for every AI system making or substantially replacing human decisions
  • Evidence that your AI systems meet the three-prong test for meaningful human involvement (the decision-maker knows how to interpret the output, reviews the output alongside other information, and has genuine authority to change the decision)
  • Record retention for 5 years or as long as processing continues
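The three-prong test is where most ambiguity lives. As a planning aid, it can be encoded as an all-or-nothing gate — a sketch only, with field names of our own invention, not regulatory language:

```python
from dataclasses import dataclass

@dataclass
class ReviewPractice:
    # Illustrative fields paraphrasing the three prongs; not regulatory terms.
    knows_how_to_interpret_output: bool     # prong 1
    reviews_output_with_other_info: bool    # prong 2
    has_authority_to_change_decision: bool  # prong 3

def meaningful_human_involvement(p: ReviewPractice) -> bool:
    """All three prongs must hold; failing any one keeps the system in ADMT scope."""
    return (p.knows_how_to_interpret_output
            and p.reviews_output_with_other_info
            and p.has_authority_to_change_decision)

# A reviewer who sees the scores but cannot override them fails prong 3:
print(meaningful_human_involvement(ReviewPractice(True, True, False)))  # False
```

The point of the exercise is documentation: for each system, you should be able to show evidence for each prong individually, not just assert the conclusion.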

EU AI Act Prohibited Practices — In Effect Since February 2, 2025

Jurisdiction: European Union
Penalty: Up to 35 million EUR or 7% of global annual turnover (whichever is higher)
Enforced by: National competent authorities in each EU member state

The first wave of the EU AI Act has been enforceable for over a year. Prohibited AI practices include:

  • Subliminal manipulation techniques that cause harm
  • Exploitation of vulnerabilities of specific groups (age, disability, social/economic situation)
  • Social scoring by public authorities
  • Real-time remote biometric identification in public spaces (with limited exceptions)
  • Emotion recognition in workplaces and educational institutions (with limited exceptions)
  • Untargeted scraping of facial images from the internet or CCTV for facial recognition databases

If your AI system falls into any of these categories, you are already subject to the most severe penalties in global AI regulation.

NYC Local Law 144 (AEDT) — In Effect Since July 5, 2023

Jurisdiction: New York City
Penalty: $500 first violation; $500-$1,500 per day thereafter
Enforced by: NYC Department of Consumer and Worker Protection (DCWP)

Local Law 144 requires employers and employment agencies using automated employment decision tools (AEDTs) to conduct annual independent bias audits and provide notice to candidates.

An AEDT is any computational process that issues a simplified output (score, classification, recommendation) used to "substantially assist or replace discretionary decision making" for employment decisions.

Critical 2026 update: In December 2025, the New York State Comptroller released a damaging audit that found DCWP enforcement "ineffective." Among the findings: 75% of test calls about AEDT issues to the NYC 311 hotline were misrouted and never reached DCWP. The agency surveyed 32 companies but identified only one case of non-compliance, while the Comptroller's auditors reviewing the same companies identified at least 17 potential violations. DCWP has since committed to overhauling enforcement — expect a new phase of proactive investigations in 2026.

Illinois AI Video Interview Act + HB 3773 — In Effect Since January 1, 2026

Jurisdiction: Illinois
Penalty: Civil penalties under the Illinois Human Rights Act
Enforced by: Illinois Department of Human Rights

Illinois has two overlapping AI employment laws:

  1. AI Video Interview Act (AIVIA, 2019): Requires employers to notify job applicants and obtain written consent before using AI to analyze video interviews. Employers must explain how the technology works, limit video sharing, and delete recordings on request.

  2. HB 3773 (effective January 1, 2026): Amends the Illinois Human Rights Act to prohibit employers from using AI that results in discrimination based on protected classes. Covers recruitment, hiring, promotion, discharge, discipline, and terms of employment. Also prohibits using zip codes as a proxy for protected classes.

What makes Illinois notable: HB 3773 does not require intent. If your AI system produces discriminatory outcomes — even if you did not intend it — you are in violation. The standard is disparate impact, not disparate treatment.

Texas TRAIGA — In Effect Since January 1, 2026

Jurisdiction: Texas
Penalty: $10,000-$12,000 per curable violation; $80,000-$200,000 per uncurable violation; $2,000-$40,000 per day for continuing violations
Enforced by: Texas Attorney General (exclusive enforcement authority)

The Texas Responsible AI Governance Act (TRAIGA) applies to any entity conducting business in Texas, offering products to Texas residents, or deploying AI systems accessible to Texas users. Key prohibitions include:

  • AI systems designed for behavioral manipulation or discrimination
  • AI-generated child exploitation material or unlawful deepfakes
  • Government use of AI for social scoring
  • Government use of AI for biometric identification from public sources without consent

The Attorney General must provide notice and a 60-day cure period before enforcement, but uncurable violations carry penalties up to $200,000 per violation with no cure period.


The Complete AI Compliance Timeline: 2026-2028

Here is every upcoming deadline in chronological order.

| Date | Regulation | Jurisdiction | Penalty | Status |
|------|-----------|-------------|---------|--------|
| Already in effect | CPRA ADMT Risk Assessments | California | $7,500/violation | Active enforcement |
| Already in effect | EU AI Act Prohibited Practices | EU | 7% global revenue | Active enforcement |
| Already in effect | NYC Local Law 144 | NYC | $500-$1,500/day | Enforcement tightening |
| Already in effect | Illinois HB 3773 | Illinois | Civil penalties | Active enforcement |
| Already in effect | Texas TRAIGA | Texas | Up to $200K/violation | Active enforcement |
| Jun 30, 2026 | Colorado AI Act (SB 24-205) | Colorado | AG enforcement | Approaching |
| Aug 2, 2026 | EU AI Act High-Risk + Transparency | EU | 7% global revenue | Current law* |
| Jan 1, 2027 | CPRA Consumer Rights (§§ 7220-7222) | California | $7,500/violation | Approaching |
| Feb 1, 2027 | Colorado AI Act Phase 2 | Colorado | AG enforcement | Approaching |
| Dec 2, 2027 | EU AI Act High-Risk (Annex III)* | EU | 7% global revenue | Proposed delay |
| Apr 1, 2028 | CPPA Risk Assessment Submissions | California | $7,500/violation | Approaching |
| Aug 2, 2028 | EU AI Act High-Risk (Annex I)* | EU | 7% global revenue | Proposed delay |

*The EU Digital Omnibus proposal, advanced by the EU Council on March 13, 2026, would delay Annex III high-risk obligations from August 2, 2026 to December 2, 2027 and Annex I obligations to August 2, 2028. As of this writing, the delay is proposed but not finalized. Plan for the original August 2, 2026 deadline until the legislative process concludes.


2026 Deadlines: What You Need to Do Now

June 30, 2026 — Colorado AI Act (SB 24-205)

Time remaining: ~3 months

Colorado's AI Act is the most aggressive comprehensive AI regulation in the United States. Originally scheduled for February 1, 2026, the effective date was delayed to June 30, 2026 following a special legislative session in August 2025.

Who it applies to: Any developer or deployer of a "high-risk AI system" — defined as a system that "makes, or is a substantial factor in making, a consequential decision." A consequential decision is one with a "material legal or similarly significant effect" on a consumer in education, employment, financial services, government services, healthcare, housing, insurance, or legal services.

The critical test — "capable of altering the outcome": Colorado uses the broadest trigger in any U.S. regulation. The AI system does not need to actually make the decision. It only needs to be capable of altering the outcome. If your system produces a score, ranking, or recommendation that a human decision-maker could follow, it likely qualifies as high-risk — even if humans override it in practice.

What deployers must do:

  • Risk management policy and program: Implement a documented program to identify and mitigate risks of algorithmic discrimination
  • Annual impact assessments: Complete an impact assessment for each high-risk AI system, plus additional assessments within 90 days of any significant system modification
  • Consumer notification: Inform Colorado consumers when an AI system is being used, disclose its purpose and nature, and provide contact information
  • Appeal process: Offer consumers an opportunity to appeal adverse consequential decisions through human review, if technically feasible
  • Plain-language disclosure: Provide a description of the AI system that a consumer without technical expertise can understand

What developers must do:

  • Provide deployers with documentation sufficient to complete impact assessments
  • Disclose known limitations, intended uses, and the types of data the system was trained on

Penalty: Violations constitute unfair trade practices. Enforced exclusively by the Colorado Attorney General.

Small deployer exemption: Organizations with fewer than 50 employees are exempt from most requirements, provided they do not train the system on their own data, use it for its intended purpose, and make the developer's impact assessment available — but consumer notification still applies.

August 2, 2026 — EU AI Act High-Risk and Transparency Obligations

Time remaining: ~4 months (current law) or ~21 months (if Digital Omnibus delay is finalized)

This is the big one. The majority of the EU AI Act's substantive requirements become enforceable on August 2, 2026, under current law.

What takes effect:

  • High-risk AI system obligations (Annex III): Systems used in biometrics, critical infrastructure, education, employment, essential services, public services, law enforcement, and justice/democracy must comply with extensive requirements including risk management systems, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy, robustness, and cybersecurity
  • Transparency obligations (Article 50): AI-generated content must be clearly marked. Providers must ensure users know they are interacting with AI. Deepfake content must be labeled.
  • Innovation measures: Member states must establish at least one AI regulatory sandbox

The Digital Omnibus complication: On March 13, 2026, the EU Council agreed on a position to delay high-risk obligations under what is called the "Digital Omnibus" package. The proposed delay would push Annex III (standalone high-risk systems) to December 2, 2027 and Annex I (high-risk systems embedded in regulated products) to August 2, 2028. The rationale: harmonized standards, guidance documents, and support tools are not yet ready.

Our recommendation: Do not count on the delay. The Digital Omnibus must still pass the full legislative process. Plan for August 2, 2026, and treat any delay as bonus preparation time rather than an excuse to wait.

Penalty: Up to 35 million EUR or 7% of global annual turnover (whichever is higher) — the most severe penalty in any AI regulation worldwide.

The derogation escape (Article 6(3)): An Annex III system is NOT high-risk if it does not pose a significant risk of harm, including by "not materially influencing the outcome of decision making." Four conditions exist (any one suffices): the AI performs a narrow procedural task, it improves a prior human activity, it detects patterns without replacing human judgment, or it performs a preparatory task before a human decides. Critical exception: Systems that perform profiling of natural persons are ALWAYS high-risk. No derogation applies.
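The derogation lends itself to a decision sketch. The function below paraphrases the four conditions and the profiling exception in parameter names of our own choosing; an actual classification decision needs legal review, not a boolean:

```python
def annex_iii_high_risk(performs_profiling: bool,
                        narrow_procedural_task: bool = False,
                        improves_prior_human_activity: bool = False,
                        detects_patterns_without_replacing_judgment: bool = False,
                        preparatory_task_only: bool = False) -> bool:
    """Sketch of the Article 6(3) derogation for an Annex III-listed system.

    Profiling of natural persons is always high-risk; otherwise any single
    derogation condition is enough to escape the high-risk classification.
    """
    if performs_profiling:
        return True  # critical exception: no derogation for profiling
    derogation_applies = (narrow_procedural_task
                          or improves_prior_human_activity
                          or detects_patterns_without_replacing_judgment
                          or preparatory_task_only)
    return not derogation_applies

# A purely preparatory screening step that performs no profiling:
print(annex_iii_high_risk(performs_profiling=False, preparatory_task_only=True))  # False
```

Note how the profiling check short-circuits everything else — that mirrors the statute's structure, where the exception is checked before any derogation condition is even considered.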


2027 Deadlines: Consumer Rights and Phase 2 Requirements

January 1, 2027 — CPRA Consumer Rights for ADMT (§§ 7220-7222)

This is the deadline many companies are underestimating. While the 2026 risk assessment requirements focus on what businesses do internally, the 2027 consumer rights provisions create direct obligations to individual consumers.

Three new rights take effect:

§ 7220 — Pre-Use Notice: Before using ADMT on a consumer, you must deliver a prominent notice containing:

  • A specific, plain-language explanation of what the ADMT does and why (generic statements like "to improve our services" are explicitly prohibited)
  • What categories of personal information influence the outputs
  • What outputs the ADMT generates and how they feed the decision
  • A description of the alternative process if the consumer opts out
  • A direct link to the opt-out mechanism
  • A statement that you will not retaliate against consumers who exercise their rights

§ 7221 — Right to Opt-Out: Consumers can refuse automated decision-making. You must:

  • Offer at least two easy opt-out methods
  • Process opt-outs without requiring identity verification
  • Cease ADMT processing within 15 business days if it has already started
  • Notify downstream service providers and contractors to comply within the same 15-day window
  • Actually provide the alternative human decision-making process described in the notice

HR/education exception: Employers may skip the opt-out for admission, hiring, work allocation, and compensation — but must instead offer a right to appeal to a qualified human reviewer with genuine authority to change the decision. Appeal timelines: 10 business days to confirm receipt, 45 calendar days to respond (extendable to 90).

§ 7222 — Right to Access ADMT Information: Consumers can request detailed, consumer-specific information about how ADMT was used on them:

  • Why the ADMT was used (purpose)
  • What the ADMT actually produced for this specific consumer (output)
  • Whether the output was the sole factor or one of many, and what other factors contributed
  • How the system processed this consumer's personal information, including key parameters
  • What the business ultimately decided (outcome)
  • The degree and nature of any human involvement

Response timeline: 10 business days to confirm receipt, 45 calendar days to respond (extendable to 90).

Why this is harder than it sounds: These are not generic privacy disclosures. They require consumer-specific information about model inputs, outputs, logic, and human involvement for each individual request. Existing privacy platforms (OneTrust, DataGrail, Transcend) are extending DSAR infrastructure to handle these requests, but none have purpose-built capabilities for ADMT consumer rights. The gap is significant: privacy platforms understand data stores, not ML pipelines.
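The opt-out and access-request clocks also mix business days with calendar days, which is easy to get wrong in a ticketing system. Here is a minimal sketch of the deadline arithmetic, assuming Mon-Fri business days and ignoring holidays (which a real implementation must not):

```python
from datetime import date, timedelta

def add_business_days(start: date, n: int) -> date:
    """Advance n Mon-Fri business days; holidays are ignored in this sketch."""
    d = start
    while n > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # 0=Mon ... 4=Fri
            n -= 1
    return d

def access_request_deadlines(received: date, extended: bool = False) -> dict:
    """10 business days to confirm receipt; 45 calendar days to respond (90 if extended)."""
    return {
        "confirm_receipt_by": add_business_days(received, 10),
        "respond_by": received + timedelta(days=90 if extended else 45),
    }

d = access_request_deadlines(date(2026, 3, 20))  # a Friday
print(d["confirm_receipt_by"], d["respond_by"])  # 2026-04-03 2026-05-04
```

Note that a request received on a Friday pushes the receipt confirmation two full weekends out, while the 45-day response clock runs on calendar days regardless.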

February 1, 2027 — Colorado AI Act Phase 2

Colorado's initial June 2026 requirements expand. Additional obligations may apply depending on how the Attorney General's office interprets the Act's rulemaking authority. Monitor the Colorado Attorney General's website for updated guidance.

December 2, 2027 — EU AI Act High-Risk (Annex III) (Proposed Delay)

If the Digital Omnibus passes, this becomes the new deadline for standalone high-risk AI systems under Annex III. This includes AI used in biometrics, critical infrastructure, education, employment, essential services, and law enforcement. Companies that have not yet begun compliance work should not wait for this date — the requirements are extensive and implementation takes 12-18 months for most organizations.


2028 Deadlines: Attestation and Embedded Systems

April 1, 2028 — CPPA Risk Assessment Attestation Submissions

California's longest-lead requirement: having risk assessments on file is not enough — businesses must submit them to the CPPA for review.

What to prepare now: Ensure your risk assessments are audit-quality, not internal-only documents. The CPPA will review them, and deficient assessments will trigger enforcement action.

Record retention: Risk assessment records must be maintained for 5 years or as long as ADMT processing continues, whichever is longer. Consumer request documentation must be retained for 24 months.

August 2, 2028 — EU AI Act High-Risk (Annex I) (Proposed Delay)

If the Digital Omnibus passes, this is the deadline for high-risk AI systems embedded in regulated products (medical devices, vehicles, machinery, toys, marine equipment, etc.). These systems must comply with the full EU AI Act requirements before being placed on the market.


Penalty Comparison: What Is at Stake

Not all penalties are equal. Here is how they stack up.

| Regulation | Maximum Penalty | Enforcement Model | Risk Level |
|-----------|----------------|-------------------|-----------|
| EU AI Act | 35M EUR or 7% global revenue | National authorities | Highest |
| Texas TRAIGA | $200,000/violation (uncurable) | Attorney General | High |
| CPRA ADMT | $7,500/violation, no cap | CPPA | High |
| Colorado AI Act | Unfair trade practices (AG) | Attorney General | Moderate-High |
| NYC LL 144 | $500-$1,500/day | DCWP | Moderate |
| Illinois HB 3773 | Civil penalties (IHRA) | Dept. of Human Rights | Moderate |

Key insight: The EU AI Act's percentage-of-revenue penalty makes it the most dangerous regulation for large companies. A company with $10 billion in annual revenue faces a maximum penalty of $700 million. CPRA's per-violation model is the most dangerous for high-volume automated decision-making — if you process 100,000 decisions and each one is a violation, that is $750 million.
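The two figures are straightforward to reproduce (treating EUR and USD loosely, as the comparison above does):

```python
# EU AI Act: the greater of a EUR 35M floor or 7% of global annual turnover.
revenue = 10_000_000_000
eu_max = max(35_000_000, revenue * 7 // 100)  # integer math avoids float rounding
print(eu_max)   # 700000000 -> the $700M figure for a $10B-revenue company

# CPRA: $7,500 per violation with no cap, so exposure scales with decision volume.
cpra_max = 100_000 * 7_500
print(cpra_max)  # 750000000 -> $750M if all 100,000 decisions violate
```

The asymmetry is the takeaway: EU exposure scales with revenue, CPRA exposure scales with decision volume, so the "worst" regulation depends on which of the two grows faster for your business.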


Jurisdiction Overlap: When Multiple Regulations Apply Simultaneously

Most companies operating AI at scale will be subject to multiple regulations simultaneously. Here is how they overlap.

Scenario 1: U.S. SaaS company with AI-powered hiring tools

  • California employees or candidates: CPRA ADMT (now + Jan 2027 consumer rights)
  • New York City candidates: NYC Local Law 144 (now)
  • Illinois candidates: HB 3773 + AIVIA (now)
  • Colorado candidates: Colorado AI Act (Jun 2026)
  • Texas candidates: TRAIGA (now)
  • EU candidates: EU AI Act (Aug 2026 or Dec 2027)

That is potentially six simultaneous regulatory frameworks for a single HR AI system.

Scenario 2: Insurance company using AI for underwriting

  • California policyholders: CPRA ADMT (now + Jan 2027)
  • Colorado policyholders: Colorado AI Act (Jun 2026)
  • EU policyholders: EU AI Act (Aug 2026 or Dec 2027)
  • All U.S. states: State insurance department regulations

Scenario 3: Healthcare AI platform

  • FDA: Clinical Decision Support rules (now)
  • California patients: CPRA ADMT (now + Jan 2027)
  • EU patients: EU AI Act (Aug 2026 or Dec 2027)
  • HIPAA: Always applies

The practical implication: You cannot build compliance one regulation at a time. You need a unified compliance architecture that satisfies the strictest requirements across all applicable jurisdictions — then layer jurisdiction-specific documentation on top.
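One practical starting point is keeping the deadlines as a machine-readable map filtered by your exposure. The entries below are drawn from the timeline table; the helper itself is illustrative only:

```python
from datetime import date

# Upcoming deadlines from the timeline above (current law; see the Digital Omnibus caveat).
DEADLINES = {
    "Colorado AI Act (SB 24-205)": date(2026, 6, 30),
    "EU AI Act high-risk + transparency": date(2026, 8, 2),
    "CPRA consumer rights": date(2027, 1, 1),
    "Colorado AI Act Phase 2": date(2027, 2, 1),
    "CPPA risk assessment submissions": date(2028, 4, 1),
}

def upcoming(today: date, applicable: set) -> list:
    """Applicable deadlines that have not yet passed, nearest first."""
    return sorted((d, name) for name, d in DEADLINES.items()
                  if name in applicable and d >= today)

# A deployer with Colorado and EU exposure, as of this article's update date:
for d, name in upcoming(date(2026, 3, 22),
                        {"Colorado AI Act (SB 24-205)",
                         "EU AI Act high-risk + transparency"}):
    print(d.isoformat(), name)
```

Sorting by date surfaces the "strictest requirement first" ordering automatically, which is the same sequencing logic the prioritization framework below applies by hand.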


How to Prioritize: A Framework for Sequencing Compliance Work

With limited resources and multiple deadlines, here is how to sequence your work.

Priority 1: Already Enforceable (Act Immediately)

If you have not already addressed these, you are exposed today:

  • CPRA ADMT risk assessments — Document every AI system making significant decisions. Conduct and retain risk assessments. Ensure meaningful human involvement meets the three-prong test.
  • NYC LL 144 bias audits — If you use AI in hiring decisions in NYC, ensure your annual independent bias audit is current and your candidate notices are compliant.
  • Illinois HB 3773 — If you use AI in employment decisions affecting Illinois employees or candidates, audit your systems for discriminatory outcomes.
  • Texas TRAIGA — Review AI systems against the prohibited practices list.

Priority 2: June-August 2026 (Start Now)

  • Colorado AI Act (June 30): Impact assessments take time. Begin your risk management program, identify all high-risk AI systems, and draft consumer notification language now.
  • EU AI Act high-risk (August 2): Even with the proposed Digital Omnibus delay, do not wait. Begin classifying your AI systems against Annex III categories, and evaluate whether the Article 6(3) derogation applies. If it does, document it thoroughly. If it does not, begin implementing the full high-risk compliance stack.

Priority 3: January 2027 (Begin Planning in Q2 2026)

  • CPRA consumer rights (§§ 7220-7222): This requires building new infrastructure — pre-use notices, opt-out mechanisms, consumer-specific disclosure generation, appeal processes. These are not checkbox exercises. Begin scoping the engineering work now.

Priority 4: 2028 (Foundational Work Starts in 2027)

  • CPPA risk assessment submissions (April 2028): Ensure your risk assessments are submission-quality.
  • EU AI Act Annex I (August 2028): If you embed AI in regulated products, ensure your product conformity assessment process includes AI Act requirements.

The Federal Wildcard

On December 11, 2025, President Trump signed an executive order, "Ensuring a National Policy Framework for Artificial Intelligence," directing the development of a uniform federal AI policy framework that would preempt state AI laws deemed inconsistent with it.

As of March 2026, the executive order has not resulted in binding legislation. No federal AI regulation preempts any of the state laws discussed in this article. However, the political environment is shifting — monitor federal developments, but do not use the possibility of federal preemption as a reason to delay compliance with state regulations that are already enforceable.

The legislative landscape: As of March 2026, state lawmakers in 45 states have introduced 1,561 AI-related bills. In 2025, 145 AI-related bills were enacted into law across all 50 states. The pace of regulation is accelerating, not slowing.


Frequently Asked Questions

Does the advisory-only architecture really exempt my AI system from ADMT regulations?

It depends on the regulation. Under CPRA, if a human decision-maker satisfies the three-prong test (knows how to interpret the output, reviews the output alongside other information, has genuine authority to change the decision), the system is removed from ADMT classification entirely. Under the EU AI Act's Article 6(3), systems that do not materially influence decision outcomes can avoid high-risk classification. Under Colorado's Act, the test is whether the AI is "capable of altering the outcome" — which is much harder to satisfy because it is forward-looking and theoretical. No single architecture guarantees exemption across all jurisdictions.

My company has fewer than 50 employees. Am I exempt?

Partially, under Colorado's AI Act. Small deployers (fewer than 50 employees) that do not use their own data to train the system and use it for its intended purpose are exempt from most requirements — but consumer notification still applies. Under CPRA, there is no small business exemption for ADMT if you meet the general CCPA thresholds ($25M+ revenue, 100K+ consumers' data, or 50%+ revenue from selling/sharing data). NYC LL 144 and Illinois HB 3773 have no size exemptions.

What counts as a "significant decision" under CPRA?

The CPRA defines significant decisions as those involving employment, credit, insurance, housing, healthcare, and education. The regulations narrowed the scope from "substantially facilitate" (early drafts) to "substantially replace" — CPPA staff testified this reduced coverage from a wide swath of businesses to roughly 10% of CCPA-covered businesses. If your AI is truly advisory and a human makes the actual decision with genuine discretion, you are likely outside scope.

If the EU AI Act high-risk deadline is delayed, should I wait?

No. The Digital Omnibus proposal must still pass the full legislative process. Even if the delay is enacted, the backstop dates (December 2027 for Annex III, August 2028 for Annex I) give you 21 months at most. High-risk compliance — including risk management systems, technical documentation, quality management, post-market monitoring, and conformity assessments — takes 12-18 months for most organizations. Start now regardless of whether the delay passes.

Can I be subject to both CPRA and Colorado regulations for the same AI system?

Yes. If your AI system makes decisions affecting California consumers and Colorado consumers, both sets of obligations apply simultaneously. You need risk assessments under CPRA and impact assessments under Colorado, consumer notices under both (with different content requirements), and appeal/opt-out mechanisms under both. The practical approach is to build to the strictest standard across all jurisdictions and then generate jurisdiction-specific documentation.


What To Do Next

The regulatory landscape for AI is no longer theoretical. Multiple jurisdictions are actively enforcing AI-specific regulations with material penalties. The window for "wait and see" closed in January 2026.

Three steps to take this week:

  1. Inventory your AI systems. List every AI system that influences decisions about consumers — hiring, lending, insurance, healthcare, housing, education. For each, document what decisions it influences, what data it uses, and what human review process exists.

  2. Map your jurisdictional exposure. For each system, identify which regulations apply based on where your consumers, employees, and users are located. Use the timeline table above to identify your nearest deadline.

  3. Assess your compliance gaps. For each applicable regulation, determine what you have today versus what you need. Focus on the highest-penalty, nearest-deadline combinations first.

Take the free ADMT compliance assessment at admt.ai to see which deadlines apply to your company and where your gaps are. The assessment takes 10 minutes, covers all major U.S. and EU AI regulations, and delivers a prioritized remediation roadmap specific to your AI systems and jurisdictional exposure.


This article is for informational purposes only and does not constitute legal advice. Consult qualified legal counsel for guidance specific to your organization's circumstances. Regulatory deadlines and requirements are subject to change — we update this article regularly to reflect the latest developments.
