
Three AI Regulations Are About to Hit. Here's What They Actually Require.

If your company uses AI to make decisions about people — hiring, lending, insurance, underwriting, healthcare — you are about to have a very busy year.

Three major AI regulations take effect in the next nine months. Each one carries real penalties. Each one requires infrastructure that most companies have not built. And they overlap — a single AI system serving customers across California, Colorado, and the EU may need to satisfy all three simultaneously.

Here is what each one demands, in plain terms.


1. Colorado AI Act — June 30, 2026

Time remaining: 3 months.

Colorado's law is the broadest AI regulation in the United States. It uses a two-part trigger: the AI must be a "substantial factor" in a consequential decision, and it must be "capable of altering the outcome."

That second part is critical. It does not matter whether a human overrides the AI in practice. If the system could alter the outcome — because it produces a score, ranking, or recommendation that a decision-maker could follow — it qualifies. This is a forward-looking, theoretical test. Most AI systems that touch consequential decisions will trigger it.

What it covers: Education, employment, financial services, government services, healthcare, housing, insurance, legal services.

What you must do:

  • Implement a risk management program that identifies and mitigates algorithmic discrimination
  • Complete an impact assessment for each high-risk AI system
  • Notify consumers when AI factors into a consequential decision about them
  • Provide a plain-language description of the AI system
  • Offer consumers an opportunity to appeal adverse decisions through human review

Penalty: Up to $20,000 per violation. Violations are classified as deceptive trade practices under the Colorado Consumer Protection Act. Enforced exclusively by the Attorney General — no private right of action.

The engineering work: Consumer notification infrastructure at the point of decision. An appeal workflow with genuine human review. Impact assessment documentation tied to each system. If you have not started, three months is tight.
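A minimal sketch of what that decision record might look like. The schema is an illustrative assumption — the Colorado AI Act prescribes obligations, not data formats — but it captures the three things an Attorney General inquiry would ask for: the notice, the appeal, and the human reviewer.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

# Hypothetical record shape; field names are illustrative, not statutory.
@dataclass
class ConsequentialDecision:
    consumer_id: str
    system_name: str            # ties the decision to its impact assessment
    ai_recommendation: str      # e.g. "decline", "approve"
    notified_at: Optional[datetime] = None   # notice at the point of decision
    appeal_requested: bool = False
    human_reviewer: Optional[str] = None     # evidence of genuine human review
    appeal_outcome: Optional[str] = None

    def notify_consumer(self) -> None:
        """Record that the plain-language notice was delivered."""
        self.notified_at = datetime.now(timezone.utc)

    def record_appeal(self, reviewer: str, outcome: str) -> None:
        """A named human resolves the appeal; both reviewer and outcome persist."""
        self.appeal_requested = True
        self.human_reviewer = reviewer
        self.appeal_outcome = outcome
```

The point is that notification and appeal become recorded events, not policy statements.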

One useful detail: Organizations that comply with the NIST AI Risk Management Framework or ISO 42001 get a safe harbor — an affirmative defense if enforcement comes. If you are pursuing either certification, document it thoroughly.


2. EU AI Act High-Risk Requirements — August 2, 2026

Time remaining: 5 months.

The EU AI Act classifies AI systems into risk tiers. The obligations that matter take effect in August: the full requirements for high-risk AI systems in Annex III categories.

What it covers: Biometrics, critical infrastructure, education, employment, credit and lending, insurance (life and health), law enforcement, migration, and administration of justice.

What you must do for high-risk systems:

  • Establish a continuous risk management system (not a one-time assessment — ongoing throughout the system lifecycle)
  • Meet data governance standards: training data must be representative, bias-tested, and documented
  • Produce technical documentation covering design, development, and performance
  • Implement automatic event logging that records every use of the system
  • Design for human oversight — specifically, overseers must be able to understand the system, detect automation bias, interpret outputs, override decisions, and interrupt operation
  • Achieve documented accuracy, robustness, and cybersecurity standards
  • Maintain a quality management system
  • Complete a conformity assessment, register in the EU database, and affix CE marking

Penalty: Up to 15 million EUR or 3% of global annual turnover for high-risk requirement violations. Up to 35 million EUR or 7% of global turnover for prohibited practices. Whichever is higher.

The escape hatch: Article 6(3) offers a derogation. An Annex III system is not high-risk if it does not "materially influence the outcome of decision making." Four conditions qualify: (1) the system performs a narrow procedural task, (2) it improves the result of a prior human activity, (3) it detects patterns without replacing judgment, or (4) it performs a preparatory task before a human decides.

Advisory AI that presents recommendations for human review maps directly to Condition 4. If your AI gathers, analyzes, and recommends — but a human decides — you may avoid high-risk classification entirely.

The exception that kills the escape: Systems that profile natural persons are always high-risk. No derogation applies. If your AI evaluates personal characteristics to predict behavior, reliability, economic situation, or health, it is high-risk regardless of how advisory it is.

The engineering work: Event logging is the biggest gap for most companies. The regulation requires automatic, append-only records of every system use — not application logs, but structured audit trails. Human oversight design is the second gap: you need a review interface where overseers can understand, interpret, override, and interrupt. These are UI and infrastructure projects, not documentation exercises.
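One way to approximate the append-only property, as a sketch under no assumptions about your stack: chain each log entry to the hash of the previous one, so any retroactive edit breaks the chain and is detectable. The `AuditLog` class and JSONL format are illustrative choices, not anything the regulation names.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Hash-chained, append-only event log (illustrative, not prescribed)."""

    def __init__(self, path: str):
        self.path = path
        self.prev_hash = "0" * 64  # genesis value for an empty log

    def record(self, event: dict) -> str:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "prev": self.prev_hash,      # links this entry to the last one
            "event": event,
        }
        payload = json.dumps(entry, sort_keys=True)
        self.prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        with open(self.path, "a") as f:  # append, never rewrite
            f.write(payload + "\n")
        return self.prev_hash
```

Verification is a linear scan: recompute each entry's hash and compare it to the `prev` field of the next entry.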

A timing note: The EU Digital Omnibus proposal would delay Annex III obligations to December 2027. It is not finalized. Do not plan around it.


3. CPRA Consumer Rights for ADMT — January 1, 2027

Time remaining: 9 months.

California's ADMT risk assessment requirements are already in effect (since January 2026). What arrives in January 2027 is harder: three new consumer-facing rights that require infrastructure most companies do not have.

What it covers: AI that "replaces or substantially replaces" human decision-making for employment, credit, insurance, housing, healthcare, or education decisions.

The three new consumer rights:

Pre-use notice (§7220). Before using ADMT on a consumer, you must deliver a conspicuous notice explaining what the system does, why, what data it uses, what outputs it produces, how to opt out, and what alternative human process is available. Generic language like "to improve our services" is explicitly prohibited. This must be specific to the system and delivered at or before the point of data collection.

Opt-out (§7221). Consumers can refuse automated decision-making. You must offer at least two opt-out methods. You do not need to verify identity for opt-out requests. If processing has already started, you must stop within 15 business days. You must notify downstream processors to stop as well. And — this is the part that hurts — you must actually provide the alternative human decision-making process described in your notice.

Access (§7222). Consumers can request specific information about how ADMT was used on them. Not generic system descriptions — consumer-specific records: what the AI produced for this person, how it influenced the decision, what the outcome was, what human involvement existed. You have 45 days to respond (extendable to 90).
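The 15-business-day opt-out clock is easy to get wrong in code that counts calendar days. A minimal deadline calculator, ignoring holidays (which a real implementation would also subtract):

```python
from datetime import date, timedelta

def business_day_deadline(start: date, days: int) -> date:
    """Count forward a number of business days (Mon-Fri), holidays ignored."""
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:   # Monday=0 .. Friday=4
            remaining -= 1
    return current

# An opt-out received Monday 2027-01-04 must be honored by:
business_day_deadline(date(2027, 1, 4), 15)   # -> date(2027, 1, 25)
```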

Penalty: $7,500 per intentional violation per consumer, no cap. The CPPA has hundreds of open investigations and processes roughly 150 consumer complaints per week.

The engineering work: This is the heaviest lift. §7222 access requests require your system to produce, for any individual consumer, a structured record of the AI's recommendation, the decision outcome, and the human involvement. That means your audit trail must be consumer-queryable — indexed by consumer identity with structured, retrievable records for each AI-assisted decision. If your current evidence of human review is "we have a policy," you have nine months to build the infrastructure that generates actual records.
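A consumer-queryable trail can start as simply as indexing decision records by consumer identity. The record fields below are hypothetical, chosen to mirror what a §7222 response must contain — the AI's output, the outcome, and the human involvement:

```python
from collections import defaultdict

# Hypothetical flat records of AI-assisted decisions.
decisions = [
    {"consumer_id": "c-102", "ai_output": "score 640, recommend decline",
     "outcome": "declined", "human_reviewer": "analyst-7", "ts": "2027-02-03"},
    {"consumer_id": "c-415", "ai_output": "score 720, recommend approve",
     "outcome": "approved", "human_reviewer": "analyst-2", "ts": "2027-02-04"},
]

# Index once by consumer identity, so an access request is a lookup,
# not a scan of unstructured application logs.
by_consumer = defaultdict(list)
for d in decisions:
    by_consumer[d["consumer_id"]].append(d)

def access_request(consumer_id: str) -> list[dict]:
    """Return the consumer-specific records an access response needs."""
    return [
        {"ai_output": d["ai_output"], "outcome": d["outcome"],
         "human_involvement": d["human_reviewer"], "date": d["ts"]}
        for d in by_consumer.get(consumer_id, [])
    ]
```

In production this index would live in a database keyed by consumer identity; the structural requirement — retrievable, per-consumer, per-decision records — is the same.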

The opt-out right requires a fallback — a human-only decision process that works without the AI. If your current workflow depends on AI recommendations and has no manual path, you need to build one.


What Overlapping Compliance Looks Like

A U.S. company with an AI-powered underwriting system serving customers in California, Colorado, and the EU faces all three regulations simultaneously for the same system:

  Requirement            | Colorado (Jun 2026)        | EU AI Act (Aug 2026)       | CPRA (Jan 2027)
  Risk/impact assessment | Required                   | Continuous                 | Required
  Consumer notification  | At decision point          | Inform of AI use           | Pre-use notice with specifics
  Audit trail            | Implicit (evidence for AG) | Automatic event logging    | Consumer-queryable records
  Human oversight        | Appeal process             | Five specific capabilities | Three-prong meaningful review
  Opt-out                | Not required               | Not required               | Required with alternative process

The practical approach: build to the strictest standard across all three, then layer jurisdiction-specific documentation on top. The CPRA's consumer-queryable audit trail is the strictest evidence requirement. The EU AI Act's human oversight design is the strictest architectural requirement. Colorado's broad trigger is the strictest scope. If you satisfy all three, you satisfy each individually.


Where to Start

If you have not started compliance work, the sequence that covers the most ground in the least time:

  1. Inventory your AI decision points. List every system where AI influences decisions about people. For each, document: what decisions it affects, what role humans play, and what evidence exists that human review is meaningful.

  2. Classify each system against all three triggers. Colorado's "capable of altering the outcome" is the broadest — if a system clears Colorado's bar, it almost certainly triggers the other two.

  3. Build the audit trail first. A structured, consumer-queryable log of AI recommendations, human reviews, and decision outcomes is required by all three frameworks. It is also the hardest to retrofit. Start here.

  4. Design human oversight that generates evidence. Not a policy. Infrastructure: review interfaces with tracked engagement, minimum dwell times, and the ability to override. The evidence that this process works — dwell time distributions, override rates, engagement metrics — is what regulators will ask for.
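The evidence described above falls out of simple aggregation over review events. A sketch, assuming a hypothetical event shape with a dwell time and an override flag:

```python
from statistics import median

# Hypothetical review events emitted by an oversight interface.
reviews = [
    {"dwell_seconds": 4,  "overrode": False},
    {"dwell_seconds": 95, "overrode": True},
    {"dwell_seconds": 61, "overrode": False},
    {"dwell_seconds": 3,  "overrode": False},
]

override_rate = sum(r["overrode"] for r in reviews) / len(reviews)
median_dwell = median(r["dwell_seconds"] for r in reviews)
rubber_stamps = sum(r["dwell_seconds"] < 10 for r in reviews)  # suspiciously fast

# A 0% override rate or a cluster of sub-10-second reviews is the kind of
# pattern that undermines a claim of meaningful human review.
print(f"override rate: {override_rate:.0%}, median dwell: {median_dwell}s, "
      f"fast reviews: {rubber_stamps}/{len(reviews)}")
```

The threshold of 10 seconds is an arbitrary illustration; what matters is that the interface emits the events at all, so these distributions exist when a regulator asks.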

  5. Prepare consumer-facing mechanisms. Notification language, opt-out workflows, and the human fallback process for California. Consumer notification infrastructure for Colorado. These are largely front-end and process work, but they depend on the audit trail from step 3.

The deadlines are June, August, and January. The engineering work for all three shares a common foundation: know where AI touches decisions, ensure humans genuinely review those decisions, and produce records proving it happened. That foundation is worth building regardless of which jurisdiction reaches your customers first.


Run the free ADMT compliance assessment at admt.ai to see which of these regulations apply to your company and where the gaps are. Takes under 15 minutes.
