
What Is ADMT? The Complete Guide to Automated Decision-Making Technology

Every company using AI to make decisions about people — hiring, lending, insurance, healthcare — now faces a question that carries real financial consequences: does your system qualify as Automated Decision-Making Technology?

If the answer is yes, a rapidly expanding set of regulations imposes specific obligations on your business. Risk assessments. Consumer notices. Opt-out mechanisms. Audit trails. And penalties that start at $7,500 per violation with no aggregate cap.

If the answer is no — because your architecture ensures that humans, not algorithms, make the final call — those obligations may not apply to you at all.

This guide explains what ADMT is, which regulations define it, what triggers compliance obligations, and what your organization needs to do about it. Whether you are a CTO evaluating your AI stack, a compliance officer preparing for upcoming deadlines, or a product leader designing AI-assisted workflows, this is the context you need.

What Is ADMT? A Clear Definition

ADMT — Automated Decision-Making Technology — is any technology that processes personal information and uses computation to replace or substantially replace human decision-making.

That definition comes from California's CPRA regulations, which represent the most developed ADMT regulatory framework in the United States. But the concept is not unique to California. The EU AI Act, the Colorado AI Act, and New York City's Local Law 144 all regulate the same fundamental behavior: AI systems making consequential decisions about individuals.

The key phrase is "replace or substantially replace." This is not about AI that helps humans make decisions. It is about AI that takes over the decision itself — either by acting autonomously or by reducing the human role to a formality.

Everyday Examples of ADMT

To make this concrete, here are systems that likely qualify as ADMT:

  • A resume screening tool that automatically rejects candidates who score below a threshold, with no human review of rejected applications
  • A credit scoring algorithm that auto-declines loan applications based on a risk score, sending denial letters without a human underwriter reviewing the case
  • An insurance pricing engine that sets premiums based on AI-analyzed behavioral data, with adjusters only reviewing edge cases
  • A tenant screening service that generates accept/reject recommendations that property managers follow in 98% of cases without examining the underlying data
  • A healthcare prior authorization system that automatically denies claims based on pattern matching against policy criteria

And here are systems that likely do not qualify as ADMT:

  • An AI assistant that summarizes a patient's medical history for a physician who makes the treatment decision
  • A fraud detection system that flags suspicious transactions for a human analyst to investigate and resolve
  • A recruiting tool that organizes applications by skills match but leaves all shortlisting and interview decisions to hiring managers who review each candidate
  • An underwriting assistant that pulls relevant data points into a structured view for an underwriter who independently evaluates the risk

The distinction is not whether AI is involved. It is whether the AI is making the decision or a human is.

Why ADMT Matters Now: The Enforcement Timeline

ADMT has moved from a theoretical regulatory concept to an active enforcement priority. Three major regulatory regimes are converging, and the penalties are substantial.

The Deadlines

| Date | Regulation | What Happens |
|------|------------|--------------|
| January 1, 2026 | CPRA ADMT (California) | Core ADMT regulations took effect. Businesses must begin preparations for compliance obligations. |
| June 30, 2026 | Colorado AI Act | Obligations for deployers and developers of high-risk AI systems take effect. |
| August 2, 2026 | EU AI Act (High-Risk) | Requirements for high-risk AI systems become enforceable, including systems used in employment, credit, education, and law enforcement. |
| April 1, 2027 | CPRA Significant Decisions | Full ADMT obligations for significant decisions take effect — risk assessments, pre-use notices, opt-out, and access rights. |
| April 2028 | CPPA Assessment Submissions | Companies must submit completed risk assessments to the California Privacy Protection Agency. |

The Penalties

These are not symbolic fines.

  • California (CPRA): $7,500 per violation, no aggregate cap. The CPPA has a dedicated enforcement strike force and reported hundreds of investigations in progress as of early 2026. Recent enforcement actions include a $1.35 million fine against Tractor Supply Company and a $632,500 fine against American Honda Motor Co.
  • EU AI Act: Up to 35 million EUR or 7% of global annual turnover for prohibited practices. Up to 15 million EUR or 3% for other infringements. These penalties apply to non-EU companies offering AI systems in the EU.
  • Colorado AI Act: Violations constitute unfair trade practices under the Colorado Consumer Protection Act, enforced exclusively by the Attorney General.

The CPPA's enforcement trajectory is notable. It issued over $2.3 million in fines in 2025 alone, and its staff have testified that the pace of investigations is accelerating. ADMT-specific enforcement will follow the April 2027 compliance deadline, but companies that wait until then to begin preparation will be operating without margin.

Which Regulations Cover ADMT?

Three primary regulatory frameworks govern automated decision-making technology. Each defines the trigger slightly differently, but they share a common structure: if AI is making consequential decisions about people, specific obligations apply.

CPRA ADMT Regulations (California)

California's CPRA regulations, finalized in September 2025 and effective January 1, 2026, represent the most detailed ADMT framework in the United States.

The trigger: Technology that "replaces or substantially replaces human decision-making" for "significant decisions" — defined as decisions producing legal or similarly significant effects in employment, credit, insurance, housing, healthcare, or education.

A critical definitional shift: Early drafts of the CPRA regulations used the phrase "substantially facilitate," which would have captured any AI system that meaningfully influenced human decisions. The final regulations narrowed this to "substantially replace." CPPA staff testified that this change reduced coverage from a wide swath of businesses to roughly 10% of CCPA-covered businesses. This narrowing is significant: technology that helps humans make better decisions is explicitly distinguished from technology that replaces human judgment.

What it requires:

  • Conduct risk assessments documenting potential harms and mitigation measures
  • Provide pre-use notices to consumers before ADMT is used on them
  • Offer opt-out mechanisms allowing consumers to refuse automated processing
  • Explain the logic — meaningful information about how decisions are made
  • Provide a human appeal path for automated decisions
  • Document bias testing and fairness evaluations

Who it applies to: Any business subject to the CCPA/CPRA that uses AI to make significant decisions affecting California consumers.

EU AI Act

The EU AI Act, with high-risk system obligations taking effect August 2, 2026, takes a different approach. Rather than defining ADMT directly, it classifies AI systems into risk tiers and imposes obligations based on the use case.

The trigger: AI systems used in "high-risk" domains listed in Annex III — biometrics, critical infrastructure, education, employment, essential services, public services, law enforcement, and justice. A system deployed in these domains is presumed high-risk unless it qualifies for a specific derogation.

What it requires:

  • Mandatory conformity assessments before deployment
  • Risk management systems maintained throughout the system lifecycle
  • Data governance and quality requirements
  • Technical documentation and record-keeping
  • Transparency obligations to users
  • Human oversight requirements (Article 14)
  • Accuracy, robustness, and cybersecurity standards
  • Post-market monitoring and incident reporting
  • Registration in the EU AI database

Who it applies to: Any organization deploying AI in high-risk domains within the EU, regardless of where the organization is headquartered.

The derogation (Article 6(3)): An Annex III system is not considered high-risk if it does not pose a significant risk of harm, including by "not materially influencing the outcome of decision making." Four conditions can remove a system from high-risk classification — it performs a narrow procedural task, it improves a prior human activity, it detects patterns without replacing judgment, or it performs a preparatory task before a human decides. However, systems that perform profiling of natural persons are always high-risk, regardless of these conditions.

Colorado AI Act (SB 24-205)

Colorado's law, taking effect June 30, 2026, focuses specifically on "high-risk AI systems" that make consequential decisions.

The trigger: An AI system that "makes, or is a substantial factor in making, a consequential decision" — defined as a decision with a "material legal or similarly significant effect" on education, employment, financial services, government services, healthcare, housing, insurance, or legal services.

What it requires:

  • Implement a risk management policy and program
  • Complete impact assessments
  • Annually review each high-risk system for algorithmic discrimination
  • Notify consumers when a high-risk system makes or substantially factors into a consequential decision
  • Provide consumers the opportunity to correct incorrect personal data
  • Report to the Attorney General within 90 days of discovering discrimination

Who it applies to: "Deployers" of high-risk AI systems operating in Colorado. A safe harbor exists for organizations that comply with nationally recognized AI risk management frameworks (such as the NIST AI RMF).

Key difference from California: Colorado's "substantial factor" test, which reaches any AI output capable of altering the outcome of a decision, is more aggressive than California's "substantially replace" standard. Even if a human reviews every output in practice, a system whose output could alter the outcome (for example, a score the human could simply follow) may qualify as high-risk under Colorado law.

What Counts as a "Significant Decision"?

Across all three major ADMT regulatory frameworks, the same categories of decisions trigger compliance obligations. These are decisions that have a material impact on an individual's access to fundamental opportunities.

Employment

Hiring, termination, compensation, promotions, performance evaluation, work assignments, and scheduling. This covers AI systems used in applicant tracking, resume screening, interview analysis, workforce management, and performance review.

Credit and Lending

Loan approvals, interest rates, credit limits, credit scoring, and adverse action determinations. Any AI system that influences whether someone receives credit and on what terms falls into this category.

Insurance

Underwriting decisions, claims adjudication, premium pricing, and coverage determinations. AI used to assess risk profiles, price policies, or evaluate claims is covered.

Housing

Rental applications, tenant screening, mortgage qualification, and property valuations that affect housing access. This includes AI-powered tenant screening services and automated property assessment tools.

Healthcare

Treatment recommendations, prior authorization decisions, triage, coverage determinations, and care management. AI systems that influence what care a patient receives — or whether they receive it — trigger ADMT obligations.

Education

Admissions decisions, grading, academic placement, disciplinary actions, and financial aid determinations. AI used to evaluate student applications, assess academic performance, or allocate educational resources is covered.

The common thread: these are all domains where an adverse decision can meaningfully alter someone's life trajectory. Regulators have drawn the line around decisions that affect access to opportunity, not around every use of AI.

The Three-Prong Test: When AI Is Not Considered ADMT

This is where the regulatory framework becomes genuinely useful for companies that want to use AI responsibly without triggering the full weight of ADMT obligations.

California's CPRA regulations define a specific test for when human involvement removes a system from ADMT classification entirely. This is not about adding a perfunctory "approve" button to an automated workflow. It is about proving that a human being is genuinely making the decision.

The Three Prongs

For a system to be excluded from ADMT regulation, the human decision-maker must satisfy all three of the following conditions:

Prong A — Know: The human knows how to interpret the technology's outputs. They understand what the AI is telling them, what its confidence levels mean, and what its limitations are. This requires training, not just access.

Prong B — Analyze: The human reviews the AI's output AND other relevant information. They do not rely solely on what the algorithm says. They examine additional data, context, and factors beyond the AI recommendation.

Prong C — Authority: The human has genuine authority to make or change the decision based on their analysis. This is not a nominal authority where disagreeing with the AI triggers an escalation or creates friction. The human must be genuinely empowered to decide differently.
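
As a rough illustration (and not legal guidance), the three prongs can be read as a conjunctive check over a review record. A minimal sketch follows; the record fields (trained_on_outputs, sources_consulted, can_overrule) are hypothetical names, not terms from the regulation.

```python
from dataclasses import dataclass

@dataclass
class HumanReview:
    """Hypothetical record of one human review of an AI recommendation."""
    trained_on_outputs: bool       # Prong A: trained to interpret the AI's outputs
    sources_consulted: list[str]   # Prong B: information examined beyond the AI output
    can_overrule: bool             # Prong C: empowered to decide differently, without friction

def satisfies_three_prong_test(review: HumanReview) -> bool:
    # Conjunctive test: failing any single prong leaves the system in ADMT scope.
    knows = review.trained_on_outputs
    analyzes = len(review.sources_consulted) > 0   # must not rely on the AI alone
    has_authority = review.can_overrule
    return knows and analyzes and has_authority
```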

What This Looks Like in Practice

A hiring process where an AI ranks candidates, a trained recruiter reviews each candidate's full application alongside the AI score, considers factors the AI did not evaluate, and independently decides who to advance — with documented instances of diverging from the AI recommendation — satisfies the three-prong test.

A lending workflow where an AI generates a risk score, the screen auto-populates a denial letter, and an underwriter clicks "approve" or "deny" within three seconds without reviewing the underlying application — does not satisfy the three-prong test. The human review is perfunctory, and the AI is functionally making the decision.

The Rubber-Stamping Problem

The three-prong test exists precisely because regulators understand the gap between "human in the loop" as a policy and "human in the loop" as a reality.

Research consistently shows that humans view automated systems as authoritative and defer to AI recommendations even when they suspect errors. If a human agrees with the AI 99% of the time and spends an average of four seconds on each "review," regulators will argue — with justification — that the AI is substantially replacing human judgment regardless of the organizational chart.

This is why architecture matters more than policy. A policy that says "humans make all final decisions" is insufficient if the system design makes genuine review impractical. The infrastructure must actively support — and ideally enforce — meaningful human engagement with each decision.

Evidence that demonstrates genuine human involvement includes (see the sketch after this list):

  • Dwell time distributions showing reviewers spend meaningful time (not seconds) on each decision
  • Override rates showing reviewers disagree with AI recommendations at a non-trivial rate (15-25% is a common benchmark)
  • Engagement tracking showing reviewers interact with the AI's reasoning, underlying data, and risk factors
  • Divergence documentation capturing the reviewer's rationale when they disagree with the AI
  • Training records demonstrating reviewers understand how to interpret AI outputs
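
A minimal sketch of how two of these signals, dwell time and override rate, might be computed from review logs. The log schema here is an assumption for illustration, not a format any regulation prescribes.

```python
from statistics import median

# Assumed log schema: one entry per review, recording seconds spent and
# whether the reviewer's final decision diverged from the AI recommendation.
reviews = [
    {"dwell_seconds": 95, "overrode_ai": False},
    {"dwell_seconds": 210, "overrode_ai": True},
    {"dwell_seconds": 4, "overrode_ai": False},
]

override_rate = sum(r["overrode_ai"] for r in reviews) / len(reviews)
median_dwell = median(r["dwell_seconds"] for r in reviews)

# Override rates in the 15-25% range and dwell times measured in minutes,
# not seconds, are the kinds of distributions described above.
print(f"override rate: {override_rate:.0%}, median dwell: {median_dwell}s")
```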

What Companies Need to Do

If your organization uses AI in any of the significant decision categories — employment, credit, insurance, housing, healthcare, education — here is what the regulatory landscape requires.

1. Inventory Your AI Decision Points

Before you can assess compliance, you need a clear picture of where AI touches decisions about people. This includes:

  • AI systems you built internally
  • Third-party AI tools integrated into your workflows (vendor AI counts)
  • AI features embedded in existing software platforms you use
  • Any system that generates scores, rankings, classifications, or recommendations that influence decisions about individuals

Many organizations are surprised by how many AI decision points they have. The vendor's AI-powered feature that "just helps" your team work faster may qualify as ADMT if it substantially replaces the judgment that a human previously exercised.
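
One way to structure that inventory is sketched below. The fields and example entry are hypothetical; the point is that the final decision-maker field is what flags ADMT candidates.

```python
from dataclasses import dataclass

@dataclass
class AIDecisionPoint:
    """Hypothetical inventory entry: one place AI touches decisions about people."""
    system_name: str
    vendor: str | None         # None for internally built systems
    decision_domain: str       # e.g. "employment", "credit", "housing"
    output_type: str           # "score", "ranking", "classification", "recommendation"
    final_decision_maker: str  # "human" or "system"

inventory = [
    AIDecisionPoint("resume-screener", "ExampleVendor", "employment",
                    "ranking", final_decision_maker="human"),
]

# Entries where the system makes the final call are candidates for ADMT scope.
admt_candidates = [p for p in inventory if p.final_decision_maker == "system"]
```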

2. Conduct Risk Assessments

All three major frameworks require some form of risk assessment or impact assessment. The specifics vary, but the core elements are consistent:

  • Description of the AI system — what it does, what data it processes, what outputs it generates
  • Purpose and intended use — why you use it and for what decisions
  • Potential harms — what could go wrong, who could be affected, and how severely
  • Mitigation measures — what you have done to reduce those risks
  • Bias evaluation — testing for disparate impact across protected classes
  • Data governance — how training data is sourced, validated, and maintained

California requires these assessments to be submitted to the CPPA by April 2028. Colorado requires annual review. The EU AI Act requires ongoing risk management throughout the system lifecycle.
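
A skeletal template for these common elements, sketched as a plain dictionary. The keys paraphrase the list above; they are not an official CPPA or Colorado submission format.

```python
risk_assessment = {
    "system_description": "What the system does, data processed, outputs generated",
    "purpose_and_intended_use": "Why it is used and for which decisions",
    "potential_harms": ["who could be affected", "how severely", "failure modes"],
    "mitigation_measures": ["controls that reduce each identified risk"],
    "bias_evaluation": {"protected_classes_tested": [], "disparate_impact_found": None},
    "data_governance": "How training data is sourced, validated, and maintained",
    "last_reviewed": "2026-06-30",  # Colorado requires at least annual review
}
```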

3. Implement Consumer Rights

Beginning June 30, 2026 (Colorado) and April 1, 2027 (California), consumers gain specific rights regarding automated decisions (a sketch of the opt-out fallback follows the list):

  • Pre-use notice: Inform consumers before ADMT is used to make a decision about them. This is not a buried clause in a privacy policy — it must be clear, conspicuous, and timely.
  • Opt-out: Allow consumers to refuse automated processing for significant decisions. You must have a process for handling these requests, including a fallback to human-only decision-making.
  • Access and explanation: Consumers can request information about the logic of the ADMT, what data it used, and how its outputs influenced the decision.
  • Correction: Consumers can correct inaccurate personal data that the AI system processed.
  • Appeal: Provide a path for consumers to have an automated decision reviewed by a human.
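
A minimal sketch of an opt-out registry with a human-only fallback, assuming an in-memory store and illustrative function names. A real system would persist opt-outs and route to an actual review queue.

```python
opted_out: set[str] = set()

def handle_opt_out(consumer_id: str) -> None:
    """Record a consumer's refusal of automated processing for significant decisions."""
    opted_out.add(consumer_id)

def ai_score(application: dict) -> float:
    return 0.72   # stand-in for a model; illustrative only

def human_decision(application: dict, ai_recommendation: float | None) -> str:
    # A human makes the call; the AI score is absent for opted-out consumers.
    return "approved"   # placeholder for the reviewer's actual judgment

def decide(consumer_id: str, application: dict) -> str:
    if consumer_id in opted_out:
        # Opt-out honored: human-only path, no automated scoring at all.
        return human_decision(application, ai_recommendation=None)
    return human_decision(application, ai_recommendation=ai_score(application))
```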

4. Build and Maintain Audit Trails

Every regulation assumes you can demonstrate what happened in each decision. This means maintaining records that show:

  • What the AI recommended and why
  • What data the AI processed
  • Who reviewed the recommendation (if anyone)
  • What the final decision was
  • Whether the human agreed or disagreed with the AI
  • The basis for the decision

These records must be maintained for the duration required by each applicable regulation and must be producible in response to regulatory inquiries.
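
As one illustration, the sketch below hash-chains each audit record to its predecessor so after-the-fact edits are detectable. It uses only the standard library and is a toy, not a production retention design.

```python
import hashlib
import json
import time

audit_log: list[dict] = []

def append_audit_record(ai_recommendation: str, data_reviewed: list[str],
                        reviewer: str, final_decision: str, basis: str) -> dict:
    prev_hash = audit_log[-1]["hash"] if audit_log else "0" * 64
    record = {
        "timestamp": time.time(),
        "ai_recommendation": ai_recommendation,
        "data_reviewed": data_reviewed,
        "reviewer": reviewer,
        "final_decision": final_decision,
        "agreed_with_ai": final_decision == ai_recommendation,
        "basis": basis,
        "prev_hash": prev_hash,
    }
    # Chaining each record to the previous one makes tampering detectable.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    audit_log.append(record)
    return record
```

Verification replays the chain and recomputes each hash; any edited or deleted record breaks every hash that follows it.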

5. Evaluate Your Architecture

This is the strategic question that separates reactive compliance from proactive risk management. There are two paths:

Path 1: Comply with ADMT regulations. Accept that your system qualifies as ADMT and implement all required obligations — risk assessments, consumer notices, opt-out mechanisms, appeal processes, bias testing, and audit trails. This is the correct path for fully automated systems where removing the human from the loop is the point.

Path 2: Architect your system so it does not qualify as ADMT. If a human genuinely makes the final decision — supported by infrastructure that enforces meaningful review — ADMT regulations may not apply. This is not a shortcut. Done properly, it requires more architectural rigor than compliance alone. But it produces better outcomes: genuine human oversight, cryptographic proof of review, and a decision architecture that is more defensible than any compliance checklist.

The advisory-only architecture approach treats the AI as a tool that informs human judgment rather than a system that replaces it. The AI writes to a recommendation surface. Decisions are created only by authenticated human actions that demonstrate meaningful review. Write isolation, dwell time enforcement, engagement tracking, and cryptographic audit trails make this provable — not as a policy claim, but as an infrastructure fact.
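
A compressed sketch of that write isolation: the AI can write only to a recommendation surface, and a decision record is created only by an authenticated human action that clears a minimum dwell time. The store names, threshold, and checks here are illustrative assumptions, not a reference implementation.

```python
import time

recommendations: dict[str, str] = {}   # the AI may write here...
decisions: dict[str, str] = {}         # ...but never here

def ai_recommend(case_id: str, recommendation: str) -> None:
    recommendations[case_id] = recommendation   # advisory surface only

def record_decision(case_id: str, decision: str, reviewer_id: str,
                    review_started_at: float,
                    min_dwell_seconds: float = 60.0) -> None:
    if not reviewer_id:
        raise PermissionError("decisions require an authenticated human reviewer")
    if time.time() - review_started_at < min_dwell_seconds:
        # Block rubber-stamping: a decision submitted seconds after opening
        # the case cannot demonstrate meaningful review.
        raise ValueError("review too brief to demonstrate meaningful engagement")
    decisions[case_id] = decision
```

In a real deployment the two stores would be separate services with distinct credentials, so the isolation is enforced by infrastructure rather than by convention.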

Industry Impact: Which Sectors Are Most Affected

Financial Services and Lending

The highest exposure. AI-powered credit scoring and automated underwriting are widespread, and multiple overlapping regulations apply — CPRA ADMT, ECOA, CFPB guidance, and state-level consumer protection laws. The CFPB has explicitly stated there is "no advanced technology exception" — whether AI is advisory or auto-decisioning, lenders must provide adverse action notices and face disparate impact liability.

Insurance

Underwriting, claims adjudication, and pricing are increasingly AI-driven. Insurers face CPRA ADMT obligations, state insurance commissioner oversight, and the Colorado AI Act. The industry traditionally operates with human underwriters making final calls, which positions it well for the advisory-only approach — but only if the human review is genuine and documented.

HR Tech and Employment

The most legally complex sector. Federal anti-discrimination law (Title VII, ADA, ADEA) applies to AI-assisted hiring regardless of whether the system is advisory or autonomous. NYC Local Law 144 covers tools that "substantially assist" employment decisions. EEOC guidance emphasizes that disparate impact liability attaches even when AI provides recommendations that humans follow. Advisory-only architecture reduces ADMT exposure but does not eliminate employment discrimination obligations.

Healthcare

Clinical AI tools that inform physician decisions have the clearest regulatory path. The FDA's Clinical Decision Support exemption under the 21st Century Cures Act explicitly carves out advisory AI from medical device regulation — provided the system presents recommendations (not diagnoses) and allows clinicians to independently review the basis of the recommendation. For non-clinical healthcare AI (prior authorization, claims processing, care management), CPRA ADMT obligations apply.

Real Estate and Property Management

Tenant screening services, automated property valuations, and mortgage qualification tools all fall within ADMT scope. Fair housing laws add additional obligations beyond ADMT regulations, particularly regarding disparate impact on protected classes.

Education

Admissions scoring, automated grading, academic placement algorithms, and AI-driven disciplinary systems all qualify as consequential decisions under CPRA and Colorado. The EU AI Act classifies education AI as high-risk in Annex III.


Key Takeaways

1. ADMT is specifically defined. It is technology that replaces or substantially replaces human decision-making — not all AI, and not AI that assists humans. The definition matters because it determines whether compliance obligations apply.

2. Enforcement is real and accelerating. The CPPA has issued millions in fines, has hundreds of investigations underway, and ADMT-specific obligations begin in 2027. The EU AI Act brings 7% global turnover penalties in August 2026. Colorado takes effect June 2026.

3. Six decision categories trigger ADMT. Employment, credit, insurance, housing, healthcare, and education. If your AI touches any of these, you need to assess your compliance posture.

4. Architecture determines classification. The three-prong test for meaningful human involvement can remove a system from ADMT classification entirely — but only if the architecture proves genuine review, not just a checkbox.

5. Two valid paths exist. You can comply with ADMT regulations directly, or you can architect your system so it does not qualify as ADMT. Both are legitimate. The second is harder to build but produces stronger outcomes.

6. Start with an inventory. Most organizations do not have a complete picture of where AI influences decisions about people. That inventory is the prerequisite for everything else.


Frequently Asked Questions About ADMT

Does all AI qualify as ADMT?

No. ADMT specifically refers to technology that replaces or substantially replaces human decision-making for significant decisions. AI tools that assist humans — by organizing data, surfacing patterns, generating summaries, or presenting recommendations for human review — are generally not classified as ADMT, provided the human is genuinely making the decision. The distinction is between AI that decides and AI that informs.

What is the penalty for ADMT non-compliance?

Penalties vary by jurisdiction. Under California's CPRA, each violation carries a $7,500 fine with no aggregate cap; at 1,000 violating automated decisions, that is $7.5 million in exposure, so liability scales directly with processing volume. The EU AI Act imposes fines of up to 35 million EUR or 7% of global annual turnover, whichever is higher. Colorado treats violations as unfair trade practices under consumer protection law, enforced by the Attorney General. Beyond direct fines, companies face litigation risk from affected individuals and reputational consequences.

When do ADMT regulations take effect?

The timeline is staggered. California's core ADMT regulations took effect January 1, 2026. Colorado's AI Act takes effect June 30, 2026. The EU AI Act's high-risk obligations become enforceable August 2, 2026. California's full significant decision obligations — including consumer opt-out, pre-use notice, and access rights — take effect April 1, 2027. Risk assessment submissions to the CPPA are due by April 2028.

Can I avoid ADMT classification by adding a human reviewer?

Adding a human reviewer is necessary but not sufficient. California's three-prong test requires that the reviewer (a) knows how to interpret the AI output, (b) analyzes the output alongside other relevant information, and (c) has genuine authority to make or change the decision. A perfunctory review — where the human rubber-stamps the AI recommendation in seconds without examining the underlying data — does not satisfy the test. The architecture must support and demonstrate meaningful human engagement.

Do ADMT regulations apply to AI tools from third-party vendors?

Yes. If your organization deploys a third-party AI tool that makes or substantially replaces human decisions for significant decisions, ADMT obligations apply to your organization as the deployer. You cannot outsource compliance responsibility to the vendor. This means you need to understand what the vendor's AI does, how it generates outputs, and whether your internal processes satisfy the meaningful human involvement test. Under the Colorado AI Act, developers (vendors) have separate obligations, including providing deployers with the information needed to comply.


Take the free ADMT compliance assessment at admt.ai to identify your gaps in under 15 minutes. The assessment maps your AI systems against current regulations, scores your compliance posture, and delivers a prioritized remediation plan — so you know exactly where to focus before enforcement deadlines arrive.
