CPRA ADMT Compliance: Everything You Need to Know in 2026

If your company uses AI or algorithms to make decisions about people — in hiring, lending, insurance, housing, healthcare, or education — California's Automated Decision-Making Technology (ADMT) regulations now apply to you. And as of January 2026, enforcement is live.

This guide explains what the CPRA ADMT regulations require, who they apply to, what deadlines matter, and how to comply. It is written for engineering leaders, compliance officers, and legal teams at companies deploying AI in consumer-facing decisions.


What Is ADMT Under California Law?

California's ADMT regulations stem from the California Privacy Rights Act (CPRA), which empowered the California Privacy Protection Agency (CPPA) to create detailed rules governing automated decision-making. The CPPA Board adopted the final regulations on July 24, 2025, the Office of Administrative Law approved them on September 22, 2025, and they took effect January 1, 2026.

Under Section 7001(e) of Title 11 of the California Code of Regulations (11 CCR 7001(e)), Automated Decision-Making Technology is defined as any technology that processes personal information and uses computation to replace or substantially replace human decision-making.

Two elements are required for a system to qualify as ADMT:

  1. It processes personal information. The system takes in data about identifiable individuals.
  2. It replaces or substantially replaces human decision-making. The system's output is the decision, or functions as the decision for all practical purposes.

This definition matters enormously because of a critical change the CPPA made during the rulemaking process.

The Narrowing: From "Facilitate" to "Replace"

Early drafts of the regulation used the phrase "substantially facilitate" — meaning any AI tool that meaningfully helped a human make a decision would have been covered. The final regulations narrowed this to "substantially replace," a much higher bar.

During the rulemaking proceedings, CPPA staff testified that this change reduced coverage from a wide swath of CCPA-covered businesses to roughly 10% of them. Advisory tools — systems that provide recommendations, analysis, or scoring for a human decision-maker — are explicitly excluded from the ADMT definition, provided there is genuine human involvement in the final decision.

This narrowing created a meaningful architectural distinction: systems that inform human decisions are not ADMT. Systems that make or effectively make decisions for humans are ADMT.


Timeline: What Is Already in Effect and What Is Coming

Understanding the phased enforcement timeline is essential for planning. The obligations do not all land at once.

January 1, 2026 — Risk Assessment Requirements (In Effect Now)

Businesses that process personal information in ways that present "significant risk to consumers' privacy" must begin conducting risk assessments. Under Section 7150, activities that trigger the risk assessment requirement include:

  • Using ADMT for a significant decision concerning a consumer
  • Selling or sharing personal information
  • Processing sensitive personal information
  • Using automated processing to infer characteristics about consumers from systematic observation (e.g., in employment or educational contexts)
  • Processing personal information intended to train ADMT for significant decision-making

This is not a future deadline. Risk assessments are required now, and the CPPA's enforcement division is actively investigating compliance.

January 1, 2027 — Consumer Rights (ADMT-Specific)

The consumer-facing obligations under Sections 7220, 7221, and 7222 take effect on this date. Businesses using ADMT for significant decisions must:

  • Deliver pre-use notices before using ADMT on a consumer
  • Provide at least two methods for consumers to opt out of automated processing
  • Respond to consumer requests for access to information about how ADMT was used on them specifically

April 2028 — Risk Assessment Attestation Submission

Businesses must submit risk assessment attestations to the CPPA. Under Section 7157, a corporate officer must attest, under penalty of perjury, that the business has conducted the required risk assessments. This is not a self-assessment sitting in a drawer — it is a sworn submission to a regulatory agency with enforcement authority.


Who Does This Apply To?

The ADMT regulations apply to any business (as defined under the CCPA) that uses automated decision-making technology to make significant decisions about consumers. The CCPA's definition of "business" captures for-profit entities that collect California consumers' personal information and meet any one of the following: annual gross revenue exceeding $25 million, annually buying/selling/sharing the personal information of 100,000 or more consumers or households, or deriving 50% or more of annual revenue from selling or sharing personal information.

If your company meets any of these thresholds and uses AI, algorithms, or automated systems in decisions that affect individuals, the ADMT regulations are likely relevant.

What Counts as a "Significant Decision"?

The regulations define "significant decisions" as decisions that produce legal or similarly significant effects on consumers in the following domains:

  • Employment — hiring, termination, compensation, promotions, work assignments, performance evaluation
  • Credit and lending — loan approvals, interest rate determination, credit limit decisions
  • Insurance — underwriting, claims adjudication, pricing, coverage decisions
  • Housing — rental applications, mortgage approvals, tenant screening
  • Healthcare — treatment recommendations, coverage decisions, prior authorization
  • Education — admissions, grading, disciplinary actions, financial aid

Notably, advertising is excluded. Automated ad targeting, even when it uses personal information and algorithmic processing, does not qualify as a "significant decision" under these regulations.

Concrete Examples

| Scenario | ADMT? | Why |
|----------|-------|-----|
| AI auto-rejects loan applications below a score threshold | Yes | Algorithm replaces human judgment entirely |
| AI ranks candidates; recruiter reviews top 10 with full profiles | Likely no | Human reviews with additional information and authority to decide |
| AI flags insurance claims as potentially fraudulent; adjuster reviews each flag | Likely no | System detects patterns; human investigates and decides |
| AI auto-approves rental applications with no human review | Yes | Algorithm replaces human decision-making |
| AI scores candidates and auto-advances top 20%; recruiter never sees the rest | Yes | Algorithm makes the cut decision for 80% of applicants |

The key question is always: Is a qualified human genuinely deciding, or is the system deciding and a human merely ratifying?


The Three-Prong Test for Meaningful Human Involvement

This is the most consequential provision in the entire regulation. Section 7001(e) establishes a three-prong test that determines whether technology is removed from ADMT classification entirely. If a human decision-maker satisfies all three prongs simultaneously, the system is not ADMT and the regulatory obligations do not apply.

The three prongs are:

Prong A: Interpretive Competence

The human must "know how to interpret and use the technology's output" to make the decision.

This means the reviewer has been trained on the specific system. They understand what the outputs mean, what confidence scores represent, what the known limitations are, and how the system can fail. Generic AI literacy training does not satisfy this prong — the training must be specific to the system being reviewed.

What proves it: System-specific training records, completion certificates, competency assessments showing the reviewer can interpret outputs correctly and identify edge cases.

Prong B: Analytical Rigor

The human must "review and analyze the output of the technology, and any other information that is relevant" to make or change the decision.

This prong has two components. The reviewer must examine the AI's output and they must examine additional information beyond what the AI produced. A reviewer who looks only at the AI recommendation — without consulting the underlying data, the consumer's application, contextual factors, or other relevant sources — fails Prong B.

What proves it: Decision logs showing what information the reviewer consulted, timestamps showing meaningful review duration, engagement records showing interaction with both AI output and non-AI data.

Prong C: Decision Authority

The human must "have the authority to make or change the decision" based on that analysis.

The reviewer must have genuine organizational authority to override the AI's recommendation. This is not satisfied by a junior employee rubber-stamping decisions in a queue. The CPPA commentary clarifies that this oversight should fall to senior staff, not low-level reviewers. The authority must be real, not nominal.

What proves it: Role specifications, delegation documents, organizational authority documentation, and critically, a non-zero override rate demonstrating that the authority is actually exercised.

The Test Is Conjunctive

All three prongs must be satisfied simultaneously, for every decision. Satisfying two out of three is not sufficient. A reviewer who is well-trained (Prong A) and has authority (Prong C) but does not consult additional information (Prong B) fails the test. The system remains ADMT.
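For teams encoding the classification step in tooling, the conjunctive logic is straightforward to express. The sketch below is illustrative Python, not regulatory language; the `HumanReviewEvidence` structure and its field names are assumptions about how a business might record prong evidence.

```python
from dataclasses import dataclass

@dataclass
class HumanReviewEvidence:
    # Prong A: trained on this specific system, not generic AI literacy
    has_system_specific_training: bool
    # Prong B: examined the AI output AND information beyond it
    reviewed_ai_output: bool
    consulted_additional_information: bool
    # Prong C: genuine organizational authority to override
    has_override_authority: bool

def is_admt(processes_personal_info: bool, uses_computation: bool,
            review: HumanReviewEvidence | None) -> bool:
    """A system is ADMT if it processes personal information, uses
    computation, and no reviewer satisfies all three prongs."""
    if not (processes_personal_info and uses_computation):
        return False
    if review is None:
        return True  # no human in the loop at all
    meaningful_involvement = (
        review.has_system_specific_training          # Prong A
        and review.reviewed_ai_output                # Prong B, part 1
        and review.consulted_additional_information  # Prong B, part 2
        and review.has_override_authority            # Prong C
    )
    return not meaningful_involvement
```

A single `False` in any prong leaves the system classified as ADMT, which is exactly the conjunctive behavior the regulation describes.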

Red Flags That Destroy a HITL Claim

Regulators and auditors know what rubber-stamping looks like. The following patterns will undermine any claim of meaningful human involvement:

| Red Flag | Why It Is Fatal |
|----------|-----------------|
| Zero overrides | If the reviewer has never overridden the system across hundreds or thousands of decisions, their "authority" is theoretical, not real |
| Sub-minute review times | Reviewing an AI recommendation, consulting additional information, and making an independent judgment takes time. Reviews completed in under 60 seconds are not meaningful analysis |
| Agreement rate above 99% | Statistical rubber-stamping. If a human agrees with the AI 99.5% of the time, regulators will argue the system is making the decision and the human is a formality |
| No training records | No evidence the reviewer understands what the outputs mean — Prong A fails immediately |
| Junior or unauthorized reviewers | Clerical staff processing a queue without genuine decision authority — Prong C fails |
| No additional information consulted | Prong B explicitly requires reviewing information beyond the AI output. Logs showing the reviewer only looked at the recommendation are disqualifying |

The regulation language is direct: "There is no meaningful human involvement where no one can override the model or change the outcome." The standard demands informed review, independent judgment, and genuine authority to change outcomes.
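Because these red flags are statistical, they can be monitored continuously from decision logs. Below is a minimal sketch, assuming each log record carries an `agreed_with_ai` flag and a `review_seconds` duration (both hypothetical field names); the thresholds mirror the table above and should be tuned to your own risk posture.

```python
from statistics import median

def hitl_red_flags(decisions: list[dict]) -> list[str]:
    """Scan decision logs for rubber-stamp patterns. Each record is
    assumed to carry 'agreed_with_ai' (bool) and 'review_seconds' (float)."""
    if not decisions:
        return ["no decision logs at all"]
    flags = []
    overrides = sum(1 for d in decisions if not d["agreed_with_ai"])
    if overrides == 0:
        flags.append("zero overrides: authority looks theoretical, not real")
    elif overrides / len(decisions) < 0.01:
        flags.append("agreement rate above 99%: statistical rubber-stamping")
    if median(d["review_seconds"] for d in decisions) < 60:
        flags.append("median review under 60 seconds: not meaningful analysis")
    return flags
```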


Risk Assessment Requirements (Sections 7150-7157)

Even if your system satisfies the three-prong test and avoids ADMT classification, you may still need to conduct risk assessments if your processing of personal information presents "significant risk to consumers' privacy."

When Is a Risk Assessment Required?

Under Section 7150, a risk assessment is triggered when a business engages in any of the following (a checklist sketch follows the list):

  1. Selling or sharing personal information
  2. Processing sensitive personal information (with narrow exceptions for employee data)
  3. Using ADMT for a significant decision concerning a consumer
  4. Using automated processing to infer or extrapolate characteristics about a consumer from systematic observation in an educational, applicant, or employment context
  5. Using automated processing to infer characteristics based on a consumer's presence at a sensitive location
  6. Processing personal information to train ADMT for significant decision-making
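Because any single trigger is sufficient, the check reduces to a set-membership test. A minimal sketch, where the enum names are illustrative labels rather than regulatory text:

```python
from enum import Enum, auto

class RiskTrigger(Enum):
    """Labels for the Section 7150 triggering activities
    (illustrative names, not regulatory text)."""
    SELLING_OR_SHARING_PI = auto()
    PROCESSING_SENSITIVE_PI = auto()
    ADMT_FOR_SIGNIFICANT_DECISION = auto()
    INFERENCE_IN_WORK_OR_EDUCATION_CONTEXT = auto()
    INFERENCE_FROM_SENSITIVE_LOCATION = auto()
    TRAINING_ADMT_FOR_SIGNIFICANT_DECISIONS = auto()

def risk_assessment_required(triggers: set[RiskTrigger]) -> bool:
    # Any single triggering activity is sufficient under Section 7150.
    return bool(triggers)
```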

What Must Be in the Risk Assessment?

The risk assessment must evaluate whether the privacy risks of the processing outweigh its benefits to the consumer, the business, and other stakeholders. If the risks outweigh the benefits, the business may not proceed unless it can implement measures to sufficiently mitigate the risks.

The assessment must include the following elements (a structured sketch follows the list):

  • A description of the processing activity and its purpose
  • The categories of personal information processed
  • The benefits of the processing to the consumer, the business, and the public
  • The potential risks to consumers' privacy
  • The safeguards the business has implemented to mitigate those risks
  • Whether the benefits outweigh the risks after accounting for safeguards
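These content requirements map naturally onto a structured record that engineering and legal teams can review together. A minimal sketch, with illustrative field names that are our own, not regulatory language:

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """Skeleton of the required assessment content."""
    processing_description: str
    purpose: str
    pi_categories: list[str]
    benefits: list[str]       # to the consumer, the business, and the public
    privacy_risks: list[str]
    safeguards: list[str]
    benefits_outweigh_risks: bool  # the conclusion, net of safeguards

    def may_proceed(self) -> bool:
        # If risks outweigh benefits even after safeguards, the business
        # may not proceed with the processing.
        return self.benefits_outweigh_risks
```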

Timing and Updates

Under Section 7155, risk assessments must be updated whenever there is a material change to the processing activity. The update must be completed as soon as feasibly possible, with an outer bound of 45 calendar days after the material change.

Retention

Businesses must retain each risk assessment — including original and updated versions — for as long as the processing continues or for five years after the completion of the assessment, whichever is later.
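Both deadlines are mechanical enough to compute and alert on. A small sketch of the Section 7155 update deadline and the retention rule, using Python's standard `datetime`; the function names are our own:

```python
from datetime import date, timedelta

def update_deadline(material_change: date) -> date:
    # Section 7155: update as soon as feasibly possible, and no later
    # than 45 calendar days after the material change.
    return material_change + timedelta(days=45)

def retention_end(assessment_completed: date, processing_ends: date) -> date:
    # Retain for five years after completion or for as long as the
    # processing continues, whichever is later.
    try:
        five_years_out = assessment_completed.replace(
            year=assessment_completed.year + 5)
    except ValueError:  # assessment completed on Feb 29
        five_years_out = assessment_completed.replace(
            year=assessment_completed.year + 5, day=28)
    return max(five_years_out, processing_ends)
```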

Executive Attestation (Section 7157)

Beginning April 2028, businesses must submit risk assessment attestations to the CPPA. The attestation requires a corporate officer to certify, under penalty of perjury, that the business has conducted the required assessments. The exact language mandated by the regulation:

"I attest that the business has conducted a risk assessment for the processing activities set forth in California Code of Regulations, Title 11, section 7150, subsection (b), during the time period covered by this submission, and that I meet the requirements of section 7157, subsection (c). Under penalty of perjury under the laws of the state of California, I hereby declare that the risk assessment information submitted is true and correct."

This is not a checkbox exercise. A corporate officer is putting their name on a sworn statement to a regulatory agency. If the underlying risk assessment is deficient, incomplete, or inaccurate, the personal and organizational exposure is significant.


Consumer Rights Coming in 2027 (Sections 7220-7222)

Starting January 1, 2027, businesses using ADMT for significant decisions must provide three categories of consumer rights. These are distinct from existing CCPA consumer rights and require new operational capabilities.

Section 7220: Pre-Use Notice

Before using ADMT on a consumer, the business must deliver a prominent, conspicuous notice that includes:

  • Purpose: A specific, plain-language explanation of what the ADMT does and why. Generic statements like "to improve our services" are prohibited.
  • How it works: What categories of personal information influence the outputs, what outputs the ADMT generates, and how those outputs feed the decision.
  • Alternative process: What happens if the consumer opts out — what human decision-making process replaces the automated one.
  • Opt-out link: A direct link to the opt-out mechanism.
  • Access rights: A statement that consumers can request information about the system's logic, output, and purposes.
  • Anti-retaliation: An affirmative statement that the business will not retaliate against consumers who exercise their rights.

The notice must be delivered at or before the point of data collection for ADMT purposes, via the primary method the business uses to interact with the consumer.
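A pre-deployment check can catch missing or generic notice elements before a notice ships. The sketch below assumes a notice represented as a dictionary with illustrative field names; the blocklist of generic purpose strings is an example of our own, not an official list:

```python
REQUIRED_NOTICE_FIELDS = [
    "purpose", "how_it_works", "alternative_process",
    "opt_out_link", "access_rights_statement", "anti_retaliation_statement",
]
# Example blocklist only; the regulation prohibits generic purpose
# statements but does not publish such a list.
GENERIC_PURPOSES = {"to improve our services", "for business purposes"}

def notice_gaps(notice: dict) -> list[str]:
    """Return the missing or deficient Section 7220 notice elements."""
    gaps = [f for f in REQUIRED_NOTICE_FIELDS if not notice.get(f)]
    if (notice.get("purpose") or "").strip().lower() in GENERIC_PURPOSES:
        gaps.append("purpose is generic; a specific plain-language "
                    "explanation is required")
    return gaps
```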

Section 7221: Right to Opt Out

Consumers can refuse automated decision-making. The operational requirements are specific:

  • The business must offer at least two easy opt-out methods, and at least one must match the primary interaction channel.
  • Identity verification is not required — consumers do not need to prove who they are to opt out. This is a lower bar than access requests.
  • If ADMT processing has already started, the business must cease within 15 business days.
  • The business must notify service providers, contractors, and other processors and instruct them to comply within the same 15-day window (downstream propagation).
  • The business must actually provide the alternative human decision-making process described in the pre-use notice.

Exceptions for HR and education: Businesses may decline the opt-out for admission, hiring, work allocation, and compensation decisions — but only if the ADMT works as intended and does not unlawfully discriminate. When this exception applies, the business must instead offer a right to appeal to a qualified human reviewer who satisfies the same three-prong test (interpretive competence, analytical rigor, decision authority). Appeal timelines: 10 business days to confirm receipt, 45 calendar days to respond (extendable to 90).
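The opt-out and appeal clocks mix business days and calendar days, which is easy to get wrong in ticketing systems. A minimal sketch of the deadline math, assuming weekends-only business-day handling (holiday calendars omitted):

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Advance by business days, skipping weekends only; a production
    version would also skip observed holidays."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            days -= 1
    return current

def opt_out_cease_by(request_received: date) -> date:
    # Section 7221: stop ADMT processing, and instruct downstream
    # processors to comply, within 15 business days.
    return add_business_days(request_received, 15)

def appeal_deadlines(appeal_received: date, extended: bool = False) -> dict:
    # HR/education appeal path: confirm receipt within 10 business days;
    # respond within 45 calendar days, or 90 with a proper extension.
    return {
        "confirm_receipt_by": add_business_days(appeal_received, 10),
        "respond_by": appeal_received + timedelta(days=90 if extended else 45),
    }
```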

Section 7222: Right to Access ADMT Information

Consumers can request detailed information about how ADMT was used on them specifically. This is not a request for a generic description of the system — it is a request for consumer-specific information. Required disclosures include:

  1. Purpose — why the ADMT was used on this consumer
  2. Output — what the ADMT actually produced for this specific consumer
  3. How the output was used — was it the sole factor or one of many? What other factors contributed?
  4. Logic — how the system processed this consumer's personal information, what key parameters affected their output
  5. Outcome — what the business ultimately decided
  6. Human involvement — the degree and nature of any human role in the decision

The response must be in plain language, specific to the requesting consumer, and delivered within 45 calendar days (extendable to 90 with proper notice). Identity verification is required for access requests.

Record-keeping: Businesses must retain consumer request documentation for 24 months and risk assessment records for five years (or as long as processing continues).
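Responding within the deadline requires that the six disclosures be retrievable per consumer rather than reconstructed by hand. A sketch of a response record assembled from a consumer-keyed audit trail; all field names are assumptions about your storage schema:

```python
from dataclasses import dataclass

@dataclass
class ADMTAccessResponse:
    """The six consumer-specific disclosures under Section 7222
    (illustrative field names)."""
    purpose: str              # why ADMT was used on this consumer
    output: str               # what it produced for this consumer
    how_output_was_used: str  # sole factor, or one of many
    logic: str                # key parameters behind this consumer's output
    outcome: str              # what the business ultimately decided
    human_involvement: str    # degree and nature of the human role

def build_access_response(audit_record: dict) -> ADMTAccessResponse:
    # Assumes the audit trail is queryable by verified consumer identity;
    # a generic system description does not satisfy Section 7222.
    return ADMTAccessResponse(
        purpose=audit_record["purpose"],
        output=audit_record["ai_output"],
        how_output_was_used=audit_record["output_role"],
        logic=audit_record["logic_summary"],
        outcome=audit_record["final_decision"],
        human_involvement=audit_record["human_involvement"],
    )
```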


Penalties and Enforcement

The enforcement environment is not theoretical. The CPPA has built the largest enforcement division in the United States dedicated solely to privacy, and it is actively using it.

Fine Structure

  • $2,500 per unintentional violation
  • $7,500 per intentional violation (adjusted for inflation to approximately $7,988 as of 2026)
  • No aggregate cap — fines are assessed per violation, per consumer

The per-violation, per-consumer structure creates enormous potential exposure. Penalties scale directly with the scope of non-compliance — a company that uses ADMT on 100,000 consumers without required pre-use notices faces theoretical exposure in the hundreds of millions (100,000 consumers × $2,500 per unintentional violation is already $250 million).

Active Enforcement

The CPPA is not waiting for 2027 to start investigating. Michael Macko, the CPPA's Deputy Director of Enforcement, has publicly stated that the agency has "hundreds of open investigations" in progress, many at stages where the target companies do not yet know they are being investigated. The agency receives approximately 150 consumer complaints per week.

Macko has characterized the CPPA as entering "a new era of privacy enforcement," and the enforcement division has described itself as the largest in the United States dedicated solely to privacy.

The Tractor Supply Precedent ($1.35M)

In September 2025, the CPPA Board required Tractor Supply Company to pay $1,350,000 — the largest fine in the agency's history — for CCPA violations including failure to properly notify consumers of their privacy rights, inadequate service provider agreements, and ineffective opt-out mechanisms. Tractor Supply was also required to implement broad remedial measures and have a corporate officer certify compliance annually for four years.

The significance is not the fine amount but the signal: the CPPA is willing to pursue substantial penalties and demand operational changes. As ADMT-specific obligations come online in 2027, the enforcement apparatus is already built, staffed, and practiced.

Enforcement Strike Force

The CPPA maintains a dedicated enforcement strike force focused on proactive investigations — not just responding to complaints. This is unusual among state privacy regulators and reflects California's intent to actively police compliance rather than rely on self-reporting.


How to Comply: Practical Steps

Compliance with the CPRA ADMT regulations is achievable, but it requires coordinated effort across engineering, legal, and operations teams. Here is a practical roadmap.

Step 1: Inventory Your AI Systems

Build a complete inventory of every system that processes personal information, uses computation, and influences or makes decisions about individuals. For each system, document its purpose, decision domain, categories of personal information processed, who reviews its output, and what authority that reviewer has.

Step 2: Classify Each System

For each system, determine whether it qualifies as ADMT under Section 7001(e). Apply the three-prong test: does a qualified human with interpretive competence, analytical rigor, and genuine decision authority review and act on the system's output? Document the classification analysis per system, mapping specific evidence to each of the three prongs.

Step 3: Conduct Risk Assessments

For every system that triggers a risk assessment under Section 7150, evaluate whether privacy risks outweigh benefits after accounting for safeguards. If the risks outweigh the benefits, either implement additional safeguards or discontinue the processing.

Step 4: Build an Audit Trail

For systems where you are claiming meaningful human involvement, implement logging that captures what the AI recommended and why, who reviewed it, how long the review took, what additional information was consulted, what the reviewer decided, whether the decision agreed with or diverged from the AI recommendation, and if it diverged, why. This audit trail is your primary defense in an investigation. Without it, a HITL claim is an assertion without evidence.
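Concretely, each reviewed decision should produce one structured, append-only record. A minimal sketch of such a record, with an illustrative schema of our own:

```python
import time
import uuid

def log_decision(ai_recommendation: str, ai_rationale: str, reviewer_id: str,
                 review_seconds: float, sources_consulted: list[str],
                 decision: str, override_reason: str | None = None) -> dict:
    """One append-only record per reviewed decision."""
    return {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "ai_recommendation": ai_recommendation,
        "ai_rationale": ai_rationale,
        "reviewer_id": reviewer_id,
        "review_seconds": review_seconds,        # dwell time evidence
        "sources_consulted": sources_consulted,  # Prong B evidence
        "decision": decision,
        "agreed_with_ai": decision == ai_recommendation,
        "override_reason": override_reason,      # expected when diverging
    }
```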

Step 5: Address Red Flags Proactively

Monitor your review process for patterns that undermine HITL claims. Track override rates — if they are at or near zero, investigate. Monitor review time distributions — sub-minute reviews are not meaningful analysis. Watch agreement rates — above 95% warrants investigation. Verify training records are current and system-specific.

Step 6: Prepare Consumer-Facing Mechanisms (Before January 2027)

Draft and deploy system-specific pre-use notices meeting Section 7220 requirements. Implement at least two opt-out methods and build the alternative human decision-making process that replaces ADMT when a consumer opts out. Build the capability to retrieve consumer-specific ADMT information for access requests — this requires your audit trail to be consumer-queryable.

Step 7: Prepare for Attestation (Before April 2028)

Identify the corporate officer who will sign the risk assessment attestation. Ensure they understand what they are attesting to and that the underlying documentation supports it. Conduct an internal audit of risk assessment completeness before the submission deadline.


The Advisory-Only Escape: When ADMT Regulations Do Not Apply

There is an architectural approach that removes a system from ADMT classification entirely: designing it so that AI is structurally incapable of making decisions and only produces recommendations for qualified human review.

This is not a loophole. It is the intended operation of the regulation. The CPPA explicitly narrowed the definition to exclude advisory tools with genuine human involvement. If your AI system:

  • Writes to a recommendation surface, not a decision database
  • Cannot execute or implement decisions without human action
  • Is reviewed by trained humans who consult additional information
  • Is subject to genuine override authority

Then the system is not ADMT, and the regulatory obligations do not apply. The regulatory trigger — technology that replaces or substantially replaces human decision-making — simply does not fire.

This approach, sometimes called advisory-only architecture, requires more than a policy statement. It requires infrastructure constraints: write isolation (AI service accounts cannot write to decision tables), dwell time enforcement, engagement tracking, and cryptographic audit trails proving the entire chain. The architecture must generate evidence, not just assert compliance.
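The cryptographic audit trail piece can be as simple as a hash chain, where each entry commits to its predecessor so that tampering with any historical record is detectable. A minimal sketch, not a production design:

```python
import hashlib
import json

def chain_entry(prev_hash: str, record: dict) -> dict:
    """Each entry commits to its predecessor, so altering any historical
    record changes every subsequent hash."""
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return {"prev_hash": prev_hash, "record": record, "hash": entry_hash}

def verify_chain(entries: list[dict], genesis: str = "genesis") -> bool:
    prev = genesis
    for entry in entries:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True
```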


Frequently Asked Questions

Does the CPRA ADMT regulation apply to AI used in advertising?

No. The ADMT regulations apply to "significant decisions" defined as those producing legal or similarly significant effects in employment, credit, insurance, housing, healthcare, and education. Advertising is explicitly excluded. However, if you use AI to make decisions about ad pricing, placement, or targeting that affect individuals in those protected categories (for example, using AI to determine which job ads to show to which candidates), the analysis may be more complex.

We have a human reviewer who looks at every AI recommendation before it becomes a decision. Are we compliant?

Not necessarily. Having a human in the process is necessary but not sufficient. The human must satisfy all three prongs of the test: they must be trained on the specific system (Prong A), they must review the AI output and additional relevant information (Prong B), and they must have genuine authority to override (Prong C). If your reviewer agrees with the AI 99.8% of the time, completes reviews in under 30 seconds, and has never overridden a recommendation, regulators will argue the system substantially replaces human judgment regardless of the process.

Our AI system is only used internally by employees — does the ADMT regulation still apply?

Yes, if the decisions affect consumers. An AI system used internally to screen rental applications, evaluate loan requests, or triage insurance claims is making decisions about consumers even though the system is operated by employees. The regulation applies based on the nature of the decision, not the audience for the system.

What is the difference between the CPRA ADMT regulation and the Colorado AI Act?

Both regulate automated decisions about individuals, but they differ in scope and trigger. The CPRA uses "replace or substantially replace" as its trigger, which is relatively narrow. Colorado's AI Act uses "substantial factor in making" a consequential decision, which is broader. Additionally, Colorado applies a "capable of altering the outcome" test — even if a human always overrides in practice, if the AI system could alter the outcome, it may qualify. Colorado also offers a safe harbor for companies that comply with NIST AI RMF or ISO 42001.

Do we need to conduct a risk assessment if our system passes the three-prong test?

Potentially. The risk assessment requirement under Section 7150 is triggered by processing activities that present significant risk to consumers' privacy, which includes using ADMT for significant decisions. If your system passes the three-prong test and is classified as not-ADMT, it may still trigger a risk assessment if the underlying processing of personal information meets other criteria in Section 7150 (such as processing sensitive personal information or using automated processing for systematic observation).

How long do we have to respond to a consumer's ADMT access request?

You must confirm receipt within 10 business days and provide a substantive response within 45 calendar days, extendable to 90 calendar days with proper notice to the consumer. The response must be consumer-specific — it must describe the ADMT output for that individual, how it was used in their decision, and the degree of human involvement.


What to Do Now

The CPRA ADMT regulations are not a future concern. Risk assessment requirements are already in effect. The CPPA has hundreds of investigations in progress and is processing approximately 150 consumer complaints per week. The Tractor Supply action demonstrated the agency's willingness to pursue substantial fines and demand operational remediation.

The practical path forward is straightforward:

  1. Inventory your AI systems and classify them against the ADMT definition
  2. Assess whether your human review processes satisfy all three prongs of the test — with evidence, not assumptions
  3. Document your risk assessments and retain them for five years
  4. Build the audit trail infrastructure that proves meaningful human involvement
  5. Prepare for consumer rights obligations before the January 2027 deadline

Companies that address these requirements now will have 9 months of operational data demonstrating compliance before the consumer rights obligations take effect. Companies that wait will be building infrastructure under deadline pressure — precisely the conditions that produce the gaps auditors find.

The requirements are specific enough to build against. The three-prong test provides a clear standard to design toward. Companies that treat this as an engineering problem — designing systems to produce evidence of genuine human judgment — will find compliance to be a natural output of good architecture, not a burden layered on top.


Want to know where your company stands? Take the free ADMT compliance assessment at admt.ai to get a detailed gap analysis, system-by-system classification, and a prioritized remediation roadmap — in under 10 minutes.
