Advisory-Only Architecture: How to Make ADMT Regulations Structurally Irrelevant

Most companies approach AI compliance the same way: they read the regulation, build a checklist, and start documenting. Risk assessments. Pre-use notices. Opt-out mechanisms. Appeal processes. Bias audits. The compliance surface grows with every new jurisdiction, every amendment, every enforcement action.

But there is another path. Not a loophole. Not a shortcut. An architectural decision that makes the regulation's trigger condition impossible to satisfy — so the regulation never applies in the first place.

This is advisory-only architecture. And if you understand how PCI DSS tokenization collapsed a merchant's payment-processing compliance surface from 251 requirements to 22, you already understand the pattern.

The Compliance Paradox

Three regulatory regimes are converging on the same trigger: AI making or substantially influencing significant decisions about individuals.

  • CPRA ADMT (California): Technology that "replaces or substantially replaces" human decision-making for significant decisions. $7,500 per violation, no aggregate cap. Enforcement is active — the CPPA issued over $1.3M in fines in 2025 alone.
  • EU AI Act (August 2026): AI systems in high-risk domains face conformity assessments, risk management systems, technical documentation, and post-market monitoring. Penalties reach 7% of global annual turnover.
  • Colorado AI Act (June 2026): AI that is a "substantial factor" in consequential decisions triggers impact assessments, consumer notices, and AG reporting. $20,000 per violation.

The common thread is critical: all three regimes trigger on automated decisions, not automated analysis.

Most companies see these regulations and immediately begin compliance work — documenting risk assessments, building opt-out flows, designing appeal mechanisms. This is rational but potentially unnecessary. If your AI system never makes decisions — if it is architecturally incapable of doing so — then you are complying with a regulation that does not apply to you.

The paradox: companies spend hundreds of thousands of dollars complying with regulations whose trigger conditions their systems could be redesigned to never satisfy.

The PCI DSS Precedent: How Tokenization Eliminated Over 90% of Compliance Controls

Before diving into ADMT architecture, consider a precedent that every engineer who has touched payment processing understands.

PCI DSS is one of the most burdensome compliance frameworks in existence. The full standard encompasses over 300 security controls covering how companies store, process, and transmit cardholder data. A full PCI DSS assessment — the Report on Compliance (ROC) — requires documenting compliance with 251 individual requirements. Organizations spend $50,000 to $500,000 annually on PCI compliance, depending on their transaction volume and infrastructure complexity.

Then came tokenization.

In the late 2000s, payment processors introduced a simple architectural change: cardholder data never touches the merchant's servers. When a customer enters a credit card number, it goes directly to the payment processor (Stripe, Braintree, Adyen). The merchant's system receives a token — a random string that maps to the card on the processor's side but is meaningless on its own.

The architectural consequence was immediate: if card data never enters your system, most PCI DSS controls become inapplicable. You cannot mishandle data you never possess.

The compliance consequence was transformative. Merchants who use tokenized payments qualify for SAQ-A — the simplest PCI DSS self-assessment questionnaire. Instead of 251 requirements, they answer 22 questions. Instead of network segmentation, encryption-at-rest audits, key management procedures, and quarterly vulnerability scans across their cardholder data environment, they verify that they have outsourced card handling to a compliant processor.

That is a reduction of more than 90% in applicable compliance controls. Not through exemptions or waivers — through architecture. The regulation still exists. PCI DSS still has 300+ controls. But for merchants who tokenize, the vast majority of those controls apply to Stripe, not to them.

The Structural Analogy

| PCI DSS | ADMT Regulations |
|---------|------------------|
| Trigger: Storing, processing, or transmitting cardholder data | Trigger: AI making or substantially replacing human decisions |
| Architectural escape: Tokenize — card data never touches your system | Architectural escape: Advisory-only — AI never makes decisions |
| Result: SAQ-A (22 controls) instead of ROC (251 controls) | Result: ADMT regulations don't apply; audit trail exceeds what compliance would require |
| Proof mechanism: Infrastructure audit shows no card data present | Proof mechanism: Infrastructure audit shows AI has zero write access to decision databases |

The parallel is exact. Tokenization does not help you comply with PCI DSS more efficiently. It makes most of PCI DSS inapplicable to your system. Advisory-only architecture does the same for ADMT regulations.

Why This Is Not a Loophole

A critical distinction: tokenization does not reduce your security posture. A merchant using Stripe's tokenization is arguably more secure than one storing card numbers in their own database with PCI-compliant encryption. The security outcome is better. The compliance burden is lower. Both are true simultaneously because the architecture eliminates the risk the regulation was designed to address.

Advisory-only architecture follows the same logic. You do not comply with less. You prove more. The audit trail generated by a properly implemented advisory-only system exceeds what full ADMT compliance would require — because it captures the entire chain from AI recommendation through human deliberation to final decision, with cryptographic proof at every step.

What Is Advisory-Only Architecture?

Advisory-only architecture (AOA) is an infrastructure pattern in which AI systems are structurally incapable of making decisions. They can analyze data, generate recommendations, compute risk scores, and surface insights. But the path from recommendation to decision is gated by infrastructure constraints that only authenticated humans can traverse.

This is not a policy. It is not a process document. It is not a checkbox in a compliance dashboard. It is a set of database constraints, IAM policies, API boundaries, and cryptographic controls that make it a system error — not a policy violation — for AI to write a decision record.

The distinction between policy and infrastructure is everything. A policy says "humans must review AI recommendations before acting on them." An infrastructure constraint says "the database rejects any INSERT to the decisions table that does not include a valid human session token, a minimum dwell time attestation, and a cryptographic signature from the decision gateway."

Policies can be circumvented. Infrastructure constraints cannot — not without modifying the infrastructure itself, which creates its own audit trail.

The Four-Layer Architecture

Advisory-only architecture comprises four layers, each enforcing a distinct constraint. Together, they make it provable — not merely claimable — that AI recommendations and human decisions are structurally separate processes.

Layer 1: Write Isolation

The foundational layer. AI services have zero write access to decision databases.

DATA STORES
├── RECOMMENDATION BUFFER          ├── DECISION DATABASE
│   - suggestions                  │   - hiring_decisions
│   - risk_scores                  │   - loan_decisions
│   - explanations                 │   - claims_decisions
│   - confidence_levels            │   - underwriting_outcomes
│                                  │
│   WRITE ACCESS:                  │   WRITE ACCESS:
│   [OK] AI Services               │   [OK] Human Auth Service
│   [NO] Decision API              │   [NO] AI Services
│                                  │   [NO] Batch Jobs
│                                  │   [NO] Automation Pipelines

Implementation details:

  • Separate databases or schemas for recommendations versus decisions. Not separate tables in the same database — separate schemas with row-level security, or ideally separate database instances entirely. This makes the isolation auditable at the infrastructure level, not just the application level.

  • IAM role separation. AI service accounts receive SELECT on decision tables (to understand context) and INSERT/UPDATE on recommendation tables. They never receive write permissions on decision tables. This is enforced at the cloud provider level (AWS IAM, GCP IAM, Azure RBAC), not at the application level.

  • Database constraints. Decision tables include a CHECK CONSTRAINT or TRIGGER that validates a human session token on every write. If a row arrives without a valid, non-expired session token from the decision gateway, the database rejects it. This is the last line of defense — even if an application bug bypasses the API layer, the database itself refuses the write.

  • No shared service accounts. The AI inference pipeline and the decision-writing pipeline use entirely separate credentials. There is no service account with write access to both recommendation and decision stores.

Why this matters for regulators: You can prove — through infrastructure audit, not process audit — that AI literally cannot make a decision. The database rejects it. The IAM policy blocks it. An infrastructure auditor can verify this in minutes by examining IAM policies and database constraints. No process interviews required. No reliance on employee training documentation. The proof is in the infrastructure.
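The database-constraint line of defense can be sketched concretely. The following Python snippet uses an in-memory SQLite database with a trigger that refuses any decision row arriving without a human session token; the table and column names are illustrative, not a prescribed schema, and a production system would apply the same idea via CHECK constraints or triggers in its own database engine.

```python
import sqlite3

# Decision store with a trigger as the last line of defense: even if an
# application bug bypasses the decision gateway, the database itself
# rejects a write that lacks a human session token.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE decisions (
    id TEXT PRIMARY KEY,
    outcome TEXT NOT NULL,
    human_session_token TEXT
);

CREATE TRIGGER require_human_session
BEFORE INSERT ON decisions
WHEN NEW.human_session_token IS NULL OR NEW.human_session_token = ''
BEGIN
    SELECT RAISE(ABORT, 'decision write rejected: no human session token');
END;
""")

# An AI pipeline attempting a direct write fails at the database layer.
try:
    conn.execute("INSERT INTO decisions (id, outcome) VALUES ('dec_1', 'approve')")
except sqlite3.IntegrityError as e:
    print(e)  # decision write rejected: no human session token

# A write carrying a session token issued by the gateway succeeds.
conn.execute("INSERT INTO decisions VALUES ('dec_2', 'approve', 'sess_xyz789')")
```

An infrastructure auditor can read the trigger definition directly from the schema — the constraint is visible in the database itself, independent of any application code.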

Layer 2: Decision Gateway

The decision gateway is a middleware service that sits between the recommendation surface and the decision database. It is the only code path through which a decision record can be created.

AI Service → Recommendation Buffer → [HUMAN REVIEWS] → Decision Gateway → Decision DB
                                           |
                                     Requires:
                                     - Valid human session (JWT, not API key)
                                     - Minimum dwell time attestation
                                     - Key factors acknowledged
                                     - Divergence reason (if overriding AI)
                                     - Ed25519 cryptographic signature

The decision gateway enforces five constraints:

  1. Authentication. Only authenticated human users can submit decisions. The gateway validates a JWT issued by the identity provider, not an API key. Service accounts cannot call the decision endpoint. This is enforced at the API gateway level (rate limiting by identity type) and at the application level (session validation).

  2. Minimum dwell time. The human must have had the recommendation visible for a configurable minimum duration before the decision endpoint accepts a submission. For simple decisions (low-consequence, binary outcomes), this might be 30 seconds. For consequential decisions (hiring, lending, underwriting), 3-5 minutes is appropriate. The dwell timer starts when the recommendation content renders in the viewport, not when the page loads — preventing pre-loading or background tab gaming.

  3. Factor acknowledgment. Before the decision submission is accepted, the gateway validates that the reviewer interacted with key decision factors. This is implemented as a client-side event stream (scroll depth, section expansions, data panel opens) that the gateway cross-references before accepting the submission.

  4. Divergence documentation. When the human's decision differs from the AI recommendation, the gateway requires a free-text reason. This serves two purposes: it forces genuine deliberation, and it creates a high-value training signal for model improvement.

  5. Cryptographic proof. Each decision record includes a signature chain: recommendation_hash → human_session_id → interaction_proof_hash → decision_timestamp → Ed25519_signature. This chain is tamper-evident — modifying any element invalidates the signature.
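The five constraints can be sketched as a single validation function. This is a minimal illustration with hypothetical field names; the HMAC-SHA256 signature is a stdlib stand-in for the Ed25519 signature, which in a real deployment would come from a signing library such as PyNaCl.

```python
import hashlib, hmac, json, time

GATEWAY_KEY = b"demo-signing-key"  # stand-in for the gateway's Ed25519 private key
REQUIRED_FACTORS = {"ai_explanation", "source_data"}
MIN_DWELL_SECONDS = 180  # high-consequence category

def accept_decision(submission: dict) -> dict:
    """Apply the five gateway constraints; return a signed decision record."""
    # 1. Authentication: a human session JWT must be present (never an API key).
    if not submission.get("session_jwt"):
        raise PermissionError("no authenticated human session")
    # 2. Minimum dwell time for this decision category.
    if submission["dwell_seconds"] < MIN_DWELL_SECONDS:
        raise ValueError("minimum dwell time not met")
    # 3. Factor acknowledgment, cross-referenced against the client event stream.
    if not REQUIRED_FACTORS <= set(submission["factors_acknowledged"]):
        raise ValueError("key decision factors not acknowledged")
    # 4. Divergence documentation when the human departs from the AI.
    diverged = submission["decision"] != submission["ai_recommendation"]
    if diverged and not submission.get("divergence_reason"):
        raise ValueError("divergence requires a documented reason")
    # 5. Signature over the canonical record, making it tamper-evident.
    record = {
        "decision": submission["decision"],
        "agreed_with_ai": not diverged,
        "divergence_reason": submission.get("divergence_reason"),
        "timestamp": submission.get("timestamp", int(time.time())),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(GATEWAY_KEY, payload, hashlib.sha256).hexdigest()
    return record
```

A submission that clears all five checks comes back as a signed record; any failing check raises before a decision row can exist, which is the property the database-level constraints then back up.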

Layer 3: Meaningful Review Enforcement (The Three-Prong Test)

This layer is what separates advisory-only architecture from "human-in-the-loop theater" — the common pattern where a human clicks "approve" in under three seconds on 99.8% of AI recommendations, and the company claims human oversight.

The CPRA's final ADMT regulations define a three-prong test for meaningful human involvement. If all three prongs are satisfied, the technology is removed from ADMT classification entirely. Layer 3 maps each prong to an enforceable infrastructure control.

Prong A: Know — The reviewer understands how to interpret the AI's output.

The review interface presents AI outputs with plain-language explanations, confidence levels, and risk flags. The system tracks whether the reviewer expanded the explanation section, viewed the confidence interval, and read any flagged risk factors. Engagement tracking proves the reviewer encountered the interpretive context, not just the recommendation itself.

Implementation: The AI explanation panel is collapsed by default. The reviewer must expand it. The gateway will not accept a decision submission unless the explanation panel was opened and remained visible for a minimum duration.

Prong B: Analyze — The reviewer examined both the AI output and other relevant information.

The review interface surfaces non-AI data alongside the recommendation in a dual-pane layout: the AI recommendation on one side, the source data on the other. For a hiring decision, this means the AI's ranking and reasoning on the left, the candidate's resume, cover letter, and interview notes on the right. For a lending decision, the AI's risk score on the left, the applicant's financial documents on the right.

The gateway validates that the reviewer accessed both panes. If the source data panel was never opened, the submission is rejected.

Implementation: The review UI tracks scroll depth and time-in-viewport for both the AI pane and the source data pane. Both must exceed configurable thresholds before the decision endpoint becomes available.

Prong C: Authority — The reviewer has genuine authority to agree, disagree, or modify the decision.

The decision gateway accepts any outcome: agree with the AI, disagree with the AI, or modify the recommendation. There is no default selection. The reviewer must actively choose an outcome. When the reviewer disagrees, the system logs it without friction — no "are you sure?" dialogs, no escalation requirements, no manager approval. The reviewer's authority is absolute within the gateway.

The proof that authority is real, not nominal, comes from aggregate divergence rates. If reviewers disagree with AI recommendations 12-25% of the time, that is strong evidence of genuine authority. If the disagreement rate is 0.2%, that is evidence of rubber-stamping — and the system flags it.

Layer 4: Immutable Audit Trail

Every step in the recommendation-review-decision pipeline produces an immutable, append-only audit record. These records are hash-chained — each record includes the hash of the previous record — and Ed25519-signed by the decision gateway's private key.

A single audit record captures:

{
  "recommendation": {
    "id": "rec_abc123",
    "model_id": "underwriting-v3.1",
    "input_hash": "sha256:a1b2c3...",
    "output": {
      "recommendation": "approve",
      "confidence": 0.84,
      "risk_score": 0.23
    },
    "explanation": "Applicant meets 9/12 qualifying criteria...",
    "risk_flags": ["thin_credit_file"],
    "timestamp": "2026-03-22T14:30:00Z"
  },
  "human_review": {
    "reviewer_id": "user_jsmith",
    "session_id": "sess_xyz789",
    "dwell_time_seconds": 187,
    "engagement_score": 0.89,
    "factors_reviewed": [
      "credit_history",
      "income_verification",
      "risk_flags",
      "ai_explanation"
    ],
    "source_data_accessed": true,
    "explanation_expanded": true,
    "review_start": "2026-03-22T14:32:15Z",
    "decision_submitted": "2026-03-22T14:35:22Z"
  },
  "decision": {
    "id": "dec_def456",
    "outcome": "approve_with_conditions",
    "agreed_with_ai": false,
    "modification": "Reduced credit limit from AI-suggested $25K to $15K",
    "divergence_reason": "Thin credit file warrants conservative initial limit",
    "chain_hash": "sha256:d4e5f6...",
    "previous_hash": "sha256:g7h8i9...",
    "signature": "ed25519:j0k1l2..."
  }
}

This record proves, for any given decision:

  • What the AI recommended and why
  • That a specific, authenticated human reviewed it
  • How long they spent reviewing
  • What information they examined
  • What they decided
  • Whether they agreed or disagreed with the AI, and why
  • A cryptographic chain tying the entire sequence together in a tamper-evident structure

The audit trail is append-only. Records cannot be modified or deleted. Hash-chaining makes insertion or removal of records detectable. Ed25519 signatures make forgery computationally infeasible.
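The hash-chaining and verification logic can be sketched in a few lines. This is a minimal illustration: record shapes and the genesis value are arbitrary, and a real deployment would additionally Ed25519-sign each entry as described above.

```python
import hashlib, json

def chain_hash(record: dict, previous_hash: str) -> str:
    """Hash a record together with its predecessor's hash."""
    payload = (json.dumps(record, sort_keys=True) + previous_hash).encode()
    return "sha256:" + hashlib.sha256(payload).hexdigest()

def append_record(log: list, record: dict) -> None:
    """Append-only: each new entry binds itself to the previous one."""
    previous = log[-1]["chain_hash"] if log else "sha256:genesis"
    log.append({
        "record": record,
        "previous_hash": previous,
        "chain_hash": chain_hash(record, previous),
    })

def verify_chain(log: list) -> bool:
    previous = "sha256:genesis"
    for entry in log:
        if entry["previous_hash"] != previous:
            return False  # an entry was inserted or removed
        if entry["chain_hash"] != chain_hash(entry["record"], previous):
            return False  # an entry was modified after the fact
        previous = entry["chain_hash"]
    return True

audit_log: list = []
append_record(audit_log, {"decision_id": "dec_def456", "outcome": "approve_with_conditions"})
append_record(audit_log, {"decision_id": "dec_def457", "outcome": "deny"})
assert verify_chain(audit_log)

# Tampering with any earlier record breaks verification.
audit_log[0]["record"]["outcome"] = "approve"
assert not verify_chain(audit_log)
```

Because each entry's hash covers the previous entry's hash, an attacker who modifies one record would have to recompute every subsequent hash — and without the gateway's signing key, the recomputed entries would fail signature verification.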

Anti-Rubber-Stamping Controls

The most common criticism of human-in-the-loop systems is that they devolve into rubber-stamping. Research consistently shows that humans tend to defer to automated recommendations — a phenomenon called automation bias. If your advisory-only architecture permits a reviewer to click "agree" in two seconds on every recommendation, a regulator will reasonably argue that the AI is making the decision, regardless of your architecture diagrams.

Advisory-only architecture addresses this with five categories of anti-rubber-stamping controls:

Dwell Time Gates

The decision UI does not render the submit button until the minimum review duration has elapsed. The timer begins when the recommendation content enters the browser viewport — not on page load, not on API response. This prevents pre-loading tabs or using browser automation to skip the review.

Different decision categories carry different minimums, calibrated to consequence severity:

| Decision Category | Minimum Dwell Time | Rationale |
|-------------------|--------------------|-----------|
| Low-consequence (content moderation flags) | 15 seconds | Binary decision, limited downstream impact |
| Medium-consequence (insurance claim routing) | 60 seconds | Financial impact, requires context review |
| High-consequence (hiring, lending, underwriting) | 180 seconds | Life-altering outcome, demands thorough review |

Engagement Tracking

Client-side instrumentation (implemented with tamper-resistant techniques — obfuscated event handlers, server-side validation of event plausibility) tracks whether the reviewer:

  • Scrolled through the full recommendation content
  • Expanded the AI's reasoning and explanation sections
  • Viewed the risk factors or flags
  • Accessed the underlying source data
  • Spent time in both the AI pane and the source data pane

A composite engagement score must exceed a configurable threshold before submission is enabled. An engagement score of 0.3 (opened the page, glanced at the recommendation, never looked at source data) blocks submission. A score of 0.7+ (read the recommendation, expanded the explanation, reviewed source data, examined risk flags) permits it.
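A composite score of this kind can be sketched as a weighted checklist. The weights, event names, and the 0.7 threshold below are illustrative configuration values, not prescribed ones; in practice the score would be recomputed server-side from the validated client event stream rather than trusted from the browser.

```python
# Illustrative weights for the tracked engagement signals (sum to 1.0).
WEIGHTS = {
    "scrolled_full_recommendation": 0.20,
    "expanded_ai_explanation": 0.25,
    "viewed_risk_flags": 0.15,
    "opened_source_data": 0.25,
    "dwell_in_both_panes": 0.15,
}
SUBMIT_THRESHOLD = 0.7  # configurable per decision category

def engagement_score(events: dict) -> float:
    """Sum the weights of the signals the reviewer actually triggered."""
    return round(sum(w for key, w in WEIGHTS.items() if events.get(key)), 2)

def submission_enabled(events: dict) -> bool:
    return engagement_score(events) >= SUBMIT_THRESHOLD

# Glanced at the recommendation only: score 0.20, submission blocked.
shallow = {"scrolled_full_recommendation": True}
# Read everything, including source data and risk flags: score 1.0, permitted.
thorough = {key: True for key in WEIGHTS}
```

A reviewer who never opens the source data pane can at best reach 0.75 under these weights, and one who skips both the source data and the explanation caps out at 0.50 — below the threshold, so the submit button never enables.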

Batch Prevention

Rate limiting on decisions per reviewer per hour prevents high-throughput rubber-stamping. If a reviewer is processing lending decisions at a rate of one every 45 seconds, the system introduces mandatory cool-down periods.

Alert triggers fire when:

  • A reviewer's approval rate exceeds a configurable threshold (e.g., >95% agreement with AI over the last 100 decisions)
  • Average dwell time drops below the minimum by category
  • Engagement scores trend downward across a session

These alerts go to compliance administrators, not to the reviewer, preventing gaming.

Random Deep Review

A configurable percentage of decisions (typically 5-10%) are flagged for "deep review." On these decisions, the reviewer must write a narrative explanation of their reasoning — not just select an outcome. The narrative is logged in the audit trail and available for compliance review.

This serves two purposes: it maintains cognitive engagement across the full decision stream (reviewers cannot predict which decisions will require narrative), and it generates rich qualitative data for auditors and regulators.

Override Rate Monitoring

The system continuously monitors the divergence rate between AI recommendations and human decisions. This metric is the single strongest indicator of genuine human authority.

Healthy ranges vary by domain, but general benchmarks:

  • 12-25% override rate: Strong evidence of meaningful human review. Reviewers are exercising genuine judgment.
  • 5-12% override rate: Moderate evidence. May be acceptable for domains where the AI is well-calibrated, but warrants investigation.
  • Below 5% override rate: Potential rubber-stamping. The system escalates for review — are decisions too easy, is the AI too influential, or are reviewers disengaged?
  • Above 40% override rate: The AI may be poorly calibrated. Investigate model performance before interpreting override rates as evidence of human authority.
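The benchmark bands can be expressed as a simple classifier for the monitoring dashboard. One caveat: the benchmarks above leave the 25–40% range unaddressed, so the "monitor" label for that band is an assumption of this sketch, not part of the stated guidance.

```python
def override_health(override_rate: float) -> str:
    """Map an AI-vs-human divergence rate to the benchmark bands."""
    if override_rate < 0.05:
        return "potential rubber-stamping: escalate for review"
    if override_rate < 0.12:
        return "moderate evidence of review: warrants investigation"
    if override_rate <= 0.25:
        return "strong evidence of meaningful review"
    if override_rate <= 0.40:
        return "elevated override rate: monitor"  # assumed band, see caveat above
    return "AI may be poorly calibrated: investigate model performance"
```

Fed with a rolling window of recent decisions, this mapping drives the alert triggers described under batch prevention: the low band escalates to compliance administrators, the high band routes to the ML team.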

Why This Satisfies Every Major Regulation

Advisory-only architecture does not merely dodge one regulation. It addresses the trigger condition common to every major AI governance framework simultaneously.

CPRA (California)

Trigger: Technology that "replaces or substantially replaces" human decision-making for significant decisions.

Why AOA makes it inapplicable: The CPPA's final regulations narrowed the trigger from "substantially facilitate" (early drafts) to "substantially replace." CPPA staff testified that this narrowing reduced coverage from a wide swath of businesses to approximately 10% of CCPA-covered businesses. Advisory tools with meaningful human review are explicitly excluded.

Under AOA, the three-prong test is satisfied by infrastructure, not policy:

| Prong | CPRA Requirement | AOA Enforcement |
|-------|------------------|-----------------|
| (A) Know | Reviewer understands how to interpret AI output | Engagement tracking proves explanation panels were opened and read |
| (B) Analyze | Reviewer examined AI output AND other relevant information | Dual-pane UI with tracked access to both AI recommendation and source data |
| (C) Authority | Reviewer has genuine authority to make or change the decision | Decision gateway accepts any outcome; divergence rates prove authority is real |

The result: the technology is removed from ADMT classification entirely. Not compliant with ADMT requirements — excluded from them.

EU AI Act

Trigger: AI system falls in an Annex III high-risk category (employment, credit, insurance, education, essential services, law enforcement).

Why AOA makes it inapplicable: Article 6(3) of the EU AI Act provides a derogation — an Annex III system is NOT high-risk if it does not pose a significant risk of harm, including by "not materially influencing the outcome of decision making." Four conditions qualify (any one suffices):

  1. Narrow procedural task
  2. Improving a prior human activity
  3. Pattern detection without replacing human judgment
  4. Preparatory task — gathering and organizing data before a human decides

Condition 4 maps directly to advisory-only architecture. The AI gathers data, analyzes it, and presents recommendations. The human decides. The AI performs a preparatory task.

Additionally, Article 14 of the EU AI Act requires human oversight design for high-risk systems. AOA's decision gateway, meaningful review enforcement, and audit trail satisfy Article 14's requirements even for systems that do qualify as high-risk — providing defense in depth if the derogation is challenged.

Important exception: Systems that perform profiling of natural persons (as defined in GDPR Article 4(4)) are always classified as high-risk. The Article 6(3) derogation does not apply. If your AI system evaluates personal characteristics to build profiles, AOA alone is insufficient — you need full compliance.

Colorado AI Act

Trigger: AI that is a "substantial factor" in making a consequential decision, where the AI is "capable of altering the outcome."

Why AOA makes it inapplicable: Colorado's two-part test requires that the AI both (i) assists in making a consequential decision AND (ii) is capable of altering the outcome. Under AOA with write isolation, the AI is architecturally incapable of altering any outcome. It cannot write to decision databases. It cannot trigger downstream actions. It can only populate a recommendation buffer that a human must independently act upon through the decision gateway.

The "capable of altering the outcome" standard is forward-looking — it asks whether the system could alter the outcome, not whether it does in practice. AOA answers this definitively: no. The database constraints and IAM policies make it a system error, not a policy choice. The AI is not merely configured to avoid altering outcomes — it is incapable of doing so.

ISO 42001

Impact: Advisory-only architecture legitimately reduces the number of applicable controls in the Statement of Applicability (SoA).

Controls that become excludable under AOA:

  • Autonomous decision-making controls (the system makes no autonomous decisions)
  • Appeal mechanism requirements (decisions are already made by humans)
  • Human override requirements (there is nothing to override — humans are the primary decision-makers)
  • Several impact assessment rigor requirements (reduced consequence from advisory-only posture)

Organizations pursuing ISO 42001 certification with AOA in place report that the applicable control set decreases significantly, reducing both audit preparation time and certification costs.

When Advisory-Only Does Not Work

Advisory-only architecture is not a universal solution. There are domains where the regulatory trigger is not "automated decisions" but something broader — and where federal law attaches liability regardless of your architecture.

Hiring and Employment (Title VII, EEOC)

Federal anti-discrimination law applies to AI "selection procedures" regardless of whether the AI is advisory or decisional. The EEOC has explicitly stated that disparate impact liability attaches to AI tools used in hiring even when a human makes the final decision. If your AI resume screener produces a candidate ranking, and that ranking correlates with a protected class, Title VII liability exists — even if a recruiter reviews every ranking and makes independent hiring decisions.

NYC Local Law 144 reinforces this: it covers tools that "substantially assist" employment decisions, not just tools that make them. Advisory-only architecture provides no meaningful regulatory avoidance in hiring. The correct approach for HR tech is full compliance: annual bias audits, adverse impact analysis, and disparate impact testing.

Lending and Credit (ECOA, CFPB)

The CFPB has stated explicitly that there is "no advanced technology exception." Whether AI is advisory or auto-decisioning, lenders must provide adverse action notices when credit is denied, cannot use unexplainable models for credit decisions, and face disparate impact liability under the Equal Credit Opportunity Act.

AOA reduces operational risk in lending (by ensuring humans review AI recommendations before decisions), but it does not reduce regulatory obligations. You still need the adverse action notice infrastructure, the model explainability requirements, and the fair lending analysis.

Biometrics and Profiling (EU AI Act)

Under the EU AI Act, the Article 6(3) derogation that AOA leverages dies immediately if the system performs profiling of natural persons. Profiling — automated evaluation of personal characteristics to analyze or predict behavior, reliability, economic situation, or health — is always classified as high-risk. No architectural escape exists. Full compliance is required.

Medical Devices (FDA)

If your AI system processes medical images (radiology, pathology, dermatology), it is a medical device under FDA classification. No architectural pattern makes this inapplicable. The FDA's Clinical Decision Support exemption applies only to systems that do NOT process images or signals and that present recommendations (not diagnoses) to healthcare professionals. If your system touches images, it is a device.

Fully Autonomous Business Models

If your business model depends on zero-human-touch decisioning — high-frequency trading, programmatic advertising, automated content moderation at scale — advisory-only architecture is structurally incompatible with your product. You cannot add a human decision gateway to a system that processes 10,000 decisions per second. These systems require compliance infrastructure, not avoidance architecture.

The Path from Advisory to Autonomous

Advisory-only architecture is not a permanent constraint. It is a starting posture — one that allows you to deploy AI in regulated domains immediately while building the data and confidence to transition toward greater automation over time.

The transition path is deliberate and data-driven:

Phase 1: Advisory Deployment

Deploy AI in advisory mode with full AOA infrastructure. Every recommendation generates an audit record. Every human decision is logged with engagement data, dwell time, and divergence documentation.

This is where most organizations should operate today for high-consequence decisions in regulated domains.

Phase 2: Decision Corpus Analysis

After accumulating a substantial corpus of recommendation-decision pairs (typically 10,000+), analyze the data:

  • Agreement patterns: Where does the AI align with human decisions? Where does it diverge?
  • Override analysis: When humans override the AI, are they correcting errors or introducing bias?
  • Outcome tracking: When humans agree with the AI versus override it, which decisions produce better outcomes?
  • Bias detection: Do AI recommendations show disparate impact across protected classes? Do human overrides amplify or correct that impact?

This corpus is extraordinarily valuable. It is labeled training data generated by domain experts in production conditions — the most expensive and hardest-to-acquire dataset in machine learning.

Phase 3: Automation Readiness Scoring

Using the decision corpus, compute an automation readiness score for each decision category:

  • High-agreement, low-consequence decisions: If humans agree with AI 97% of the time on a decision category with limited downstream impact, that category is a candidate for automation (with appropriate compliance for fully automated decisions).
  • High-agreement, high-consequence decisions: Even with strong agreement, high-consequence decisions may require continued human oversight due to regulatory requirements or organizational risk tolerance.
  • Low-agreement decisions: Categories where humans frequently override AI should remain advisory until the model improves or the disagreement is understood.
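The triage above can be sketched as a small scoring function. The 97% agreement threshold comes from the example in the first bullet; treating it as a hard cutoff, and the category labels returned, are illustrative choices of this sketch rather than prescribed values.

```python
AGREEMENT_THRESHOLD = 0.97  # from the high-agreement example above; illustrative as a cutoff

def automation_readiness(agreement_rate: float, high_consequence: bool) -> str:
    """Classify a decision category using the Phase 3 criteria."""
    if agreement_rate < AGREEMENT_THRESHOLD:
        # Low agreement: the model or the disagreement needs understanding first.
        return "remain advisory until divergence is understood"
    if high_consequence:
        # Strong agreement alone does not override regulatory or risk constraints.
        return "keep human oversight despite high agreement"
    return "automation candidate (full ADMT compliance required before transition)"
```

Run per decision category over the accumulated corpus, this produces the shortlist that Phase 4's deliberate transition then takes through risk assessment, notice, and opt-out implementation.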

Phase 4: Deliberate Transition

When a decision category is ready for automation, the transition is documented, compliant, and reversible:

  1. Conduct the required risk assessment for the newly automated category
  2. Implement pre-use notices, opt-out mechanisms, and appeal processes (the ADMT compliance infrastructure you previously avoided)
  3. Deploy automation with monitoring for drift, bias, and outcome quality
  4. Maintain the ability to revert to advisory mode if issues emerge

The key insight: you transition to automation from a position of strength. You have the decision corpus proving the AI's track record. You have the outcome data justifying automation. You have the compliance infrastructure ready because you built the audit trail layer first. This is not a rush to automate — it is a deliberate, evidence-based transition with full regulatory documentation.

Implementation Checklist

For engineering teams evaluating advisory-only architecture, the following checklist provides a practical starting point:

Database and Infrastructure

  • [ ] Separate recommendation and decision stores (different schemas with RLS, or separate database instances)
  • [ ] AI service accounts have zero write access to decision tables (verify via IAM policy audit)
  • [ ] Decision tables include a CHECK constraint or trigger validating human session tokens
  • [ ] No shared service accounts between AI inference and decision-writing pipelines
  • [ ] Infrastructure-as-code (Terraform, Pulumi) encodes the separation, making it version-controlled and auditable
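To make the trigger-validation item concrete, here is a minimal sketch using Python's built-in sqlite3 as a stand-in for a production RDBMS (where you would use row-level security and IAM as the checklist describes). The schema and token names are invented for illustration; the point is that the database itself rejects a decision row with no live human session behind it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE human_sessions (token TEXT PRIMARY KEY);
CREATE TABLE decisions (
    id INTEGER PRIMARY KEY,
    outcome TEXT NOT NULL,
    session_token TEXT NOT NULL
);
-- Reject any decision row whose session token is not a live human session.
CREATE TRIGGER require_human_session
BEFORE INSERT ON decisions
WHEN NOT EXISTS (SELECT 1 FROM human_sessions WHERE token = NEW.session_token)
BEGIN
    SELECT RAISE(ABORT, 'decision requires a valid human session token');
END;
""")

conn.execute("INSERT INTO human_sessions VALUES ('sess-123')")
# A decision tied to a real human session succeeds:
conn.execute("INSERT INTO decisions (outcome, session_token) "
             "VALUES ('approved', 'sess-123')")
# An AI service account's write is rejected at the database layer:
try:
    conn.execute("INSERT INTO decisions (outcome, session_token) "
                 "VALUES ('approved', 'ai-svc')")
except sqlite3.IntegrityError:
    pass  # blocked by the trigger, not by application code
```

The enforcement lives in the schema, so even a buggy or compromised application path cannot write a decision without a human session.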

Decision Gateway

  • [ ] Deploy decision gateway as the sole code path for creating decision records
  • [ ] Gateway validates JWT-based human authentication (not API keys)
  • [ ] Minimum dwell time enforcement implemented per decision category
  • [ ] Factor acknowledgment tracking integrated with the review UI
  • [ ] Divergence documentation required when decision differs from AI recommendation
  • [ ] Ed25519 key pair provisioned for cryptographic signing

Review Interface

  • [ ] Dual-pane layout: AI recommendation + source data side by side
  • [ ] AI explanation panel collapsed by default, requiring active expansion
  • [ ] Engagement tracking instrumentation deployed (scroll depth, time-in-viewport, panel interactions)
  • [ ] Submit button disabled until engagement threshold is met
  • [ ] No default selection on decision outcome (reviewer must actively choose)

Anti-Rubber-Stamping

  • [ ] Dwell time gates calibrated per decision category
  • [ ] Rate limiting on decisions per reviewer per hour
  • [ ] Alert thresholds configured for high agreement rates (>95% over a rolling window)
  • [ ] Random deep review percentage set (5-10% of decisions require narrative)
  • [ ] Override rate monitoring dashboard deployed
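The rolling-window alert and rate-limit items can share one monitor. This is a minimal sketch with invented names and illustrative thresholds; a production version would persist events and feed the override-rate dashboard rather than return alert strings.

```python
from collections import deque

class RubberStampMonitor:
    """Rolling-window checks on one reviewer's agreement rate and pace
    (illustrative thresholds; names are hypothetical)."""

    def __init__(self, window: int = 100, agreement_alert: float = 0.95,
                 max_per_hour: int = 30):
        self.events = deque(maxlen=window)  # (timestamp, agreed) pairs
        self.agreement_alert = agreement_alert
        self.max_per_hour = max_per_hour

    def record(self, timestamp: float, agreed: bool) -> list[str]:
        self.events.append((timestamp, agreed))
        alerts = []
        # Agreement-rate alert only once the window is full, to avoid
        # flagging a reviewer on their first few decisions.
        agree_rate = sum(a for _, a in self.events) / len(self.events)
        if len(self.events) == self.events.maxlen and agree_rate > self.agreement_alert:
            alerts.append("agreement-rate")
        # Superhuman throughput check over the trailing hour.
        recent = [t for t, _ in self.events if timestamp - t < 3600]
        if len(recent) > self.max_per_hour:
            alerts.append("rate-limit")
        return alerts
```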

Audit Trail

  • [ ] Append-only log storage provisioned (no UPDATE or DELETE permissions on audit tables)
  • [ ] Hash-chaining implemented (each record includes hash of previous record)
  • [ ] Ed25519 signing implemented for each decision record
  • [ ] Retention policy configured per regulatory requirement (CPRA: duration of processing + 5 years; EU AI Act: 10 years after system decommissioning)
  • [ ] Tamper detection mechanisms tested (verify that hash chain breaks are detected)
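The hash-chaining and tamper-detection items reduce to a short invariant: each record's hash covers both its own body and the previous record's hash, so altering any record breaks every link after it. The sketch below uses only the standard library; the Ed25519 per-record signing from the checklist is omitted because it needs a crypto library, and the record layout is an assumption for illustration.

```python
import hashlib
import json

class AuditLog:
    """Append-only, hash-chained decision log (illustrative record layout)."""

    GENESIS = "0" * 64  # placeholder "previous hash" for the first record

    def __init__(self) -> None:
        self.records: list[dict] = []

    def append(self, decision: dict) -> None:
        prev_hash = self.records[-1]["hash"] if self.records else self.GENESIS
        body = json.dumps(decision, sort_keys=True)  # canonical serialization
        record_hash = hashlib.sha256((prev_hash + body).encode()).hexdigest()
        self.records.append({"body": body, "prev": prev_hash, "hash": record_hash})

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered record breaks it."""
        prev = self.GENESIS
        for rec in self.records:
            expected = hashlib.sha256((prev + rec["body"]).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != expected:
                return False  # chain broken: tampering detected
            prev = rec["hash"]
        return True
```

Testing the tamper-detection item from the checklist is then direct: mutate any stored record body and confirm `verify()` returns False.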

Organizational

  • [ ] Reviewer training program covering AI output interpretation (supports Prong A)
  • [ ] Decision authority documented in role descriptions (supports Prong C)
  • [ ] Escalation procedures defined for flagged rubber-stamping patterns
  • [ ] Regular audit schedule established (quarterly review of override rates, dwell times, engagement scores)

Frequently Asked Questions

Does advisory-only architecture mean AI adds no value?

No. The AI's recommendations are extremely valuable — they surface patterns in data that humans would miss, they accelerate review by pre-analyzing complex inputs, and they improve consistency across reviewers. The architecture changes who makes the final decision, not whether AI contributes to it. In most deployments, reviewers report that AI recommendations make them significantly faster and more accurate. The architecture simply ensures that "faster and more accurate" does not become "automatic and unchecked."

What if reviewers agree with the AI 95% of the time? Does that invalidate the architecture?

High agreement rates are expected when the AI is well-calibrated. The question is not whether reviewers agree — it is whether they could disagree, and whether they demonstrate genuine engagement when they do agree. A 95% agreement rate with 180-second average dwell times, high engagement scores, and documented overrides on the remaining 5% is strong evidence of meaningful review. A 95% agreement rate with 3-second average dwell times is rubber-stamping. The architecture distinguishes between these by measuring how the agreement was reached, not just the agreement itself.

How does this affect system performance and throughput?

Advisory-only architecture introduces latency at the decision point — you cannot make 10,000 decisions per second with a human in the loop. But for the decision categories where ADMT regulations apply (hiring, lending, insurance, housing, healthcare, education), these are already human-timescale decisions. Nobody approves a mortgage in 100 milliseconds. The architecture adds minutes to decisions that already take hours or days. For the vast majority of use cases, the throughput impact is negligible.

Can the architecture be gamed by building an automated system that pretends to be a human?

The defense against this is multi-layered. Authentication requires a valid human session from the identity provider (not an API key). Engagement tracking validates plausible interaction patterns (a script that scrolls at exactly 100px/second is distinguishable from a human). Rate limiting prevents superhuman throughput. Random deep reviews require narrative text that is auditable. And fundamentally, building a system to fake human review would require deliberate circumvention of infrastructure controls — which is both detectable in audit and legally indefensible if discovered.

Is advisory-only architecture the same as "human in the loop"?

No. "Human in the loop" is a design pattern. Advisory-only architecture is an infrastructure enforcement regime. Many systems claim to be "human in the loop" while permitting a reviewer to click "approve" in under a second with no engagement tracking, no dwell time enforcement, and no audit trail. Advisory-only architecture makes these shortcuts impossible — not through training or policy, but through database constraints, API validation, and cryptographic proof. The distinction is between a sign that says "speed limit 25" and a road that physically prevents driving faster than 25.

What does it cost to implement?

For a team with existing cloud infrastructure, the core implementation (write isolation, decision gateway, basic audit trail) can be deployed in 4-8 weeks of engineering effort. The review UI components and anti-rubber-stamping controls add 4-6 weeks. Full deployment including organizational changes (reviewer training, escalation procedures, monitoring dashboards) typically takes 3-4 months. The infrastructure cost is modest — the decision gateway is a lightweight middleware service, and the audit trail storage grows linearly with decision volume. For most organizations, the ongoing infrastructure cost is well under $5,000/month.


The Bottom Line

Advisory-only architecture is not about avoiding regulation through clever legal interpretation. It is about designing systems where the regulation's trigger condition — AI making or substantially replacing human decisions — is architecturally impossible.

The audit trail you generate exceeds what full ADMT compliance would require. The meaningful review enforcement you implement is more rigorous than any regulation demands. The cryptographic proof you produce is stronger than any compliance documentation.

You do not comply with less. You prove more.

And you do it through infrastructure, not paperwork.


Take the free ADMT compliance assessment at admt.ai to see if advisory-only architecture works for your use case. The assessment analyzes your AI systems, maps them against CPRA, EU AI Act, and Colorado triggers, and identifies which systems qualify for architectural avoidance versus which require full compliance.
