Colorado AI Act: What Credit Unions Using AI Lending Need Before June 30
Your credit union deployed Zest AI, Upstart, or Scienaptic. Your vendor runs fair lending testing. There's a bank and credit union exemption in Colorado's AI Act. So you're covered.
You're probably not.
Sixty-three percent of community banks and credit unions have zero AI governance policies in place. Only 29% of financial institutions have implemented any AI compliance controls. If you've been assuming your vendor or your federal charter handles Colorado compliance, you're in the majority — but the majority isn't where you want to be when the statute takes effect.
Colorado's AI Act (SB 24-205) takes effect June 30, 2026. It applies to any organization deploying AI in "consequential decisions" — and credit decisioning is explicitly listed. The conditional exemption for federally regulated institutions sounds reassuring until you map it against what Colorado actually requires. There are four gaps that SR 11-7 doesn't touch.
If your credit union uses AI anywhere in lending — underwriting, pricing, fraud screening, collections — this is what you need to know with 90 days left on the clock.
What Colorado Actually Requires
The Act targets "high-risk AI systems" used in "consequential decisions." Lending decisions are listed as consequential in the statute (Section 6-1-1702(2)). If your credit union uses any AI or machine learning system that influences who gets approved, at what rate, or on what terms, the Act applies to your institution.
Here's what it requires, in plain English:
- Impact assessments. Before deploying a high-risk AI system (or annually after deployment), you must complete and document an assessment covering the system's purpose, intended benefits, known risks, data inputs, outputs, and performance metrics.
- Disparate impact testing. You must evaluate whether the AI system produces discriminatory outcomes across protected classes. This goes beyond what you do for ECOA — Colorado requires testing specific to how the AI model itself generates disparate outcomes, not just whether the final lending decision is fair.
- Consumer disclosure. You must tell consumers that AI was used in the decision affecting them. Not buried in fine print — a clear, meaningful disclosure.
- Consumer appeal rights. Consumers who receive an adverse decision involving AI must have a pathway to appeal and request human review. This isn't the same as an adverse action notice under Reg B. It's a separate process specific to the AI component of the decision.
- Written risk management program. You need a documented program governing how your institution develops, deploys, and monitors AI systems. Not a policy statement — an operational program with assigned responsibilities, review cadence, and escalation procedures.
The penalty for non-compliance: up to $20,000 per violation, enforced by the Colorado Attorney General.
The Exemption Myth
Section 6-1-1706 provides a conditional exemption for banks, credit unions, and other federally regulated financial institutions. The key word is "conditional."
The exemption applies only if your federal regulator (NCUA, in most CU cases) has published guidance that is "substantially equivalent" to the Colorado AI Act's requirements. The logic: if you're already meeting equivalent federal standards, Colorado won't double-regulate you.
The problem is that the federal guidance everyone points to — SR 11-7, the Fed/OCC model risk management framework — doesn't cover what Colorado requires. Here's the gap:
| Requirement | SR 11-7 | Colorado AI Act |
|---|---|---|
| Model validation and documentation | Yes | Yes |
| Ongoing model monitoring | Yes | Yes |
| Independent review | Yes | Yes |
| Consumer disclosure of AI use | No | Required |
| Consumer appeal rights for AI decisions | No | Required |
| AI-specific disparate impact testing | No | Required |
| Annual impact assessment (Colorado format) | No | Required |
SR 11-7 was written in 2011. It covers model risk management broadly — validation, documentation, governance. It does not address the AI-specific obligations Colorado has created: disclosing AI use to consumers, giving consumers appeal rights, testing for AI-specific disparate impact, or producing annual impact assessments in the format the Act specifies.
The NCUA has not published guidance that fills these gaps. In January 2026, the NCUA released an updated AI Resource Hub directing credit unions to NIST AI frameworks and existing safety-and-soundness standards. But the agency was explicit: "supervisory expectations around AI will be grounded in existing, well-known frameworks rather than in a bespoke AI rulebook." Existing frameworks. Not new ones that match Colorado's four additional requirements. The NCUA's position is that AI governance lives within third-party oversight and traditional exam disciplines — which means SR 11-7 is still the ceiling.
Until federal regulators publish AI-specific guidance that matches Colorado's requirements point-for-point, the conditional exemption has no substance.
Claiming the exemption in an exam or enforcement inquiry means asserting that SR 11-7 is "substantially equivalent" to the Colorado AI Act. That assertion has four visible holes in it. An examiner who has read both documents will see them.
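The comparison in the table above can be treated as a literal checklist. The sketch below is a hypothetical illustration (the requirement labels are shorthand, not statutory citations) of mapping SR 11-7's coverage against Colorado's requirements and surfacing the gaps an examiner would see:

```python
# Hypothetical coverage map mirroring the comparison table above.
# Labels are illustrative shorthand, not statutory citations.
SR_11_7_COVERAGE = {
    "model_validation_and_documentation": True,
    "ongoing_model_monitoring": True,
    "independent_review": True,
    "consumer_disclosure_of_ai_use": False,
    "consumer_appeal_rights": False,
    "ai_specific_disparate_impact_testing": False,
    "annual_impact_assessment": False,
}

def exemption_gaps(coverage):
    """Return the requirements existing federal guidance leaves uncovered."""
    return [req for req, covered in coverage.items() if not covered]

gaps = exemption_gaps(SR_11_7_COVERAGE)
print(len(gaps), gaps)  # 4 gaps — the four holes in the exemption argument
```

Running this yields the same four gaps the table shows: consumer disclosure, appeal rights, AI-specific disparate impact testing, and annual impact assessments.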
What Your AI Vendor Covers (and Doesn't)
If you're using Zest AI, Upstart, or Scienaptic AI, their tools are strong for what they do: adverse action explainability, disparate impact analysis for ECOA/Reg B, and lending decision analytics.
But your vendor's compliance is their compliance. Colorado's obligations sit on the deployer — your credit union — not on the vendor.
Here's the split:
What Zest AI / Upstart / Scienaptic typically covers:
- ECOA fair lending analysis (disparate impact testing for Reg B)
- Adverse action reason code generation
- Model performance monitoring
What Colorado requires that your vendor doesn't provide:
- Impact assessment documentation in Colorado's required format
- Consumer-facing disclosure that AI influenced the credit decision
- A consumer appeal process for AI-influenced adverse decisions
- A written institutional risk management program for AI systems
- Disparate impact testing calibrated to Colorado's AI-specific standard (which differs from the ECOA standard because it tests the AI model's contribution to outcomes, not just the final decision)
Your vendor will never claim to cover these. They can't — these are institutional obligations that depend on your policies, your disclosures, and your processes. But many compliance teams assume the vendor "has it handled" because the vendor handles the fair lending piece they're familiar with.
The question to ask internally: apart from what Zest, Upstart, or Scienaptic provides, what documentation does your credit union have governing how AI is used in lending decisions?
The Earnest Precedent
In July 2025, the Massachusetts AG settled with Earnest, a mid-market student loan lender, for $2.5 million. Earnest's AI underwriting model used a "Cohort Default Rate" variable that penalized HBCU and community college graduates. An automated "Knockout Rule" denied anyone without a green card. The company had never conducted fair lending testing on its AI model — not once.
That profile — mid-market lender, AI vendor platform, understaffed compliance function, no AI-specific testing — describes a significant number of credit unions that deployed AI lending in the past 18 months. And the Massachusetts AG did it with general consumer protection law. Colorado's AG has an AI-specific statute with enumerated requirements and $20,000/violation penalties. The enforcement toolkit is sharper.
What to Do Before June 30
Here's a concrete checklist. Not principles — steps.
1. Inventory every AI/ML system used in lending decisions. List every system that touches a credit decision: underwriting models, fraud screening, pricing engines, collections scoring. Include vendor-provided AI (Zest AI, Upstart, Scienaptic) and any internally built models. For each, document what data goes in, what comes out, and how the output influences the lending decision.
2. Determine whether the conditional exemption actually applies to you. Map SR 11-7 against Colorado's four additional requirements (consumer disclosure, appeal rights, AI-specific disparate impact testing, annual impact assessments). If your institution can't document substantial equivalence across all four, the exemption doesn't protect you. Don't assume — map it.
3. Conduct or commission AI-specific disparate impact testing. This isn't the same as your annual HMDA fair lending analysis. Colorado requires testing focused on how the AI model itself contributes to outcome disparities across protected classes. If your vendor provides disparate impact testing, confirm it covers the AI model's specific contribution, not just the aggregate lending outcomes.
4. Draft consumer disclosure language. When AI influences a credit decision, Colorado requires you to disclose that to the consumer. Draft the disclosure language. Determine where in your lending workflow it gets delivered — application acknowledgment, approval letter, adverse action notice, or all three. Your legal team should review, but having a draft is step one.
5. Establish a consumer appeal process for AI-influenced denials. This is separate from adverse action procedures under Reg B. Colorado requires a pathway for consumers to challenge the AI component of a decision and request human review. Define the process: how a consumer invokes it, who conducts the review, what the timeline is, and how the outcome is documented.
6. Document your risk management program. Write down how your institution governs AI in lending. Who is responsible for AI oversight? What's the review cadence? How are model updates approved? How is performance monitored? How are issues escalated? This doesn't have to be a 50-page document. It needs to be specific, operational, and current.
7. Brief your board. Your board or risk committee has likely seen the Earnest settlement, NCUA's AI Resource Hub, or the Colorado AI Act in trade publications. Give them a specific answer: "Here's what Colorado requires. Here are the gaps we've identified. Here's our plan to close them by June 30." That conversation is easier to have now than after an enforcement inquiry.
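To make step 3 concrete, here is a minimal sketch of the kind of group-outcome comparison that model-level disparate impact screening involves. This is an illustration only, not a compliance tool: real testing would use your institution's data, counsel-approved group definitions, and thresholds. The four-fifths (80%) ratio used here is a common fair-lending heuristic, not Colorado's statutory standard, and the sample data is invented.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def adverse_impact_ratios(rates):
    """Each group's approval rate relative to the highest-rate group."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Invented model outputs for two hypothetical groups:
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)

rates = approval_rates(sample)          # A: 0.80, B: 0.55
ratios = adverse_impact_ratios(rates)   # A: 1.0, B: 0.6875
flagged = [g for g, r in ratios.items() if r < 0.8]
print(flagged)  # group B falls below the four-fifths heuristic
```

The key point for Colorado purposes: this comparison should be run on the AI model's own outputs (scores, recommendations), not only on final booked-loan outcomes, so that the model's specific contribution to any disparity is visible.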
Frequently Asked Questions
"We're a small credit union. Colorado isn't going to come after us."
The Colorado AG has consistently enforced consumer protection against mid-market and small institutions — not just large banks. The Earnest settlement was a mid-market lender with fewer than 500 employees. Colorado's $20,000 per violation penalty structure is designed for per-consumer enforcement: if your AI system processes 1,000 lending decisions without proper disclosure, that's 1,000 potential violations. Small institutions aren't below the enforcement threshold — they're often less prepared for it.
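The per-consumer exposure math above is worth making explicit. A quick sketch, using the hypothetical 1,000-decision volume from the paragraph:

```python
PENALTY_PER_VIOLATION = 20_000        # statutory maximum under the Act
undisclosed_decisions = 1_000         # hypothetical annual volume

max_exposure = PENALTY_PER_VIOLATION * undisclosed_decisions
print(f"${max_exposure:,}")  # $20,000,000
```

Theoretical maximum exposure, not a predicted fine — but it shows why "we're too small to matter" inverts the actual risk: penalty exposure scales with decision volume, not asset size.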
"Our vendor handles compliance. We use Zest AI / Upstart / Scienaptic."
Your vendor handles their compliance obligations as the developer. Colorado's deployer obligations — impact assessments, consumer disclosure, appeal rights, institutional risk management — sit on your credit union. Your vendor can't write your risk management program, draft your consumer disclosures, or build your appeal process. These depend on your institution's policies, workflows, and member-facing processes. Ask your vendor directly: "Do you cover Colorado AI Act deployer obligations?" The answer will clarify the gap.
"There's a credit union exemption. We're exempt."
The exemption is conditional on your federal regulator (NCUA) having published guidance "substantially equivalent" to what Colorado requires. SR 11-7 doesn't cover consumer AI disclosure, appeal rights, AI-specific disparate impact testing, or annual impact assessments. The NCUA's 2026 AI Resource Hub explicitly anchors its expectations in existing frameworks — not new AI-specific rules matching Colorado's requirements. Until that changes, the exemption is a sentence in a statute with no federal guidance behind it.
"We'll wait for NCUA to issue guidance before acting."
The NCUA has been clear: AI governance expectations will be grounded in existing frameworks, not new AI-specific rulemaking. Waiting for the NCUA to publish Colorado-equivalent guidance is waiting for something the agency has signaled it doesn't plan to do. Meanwhile, Colorado's statute takes effect June 30 regardless of what the NCUA publishes. The AG doesn't need the NCUA's permission to enforce state law against a Colorado-chartered or Colorado-operating credit union.
We built a free assessment that maps your AI lending systems against Colorado, ECOA, and CPRA requirements in one report. Takes about 3 minutes: admt.ai