
Colorado AI Act: 24 Obligations, 107 Days

March 15, 2026 | 11 min read | ReguLume
colorado-ai-act us-state-laws compliance-deadline nist-ai-rmf

107 days.

That’s the distance between today and June 30, 2026 — the date Colorado’s Artificial Intelligence Act (SB 24-205) takes effect. If you’re building, deploying, or procuring AI systems that touch consumers in Colorado, that number matters more than you think.

Most compliance programs take 8-14 months to stand up. You have three and a half.

The Colorado AI Act is not a long regulation. 24 obligations. Compare that to the EU AI Act’s 334 or GDPR’s 630, and it looks manageable. It is — until you read Section 6-1-1703(1), which creates a safe harbor for organizations that follow the NIST AI Risk Management Framework. One clause. It turns NIST’s 137 voluntary recommendations into a checklist the Colorado Attorney General can ask you to account for.

24 obligations just became 161.


What This Law Actually Covers

The Colorado AI Act doesn’t regulate all AI. It regulates “consequential decisions” — and its definition is narrower and more precise than most people realize.

Section 6-1-1701(3) defines a “consequential decision” as any decision that has a material legal or similarly significant effect on a consumer’s access to:

  • Education enrollment or opportunity
  • Employment or employment opportunity
  • Financial or lending services
  • Essential government services
  • Healthcare services
  • Housing
  • Insurance

Not every AI system. Not chatbots. Not recommendation engines serving up product suggestions. Specifically: AI that makes or substantially contributes to decisions about whether someone gets a loan, a job, insurance coverage, housing, healthcare, or access to government services.

If your AI system recommends products on an e-commerce site — you’re out of scope. If your AI system screens resumes, underwrites insurance policies, or pre-approves credit applications — you’re in scope. The line is sharp.

Two roles carry obligations: deployers (organizations that use high-risk AI systems to make consequential decisions) and developers (organizations that build or substantially modify those systems). Most compliance teams will fall under deployer obligations. Some will carry both.


The 24 Obligations, Grouped

Colorado’s obligations fall into five categories. Here they are — not paraphrased, not summarized into a framework bucket. The actual requirements.

Risk Management (5 obligations)

  1. Implement a risk management policy and program for high-risk AI systems (Section 6-1-1702(2)(a))
  2. Complete an impact assessment before deploying any high-risk AI system (Section 6-1-1702(2)(b))
  3. Review and update the impact assessment on a regular basis and after any intentional, material change to the system (Section 6-1-1702(2)(b))
  4. Identify and document known or foreseeable risks of algorithmic discrimination (Section 6-1-1702(1)(b))
  5. Map data inputs and outputs that the system uses to make consequential decisions (Section 6-1-1702(2)(b))

Transparency and Disclosure (8 obligations)

  1. Notify consumers that an AI system is being used to make a consequential decision about them (Section 6-1-1702(2)(c))
  2. Provide a plain-language description of the AI system’s purpose and how it contributes to the decision (Section 6-1-1702(2)(c))
  3. Inform consumers of their right to opt out or appeal an AI-driven consequential decision (Section 6-1-1702(2)(c))
  4. Publish a statement on the deployer’s website describing the types of high-risk AI systems currently deployed (Section 6-1-1702(2)(d))
  5. Developers must disclose known limitations, intended uses, and reasonably foreseeable misuses of high-risk systems (Section 6-1-1703(2)(a))
  6. Developers must provide documentation sufficient for deployers to complete their impact assessments (Section 6-1-1703(2)(b))
  7. Developers must publish a general statement on their website describing the high-risk AI systems they’ve developed or substantially modified (Section 6-1-1703(2)(c))
  8. Provide consumers with an explanation of the principal reasons for a consequential decision made by the AI system (Section 6-1-1702(2)(c))

Governance and Accountability (5 obligations)

  1. Use reasonable care to protect consumers from known or foreseeable risks of algorithmic discrimination (Section 6-1-1702(1))
  2. Developers must use reasonable care to protect against algorithmic discrimination in systems they make available (Section 6-1-1703(1))
  3. Maintain records sufficient to demonstrate compliance with the Act (Section 6-1-1702(2)(e))
  4. Designate responsibility — ensure human oversight of AI systems making consequential decisions (Section 6-1-1702(2)(a))
  5. Developers must make available to deployers and the AG information necessary to understand the outputs and operation of the system (Section 6-1-1703(2)(a))

Consumer Rights (3 obligations)

  1. Allow consumers to correct inaccurate personal data used by the AI system (Section 6-1-1702(2)(c))
  2. Provide a process for consumers to appeal a consequential decision (Section 6-1-1702(2)(c))
  3. If technically feasible, allow consumers to opt out of the AI system’s use in making a consequential decision about them (Section 6-1-1702(2)(c))

Incident Response and Reporting (3 obligations)

  1. Report to the AG within 90 days of discovering that a high-risk AI system has caused algorithmic discrimination (Section 6-1-1702(2)(f))
  2. Developers must report to the AG and known deployers within 90 days of discovering algorithmic discrimination (Section 6-1-1703(2)(d))
  3. Cooperate with the AG during any investigation related to the Act (Section 6-1-1706)

That’s the complete list. Twenty-four requirements. Each one is an auditable control point.
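
Each of those control points can live in a structured register rather than a spreadsheet tab nobody opens. Here’s a minimal sketch in Python — the field names, status tracking, and the two sample entries are illustrative, not drawn from the statute or any official schema.

```python
from dataclasses import dataclass, field

@dataclass
class Obligation:
    """One auditable control point under the Colorado AI Act."""
    ref: str        # statutory citation, e.g. "6-1-1702(2)(b)"
    category: str   # one of the five groups above
    summary: str    # plain-language requirement
    role: str       # "deployer", "developer", or "both"
    evidence: list[str] = field(default_factory=list)  # artifacts proving compliance

# Two of the twenty-four obligations, as illustrative entries
register = [
    Obligation("6-1-1702(2)(b)", "Risk Management",
               "Complete an impact assessment before deployment", "deployer"),
    Obligation("6-1-1703(2)(a)", "Transparency and Disclosure",
               "Disclose known limitations, intended uses, and foreseeable misuses",
               "developer"),
]

# Which obligations still lack supporting evidence?
gaps = [o.ref for o in register if not o.evidence]
print(gaps)  # -> ['6-1-1702(2)(b)', '6-1-1703(2)(a)']
```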


The NIST Safe Harbor — One Clause That Changes Everything

Section 6-1-1703(1) contains five words that should have every compliance team’s attention: “rebuttable presumption of reasonable care.”

Here’s what it means. If a developer or deployer can demonstrate compliance with “a nationally or internationally recognized risk management framework” for AI — and the statute names NIST AI RMF and ISO 42001 explicitly — they receive a legal presumption that they’ve exercised reasonable care under the Act.

Not immunity. Not a guarantee. A presumption that the AG must rebut to prove liability.

In practice, that distinction is enormous.

Without the safe harbor, you’re defending your compliance posture from scratch in every enforcement action. With it, the burden shifts. The AG has to prove your NIST AI RMF implementation was inadequate — not just that discrimination occurred.

This is why NIST AI RMF’s 137 practices suddenly matter for Colorado compliance. They’re not “best practices” anymore. They’re your evidence base for a legal defense.

What NIST AI RMF Actually Requires

The framework organizes into four functions — Govern, Map, Measure, Manage — with 137 individual practices across them.

Govern (37 practices): Organizational policies, roles, accountability structures, risk appetite, compliance monitoring. The governance scaffolding that proves your AI program isn’t ad hoc.

Map (43 practices): Context and use-case identification. Who’s affected by your AI system? What are the intended and unintended impacts? What data goes in, what decisions come out? The mapping function generates the raw material for Colorado’s required impact assessments.

Measure (32 practices): Quantitative and qualitative assessments of AI system performance, bias, reliability, and robustness. This is where you prove your system doesn’t discriminate — with data, not assertions.

Manage (25 practices): Risk treatment, mitigation, monitoring, and incident response. The ongoing operational discipline that demonstrates continuous compliance, not a point-in-time checkbox.

Colorado’s 24 obligations map directly to specific NIST AI RMF practices. Impact assessments align with MAP 1.1-1.6. Algorithmic discrimination monitoring aligns with MEASURE 2.6-2.11. Incident reporting aligns with MANAGE 4.1-4.3.

The alignment isn’t coincidental. Colorado’s drafters built the safe harbor around NIST deliberately.
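
That crosswalk is small enough to encode directly. A sketch below, covering only the three alignments named above — the identifiers follow NIST’s FUNCTION n.n naming, and a real implementation would extend to all 24 obligations.

```python
# Crosswalk from Colorado AI Act obligations to NIST AI RMF subcategories.
# Encodes only the three alignments named in the text.
CROSSWALK = {
    "Impact assessments": [f"MAP 1.{i}" for i in range(1, 7)],                          # MAP 1.1-1.6
    "Algorithmic discrimination monitoring": [f"MEASURE 2.{i}" for i in range(6, 12)],  # MEASURE 2.6-2.11
    "Incident reporting": [f"MANAGE 4.{i}" for i in range(1, 4)],                       # MANAGE 4.1-4.3
}

for obligation, practices in CROSSWALK.items():
    print(f"{obligation}: {', '.join(practices)}")
```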


Who’s Actually Affected

“Any deployer or developer of high-risk AI systems making consequential decisions in Colorado.”

That definition is broader than it sounds.

You don’t need to be headquartered in Colorado. You don’t need a physical presence. If your AI system makes or contributes to a consequential decision about a Colorado consumer — regardless of where your servers sit or your company is incorporated — the Act applies.

Financial services firms using AI for credit scoring, loan underwriting, or fraud detection that affects Colorado residents. In scope.

Insurance companies using algorithmic risk assessment for policy pricing or claims adjudication in Colorado. In scope.

Employers using AI-powered resume screening, interview analysis, or promotion recommendation tools — if any candidates or employees are Colorado residents. In scope.

Healthcare organizations using AI for treatment recommendations, insurance pre-authorization, or patient triage in Colorado. In scope.

HR-tech and fintech SaaS vendors whose customers deploy their AI tools in Colorado. In scope as developers — even if they never interact with a Colorado consumer directly.

The developer obligations are particularly significant for SaaS companies. If your client uses your AI tool to make consequential decisions in Colorado, you carry disclosure, documentation, and incident reporting obligations under Sections 6-1-1703(2)(a)-(d). Your client’s compliance depends partly on information only you can provide.


What 107 Days Actually Means

Compliance timelines are not calendar timelines. Here’s why 107 days is less time than it looks.

Weeks 1-3: Scoping. Identify which AI systems make or contribute to consequential decisions. Most organizations undercount on the first pass — that credit scoring model is obvious, but what about the chatbot that pre-qualifies loan applicants? The resume ranking tool that feeds into interview scheduling? Scoping takes longer than expected because the “consequential decision” definition cuts across departments.
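
One way to force rigor into that first pass: inventory every system against the statute’s seven categories and flag any overlap. A minimal sketch, assuming a hand-maintained inventory — the system names and tags are hypothetical.

```python
# The seven consequential-decision categories from Section 6-1-1701(3)
CATEGORIES = {
    "education", "employment", "financial_services", "government_services",
    "healthcare", "housing", "insurance",
}

# Hypothetical inventory: system name -> decision areas it touches
inventory = {
    "credit-scoring-model": {"financial_services"},
    "loan-prequal-chatbot": {"financial_services"},  # the kind of system first passes miss
    "resume-ranker": {"employment"},
    "product-recommender": {"marketing"},            # out of scope
}

in_scope = sorted(name for name, areas in inventory.items() if areas & CATEGORIES)
print(in_scope)  # -> ['credit-scoring-model', 'loan-prequal-chatbot', 'resume-ranker']
```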

Weeks 4-8: Impact assessments. Each high-risk system needs a documented impact assessment before deployment. Section 6-1-1702(2)(b) requires assessment of inputs, outputs, known limitations, risk of algorithmic discrimination, and the data used to train or customize the system. If you have 5 high-risk systems, you need 5 assessments. Each one requires input from engineering, legal, and the business unit that owns the system.
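
Those statutory elements translate naturally into a working template. A sketch — the keys mirror the elements listed above, since the statute prescribes content rather than format, and the sign-off fields are an assumption about how the cross-functional input gets recorded.

```python
# Skeleton impact assessment keyed to the elements in Section 6-1-1702(2)(b).
impact_assessment = {
    "system": "resume-ranker",               # hypothetical system under review
    "inputs": [],                            # data fields the system consumes
    "outputs": [],                           # decisions or scores it produces
    "known_limitations": [],
    "algorithmic_discrimination_risks": [],  # known or foreseeable risks
    "training_and_customization_data": [],   # data used to train or customize
    "last_reviewed": None,                   # update regularly and after material changes
    "sign_offs": {"engineering": None, "legal": None, "business_owner": None},
}

incomplete = [k for k, v in impact_assessment.items() if v in ([], None)]
print(f"Sections still incomplete: {incomplete}")
```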

Weeks 6-10: Policy and governance. Risk management policy and program. Consumer notification procedures. Appeal and opt-out processes. AG reporting procedures. Website disclosures. None of these are technically difficult. All of them require cross-functional coordination that takes longer than the document itself.

Weeks 8-14: Implementation and testing. Consumer-facing notices need to be built into product flows. Appeal processes need to actually work. Opt-out mechanisms need to be technically feasible and tested. Record-keeping systems need to capture the evidence you’ll need if the AG comes asking.

Stack those phases and the arithmetic is clear. They overlap, but the critical path still runs 14 weeks: 98 of your 107 days, with nine days of slack. Starting today, you’re compressing an 8-to-14-month timeline into less than four months. Starting next month, you’re past the point where a thorough implementation is possible without cutting scope.


Five Steps for Teams Starting Now

If you haven’t started, here’s where to focus. Not theory. The operational sequence.

1. Inventory your AI systems against the “consequential decision” definition. Be aggressive with inclusion. If you’re debating whether a system qualifies, it probably does. The AG won’t give you credit for a narrow interpretation you can’t defend.

2. Prioritize NIST AI RMF alignment. The safe harbor is the most valuable provision in the Act. Start with the Govern function — policies, accountability, risk appetite — because it’s the foundation everything else sits on and it doesn’t require engineering work. You can make progress in weeks, not months.

3. Draft impact assessments for your highest-risk systems first. Credit decisions. Employment screening. Insurance underwriting. These are the systems most likely to draw AG scrutiny and the ones where algorithmic discrimination risk is highest.

4. Build consumer notification into your product roadmap now. The transparency obligations — notice, explanation, appeal, opt-out — require UI changes, copy review, and legal sign-off. These have the longest lead time in most organizations because they span engineering, product, legal, and compliance.

5. Establish your AG reporting procedure before you need it. Section 6-1-1702(2)(f) gives you 90 days to report discovered algorithmic discrimination. If you discover it on July 1 and don’t have a reporting procedure, you’ll spend the first two weeks figuring out process instead of responding. Build the playbook now, when you’re not under pressure.
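
The 90-day clock itself is trivial to compute; the point of the playbook is that nobody should be computing it for the first time mid-incident. A minimal sketch:

```python
from datetime import date, timedelta

def ag_report_deadline(discovery: date) -> date:
    """Last day to report discovered algorithmic discrimination to the
    Colorado AG under Section 6-1-1702(2)(f): 90 days from discovery."""
    return discovery + timedelta(days=90)

# Discovery on July 1, 2026 (the example above) -> report by September 29
print(ag_report_deadline(date(2026, 7, 1)))  # 2026-09-29
```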


The Bigger Picture

Colorado is first. It won’t be last.

Texas TRAIGA carries 99 obligations across healthcare, financial services, education, and criminal justice. Illinois AIVIA imposes 62 obligations on AI in video interviews. NYC LL144’s 48 obligations target automated employment decisions with mandatory bias audits. Oregon and California have enacted AI companion safety laws with 59 obligations between them.

Each law has different scope, different definitions, different enforcement mechanisms. An AI system deployed across multiple states faces overlapping — and occasionally contradictory — requirements.

Colorado’s NIST safe harbor is the clearest signal yet of where US AI regulation is heading: state-level enforcement with federal framework alignment. Organizations that invest in NIST AI RMF compliance now aren’t just preparing for Colorado. They’re building the foundation for every state law that follows.

The deadline is June 30. The compliance timeline started months ago.


Browse all 24 Colorado AI Act obligations — and cross-reference them against NIST AI RMF’s 137 practices — at regulume.com/compliance. No account required.


ReguLume maps 2,964 obligations across 15 AI and data regulations. The Colorado AI Act’s 24 obligations and NIST AI RMF’s 137 practices are searchable, filterable, and cross-referenced in the obligation database.

Map obligations to your AI systems

Score your compliance posture in hours, not months. Identify gaps and generate board-ready reports. Plans start at $149/mo.

Get Started