Colorado AI Act compliance guide

The first comprehensive US state AI law. Colorado SB 24-205 requires deployers and developers to prevent algorithmic discrimination in consequential decisions. Effective February 1, 2026, with affirmative defenses available through NIST AI RMF or ISO 42001 compliance.

What is the Colorado AI Act?

The Colorado Artificial Intelligence Act (Senate Bill 24-205) is the first comprehensive US state law regulating artificial intelligence systems. Signed into law on May 17, 2024, it establishes obligations for developers and deployers of high-risk AI systems used in consequential decisions.

Why this matters: Colorado's law sets a precedent for US AI regulation. It focuses on preventing algorithmic discrimination while providing affirmative defenses for organizations that adopt recognized AI risk management frameworks.

Effective date

February 1, 2026

Affirmative defense

NIST AI RMF or ISO 42001 compliance

Complements NIST AI RMF implementation and EU AI Act compliance.

Who needs to comply?

Deployers using AI in consequential decisions

Organizations using high-risk AI for employment, education, finance, healthcare, housing, insurance, legal or government services

AI system developers

Companies developing or substantially modifying AI systems intended for deployment in Colorado

Employers using AI hiring tools

Companies using algorithmic systems for resume screening, candidate evaluation or employment decisions

Financial institutions

Lenders and financial service providers using AI for credit decisions, loan approvals or financial services

Healthcare providers

Organizations using AI for diagnosis, treatment recommendations or healthcare service delivery

Government agencies

Colorado state and local agencies deploying AI systems affecting residents

How VerifyWise supports Colorado AI Act compliance

VerifyWise provides a Colorado SB 24-205 preset that operates in impact assessment mode, delivering structured assessment templates covering every section the law requires.

Colorado SB 24-205 requirements and VerifyWise coverage:

  • Impact assessment for algorithmic discrimination: structured impact assessment template with dedicated sections for each statutory requirement
  • Coverage of 13 protected classes: pre-configured categories including race, sex, disability, age, gender identity, and more
  • NIST AI RMF alignment documentation: dedicated assessment section for NIST AI RMF alignment evidence
  • Risk analysis and mitigation records: required metadata fields for risk analysis, performance metrics, and mitigation measures

Additional compliance capabilities

Risk management policy framework

Generate Colorado-compliant risk management policies that address algorithmic discrimination prevention. The platform maintains the documentation and policy structure required under SB 24-205 for both deployers and developers.

Addresses: Deployer obligation: Risk management policy and program

Impact assessment workflows

Conduct and document algorithmic discrimination impact assessments for high-risk AI systems. The platform captures purpose, deployment metrics, transparency measures and risk mitigation documentation required before deployment.

Addresses: Deployer obligation: Impact assessment before deployment

Consumer notification management

Track consumer notification obligations for consequential decisions. The platform helps you identify when notifications are required and maintains records of disclosure compliance for high-risk AI system usage.

Addresses: Deployer obligation: Consumer notification and opt-out

Annual review scheduling

Schedule and document annual reviews of high-risk AI systems. The platform tracks review deadlines, maintains historical review records and ensures continuous compliance with ongoing monitoring requirements.

Addresses: Deployer obligation: Annual impact assessment reviews

Developer documentation tracking

Maintain documentation of AI system capabilities, known limitations, high-risk use cases and data usage. The platform generates the technical documentation developers must provide to deployers under Colorado law.

Addresses: Developer obligation: Documentation and disclosure

Affirmative defense preparation

Demonstrate compliance with NIST AI RMF or ISO 42001 to establish affirmative defense against penalties. The platform maps your controls to recognized frameworks and generates evidence packages for enforcement proceedings.

Addresses: Both roles: Affirmative defense through framework compliance

All compliance activities include timestamps, assigned owners and audit trails. This systematic documentation demonstrates good faith compliance efforts and supports affirmative defense preparation.

Complete Colorado AI Act requirements coverage

VerifyWise provides dedicated tooling for all compliance obligations

  • 12 compliance requirement areas
  • 12 areas with dedicated tooling
  • 100% coverage of core obligations

  • Risk management (4/4): policies, assessments, reviews, documentation
  • Impact assessments (3/3): purpose, benefits, deployment metrics
  • Consumer notifications (2/2): high-risk disclosure, opt-out rights
  • Annual compliance (3/3): annual reviews, updates, records

Built for Colorado AI Act compliance from the ground up

Algorithmic discrimination tracking

Monitor for discrimination across protected classes

Impact assessment templates

Colorado-specific templates with all required elements

Affirmative defense preparation

NIST AI RMF and ISO 42001 compliance evidence packages

Multi-state compliance

Integrated view across Colorado, Texas, and federal requirements

Key compliance requirements

Core obligations under the Colorado AI Act

Algorithmic discrimination prevention

High-risk AI systems must not discriminate on the basis of a protected classification when making consequential decisions.

  • Discrimination prevention protocols
  • Protected class monitoring
  • Bias detection systems
  • Corrective action procedures

Impact assessments

Deployers must conduct impact assessments before using high-risk AI systems in consequential decisions.

  • Purpose and use case documentation
  • Benefit and cost analysis
  • Deployment and usage metrics
  • Data governance transparency

Consumer notifications

Consumers must be notified when high-risk AI systems are used in consequential decisions affecting them.

  • Clear disclosure requirements
  • Opt-out mechanism provision
  • Alternative decision-making options
  • Statement of AI system purpose

Annual reviews

Impact assessments must be reviewed and updated at least annually or when substantial modifications occur.

  • Annual review scheduling
  • Performance metric updates
  • Risk mitigation effectiveness
  • Documentation of changes
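The annual review cadence described above lends itself to simple scheduling logic. A minimal sketch, assuming a hypothetical helper function; the 365-day interval approximates "at least annually," and the exact statutory triggers for re-review should be confirmed with counsel:

```python
from datetime import date, timedelta

def next_review_due(last_review: date, substantially_modified: bool) -> date:
    """Return when the next impact-assessment review is due.

    Reviews are required at least annually, or immediately when a
    substantial modification occurs (illustrative logic only).
    """
    if substantially_modified:
        return date.today()  # a substantial modification triggers re-review now
    return last_review + timedelta(days=365)
```

A compliance tracker could run this check nightly across all high-risk systems and surface any reviews coming due.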

Consequential decision areas

High-risk AI systems are those used in decisions that materially affect these areas

Employment

Hiring, firing, promotion, compensation, work assignment

Education

Enrollment, scholarships, financial aid, admissions

Financial services

Credit, lending, loan approval, financial products

Healthcare

Diagnosis, treatment, care access, insurance coverage

Housing

Rental applications, tenant screening, housing access

Insurance

Pricing, underwriting, claims decisions, coverage

Legal services

Legal representation access, case evaluation

Government services

Benefits eligibility, public service access
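The decision areas above can be encoded as a simple classification check during an AI system inventory. A minimal sketch, assuming a hypothetical `AISystem` record; the field names and area labels are illustrative, not statutory terms:

```python
from dataclasses import dataclass

# Covered consequential-decision areas, mirroring the list above.
CONSEQUENTIAL_AREAS = {
    "employment", "education", "financial_services", "healthcare",
    "housing", "insurance", "legal_services", "government_services",
}

@dataclass
class AISystem:
    name: str
    decision_areas: set[str]   # areas where the system's output is used
    substantial_factor: bool   # does it make or substantially assist the decision?

def is_high_risk(system: AISystem) -> bool:
    """A system is high-risk if it is a substantial factor in a
    consequential decision in any covered area (illustrative check)."""
    return system.substantial_factor and bool(
        system.decision_areas & CONSEQUENTIAL_AREAS
    )
```

In practice, an inventory tool would apply a check like this during Phase 1 classification, then route flagged systems into the impact assessment workflow.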

Deployer vs developer obligations

Different requirements based on your role in the AI lifecycle

Deployer

Organizations that use high-risk AI systems to make or substantially assist consequential decisions

Key obligations

  • Implement risk management policy and program
  • Conduct impact assessments before deployment
  • Provide consumer notifications
  • Establish opt-out mechanisms
  • Conduct annual reviews
  • Maintain compliance documentation
  • Report discrimination discoveries to AG

Compliance deadline: February 1, 2026

Developer

Persons or entities that develop or intentionally and substantially modify AI systems

Key obligations

  • Provide general information statement
  • Document known limitations
  • Identify intended high-risk uses
  • Disclose data usage requirements
  • Provide deployer management materials
  • Make documentation publicly available
  • Report discrimination discoveries to AG

Compliance deadline: February 1, 2026

12-month implementation roadmap

A practical path to Colorado AI Act compliance before February 1, 2026

Phase 1: Months 1-3

Inventory and classification

  • Identify all AI systems in use
  • Classify high-risk vs non-high-risk systems
  • Determine deployer vs developer roles
  • Identify consequential decision points
  • Establish governance committee

Phase 2: Months 4-6

Risk management foundation

  • Develop risk management policy
  • Create impact assessment templates
  • Design consumer notification processes
  • Establish opt-out mechanisms
  • Implement documentation systems

Phase 3: Months 7-10

Impact assessments

  • Conduct impact assessments for high-risk systems
  • Document purpose and benefits
  • Analyze deployment metrics
  • Evaluate discrimination risks
  • Implement mitigation measures

Phase 4: Months 11-12

Compliance activation

  • Activate consumer notification systems
  • Deploy opt-out mechanisms
  • Schedule annual review cycles
  • Train staff on obligations
  • Prepare affirmative defense evidence

Penalties and enforcement

Understanding the enforcement landscape and affirmative defenses

Colorado Attorney General enforcement

The Colorado Attorney General has sole enforcement authority under the Act. Private right of action is not available. The AG may investigate violations, issue civil investigative demands, and bring enforcement actions for non-compliance. Contact the AG office at coag.gov.

Deployer violations

$20,000 per violation

Per consequential decision affected

Developer violations

$20,000 per violation

Per deployer affected

Discrimination discovery (failure to report)

Additional penalties

For not notifying AG within 90 days

Cure period available

60 days to cure

First violation with good faith compliance
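The 90-day reporting window and 60-day cure period translate into straightforward deadline arithmetic. A minimal sketch; the function names are assumptions for illustration, and the statute's exact trigger events should be confirmed with counsel:

```python
from datetime import date, timedelta

AG_NOTICE_WINDOW = timedelta(days=90)  # report discrimination discovery to the AG
CURE_WINDOW = timedelta(days=60)       # cure period for a first violation

def ag_notice_deadline(discovery: date) -> date:
    """Last day to notify the Colorado AG after discovering discrimination."""
    return discovery + AG_NOTICE_WINDOW

def cure_deadline(violation_notice: date) -> date:
    """Last day of the cure period after notice of a first violation."""
    return violation_notice + CURE_WINDOW
```

For example, a discrimination discovery on March 1, 2026 would need to be reported by May 30, 2026 under the 90-day window.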

Affirmative defense through framework compliance

Organizations that substantially comply with a recognized AI risk management framework and continue reasonable compliance efforts can establish an affirmative defense against monetary penalties (though not against other enforcement actions).

NIST AI RMF

Compliance with NIST AI Risk Management Framework

Use case: Demonstrates systematic risk management approach

VerifyWise: Full NIST AI RMF implementation and evidence generation

ISO 42001

Certification to ISO 42001 AI Management System

Use case: Shows recognized international AI governance standard

VerifyWise: ISO 42001 readiness assessment and controls mapping

Policy templates

Colorado AI Act policy templates

Access ready-to-use policy templates aligned with Colorado AI Act requirements, NIST AI RMF, and ISO 42001

Deployer policies

  • Risk Management Policy
  • Impact Assessment Policy
  • Consumer Notification Policy
  • Opt-Out Procedures
  • Annual Review Policy
  • Discrimination Reporting
  • Plus 3 more policies

Developer policies

  • Documentation Standards
  • Known Limitations Disclosure
  • High-Risk Use Case Policy
  • Data Usage Requirements
  • Deployer Support Materials
  • Public Information Policy
  • Plus 2 more policies

Shared policies

  • Algorithmic Discrimination Prevention
  • Protected Class Monitoring
  • Bias Detection & Mitigation
  • AG Reporting Procedures
  • Affirmative Defense Preparation
  • Multi-State Compliance
  • Plus 4 more policies

Frequently asked questions

Common questions about Colorado AI Act compliance

When does the Colorado AI Act take effect?

The Colorado Artificial Intelligence Act (SB 24-205) was signed into law on May 17, 2024, and becomes effective on February 1, 2026. Organizations have until this date to establish compliance programs. View the official bill text at leg.colorado.gov/bills/sb24-205.

What counts as a high-risk AI system?

A high-risk AI system is any AI system that, when deployed, makes or is a substantial factor in making a consequential decision. Consequential decisions include those affecting employment, education, financial services, healthcare, housing, insurance, legal services, or government benefits and services.

Am I a deployer or a developer?

You're a deployer if you use a high-risk AI system to make consequential decisions in Colorado. You're a developer if you create or substantially modify AI systems intended for use as high-risk systems. Some organizations may be both deployers (for AI they use) and developers (for AI they build).

What are the key obligations for each role?

Deployers must implement risk management policies, conduct impact assessments, notify consumers, and provide opt-out mechanisms. Developers must provide documentation, disclose known limitations, identify high-risk uses, and make certain information publicly available. Both must report discrimination discoveries to the Colorado Attorney General.

What is algorithmic discrimination under the Act?

Algorithmic discrimination occurs when an AI system unlawfully differentiates in treatment or impact based on protected classifications such as age, race, disability, ethnicity, religion, sex, or other classes protected under Colorado or federal law. The Act prohibits this in consequential decisions.

What must an impact assessment include?

Impact assessments must document the AI system's purpose, intended use cases and benefits, categories of data processed, known limitations, transparency measures, metrics for performance and fairness evaluation, risk mitigation steps, and post-deployment monitoring procedures. Assessments must be reviewed annually.

What consumer notifications are required?

When using high-risk AI systems in consequential decisions, you must provide clear and conspicuous notice to affected consumers. The notice must explain that AI is being used, the system's purpose, the nature of the consequential decision, and how to opt out or request an alternative process.

What opt-out rights must deployers provide?

Deployers must provide consumers the opportunity to opt out of profiling in furtherance of decisions that produce legal or similarly significant effects. This means offering an alternative human-based decision-making process when consumers request it, though limited exceptions apply.

Is there an affirmative defense against penalties?

Yes. If you substantially comply with a recognized risk management framework like NIST AI RMF or ISO 42001 and continue making reasonable efforts to comply with the Act, you can establish an affirmative defense against penalties. This doesn't eliminate enforcement actions but protects against financial penalties.

Does the Act apply to organizations outside Colorado?

The Act applies when high-risk AI systems are used to make consequential decisions concerning Colorado residents, regardless of where the deployer or developer is located. If your AI systems affect people in Colorado in covered decision areas, you need to comply.

What must be reported to the Colorado Attorney General?

Both deployers and developers must notify the Colorado Attorney General within 90 days of discovering that a high-risk AI system has caused algorithmic discrimination. The AG office can be contacted through coag.gov. Failure to report can result in additional penalties.

Are any AI systems exempt?

Yes. The Act exempts AI systems used solely to detect, prevent or investigate fraud; perform narrow procedural tasks; conduct anti-malware/cybersecurity functions; and certain AI used in compliance with federal laws. Financial institutions complying with federal AI risk management requirements also receive limited exemptions.

How does the Colorado AI Act compare to the EU AI Act?

While Colorado focuses on algorithmic discrimination in consequential decisions, the EU AI Act uses a broader risk-tier classification system. Both are risk-based approaches. Organizations operating globally should implement controls that satisfy both frameworks' requirements.

What records must organizations maintain?

You must maintain impact assessments, risk management policies, consumer notifications, opt-out requests and responses, annual review documentation, and records of any discrimination discoveries and remediation efforts. These records may be requested by the Colorado Attorney General during enforcement actions.

Can VerifyWise help with Colorado AI Act compliance?

Yes. VerifyWise provides impact assessment workflows, risk management policy templates, consumer notification tracking, annual review scheduling, and affirmative defense preparation. We map your controls to NIST AI RMF and ISO 42001 to help establish the affirmative defense protection.

Ready for Colorado AI Act compliance?

Start your compliance journey with our Colorado AI Act assessment and implementation tools. Prepare before the February 1, 2026 effective date.
