Compliance
Apr 20, 2026
25 min read

Colorado SB 21-169 compliance playbook for insurers

A step-by-step compliance playbook for Colorado SB 21-169. Model inventory, quantitative bias testing, ECDIS oversight, remediation and annual attestation for life, auto and health insurers, including a worked example.

How Colorado regulates insurance AI today

For almost five years, Colorado has been the state that matters most for insurance AI regulation in the United States, and 2026 is the year the full weight of that settles on insurers. If your business writes policies, prices risk, pays claims or designs models for a carrier licensed in the state, this post is for you. We wrote it to be the single thing you bookmark when someone on the compliance team asks how to run this well, rather than one more explainer on what the statute says.


Three concrete changes have landed in the last twelve months, and none of them made much noise outside specialist circles. On October 15, 2025, the bias testing regime under C.R.S. §10-3-1104.9 stopped being a life insurance rule and became a rule for private passenger auto and health benefit plans as well. Most large personal lines carriers are now in scope and many of them are not yet operating at the standard the Division of Insurance expects. On December 1, 2025, life insurers filed their second annual attestation under Regulation 10-1-1, which means the Division now has two years of data, two years of comparisons and two years of reason to go deeper on filings that look suspiciously clean. And in June 2026 the broader Colorado AI Act (SB 24-205) will come into force, enforced by the Attorney General rather than the Division, adding a second enforcement channel that every Colorado-licensed insurer has to handle on top of SB 21-169.

This playbook walks through what a serious SB 21-169 program looks like from the inside. It covers the statute and its implementing regulations, the operational components you need to stand up, the quantitative methods the Division has come to expect, a worked bias test using a fictional mid-sized auto insurer, the common failure modes that turn up during market conduct exams, and how the law relates to the two adjacent AI regimes your compliance team will also have to satisfy. If you only need the short explainer, our SB 21-169 solution page handles that. If you need something you can hand to the risk committee, read on.

What the statute and regulation ask of you

Most summaries of SB 21-169 blur a distinction that turns out to matter quite a bit. The bill itself, signed by Governor Polis in July 2021, is an enabling statute. It lives in the Colorado Revised Statutes as C.R.S. §10-3-1104.9 under the banner "Protecting Consumers from Unfair Discrimination in Insurance Practices," and its substance is short. The statute prohibits insurers from using algorithms, predictive models or external consumer data if the result is unfair discrimination against a protected class, and it directs the Colorado Division of Insurance to write the rules that give those words operational meaning.

Those rules live in Regulation 10-1-1, finalized in September 2023 and effective November 14, 2023. For life insurance, 10-1-1 is the live rulebook today, and it is where every reference to quantitative testing, governance, documentation and annual attestation comes from. When a compliance team says we comply with SB 21-169, what they usually mean is that they comply with Regulation 10-1-1, because the statute on its own is too general to comply with directly.

Private passenger auto and health benefit plans are a different story. The October 2025 scope expansion put them inside the statute, but the equivalent of Regulation 10-1-1 for auto and for health is still in rulemaking as of early 2026. The Division has signaled it intends to follow the same structural approach it used for life, so most carriers are building to the 10-1-1 standard by default and hoping the final sector rules land close to it. That is a defensible bet, but it is still a bet.

Four terms the regulation relies on

The statute and Regulation 10-1-1 use four pieces of vocabulary so often that nothing else makes sense until you pin them down:

  • ECDIS (External Consumer Data and Information Sources) is any data the insurer did not collect directly from the consumer: credit-based insurance scores, purchase history, telematics signals, wearable device data, geographic features, third-party broker files, consortium feeds. The Division's data-source review is mostly a review of ECDIS.
  • Predictive model covers any statistical or machine-learning construct that produces a score, a class or a prediction feeding an insurance decision. Tree models, GLMs, neural networks, hybrid rule-plus-model stacks are all in scope if they influence a regulated outcome.
  • Algorithm is broader still. The regulation uses it for any computational process whose output informs an insurance decision, which means a hand-coded rating engine with no learned parameters can qualify if it drives consequential outcomes.
  • Unfair discrimination is the phrase the regulation exists to prevent. It means differential treatment or disparate impact on a protected class not justified by a legitimate actuarial basis, operationalized through the quantitative testing regime described later in this post.

Who enforces SB 21-169

The Colorado Division of Insurance enforces SB 21-169, which is a deliberate choice with real consequences for how compliance programs are built. The Division is the same agency that approves your rating plans, handles market conduct examinations and oversees the day-to-day operation of your business in the state. It knows your policy forms, it knows your reserving practices, and it tends to ask questions that a general-purpose AI regulator would never think of. That makes the evidence bar higher than it would be elsewhere. Staff will compare your bias testing to the rating memoranda you filed, and they will notice inconsistencies.

The Attorney General, by contrast, owns enforcement of SB 24-205, the Colorado AI Act that layers on top of SB 21-169 for every Colorado-licensed insurer. Two regulators, two enforcement styles, one business. The comparison table later in this post explains how the regimes differ; for now, keep in mind that a Colorado-licensed insurer using AI has at least two distinct compliance relationships to manage, and three once the NAIC Model Bulletin is also counted.

How the rollout happened

Figure: Colorado SB 21-169 rollout timeline, 2021 to 2026. Life insurance went first, auto and health followed in late 2025, and SB 24-205 arrives in June 2026.

The pace of the rollout has been deliberately uneven. Between the July 2021 signing and the November 2023 launch of Regulation 10-1-1, the Division spent two years consulting with actuaries, consumer advocates and carriers on how to translate the statute's prohibition into measurable practice. Life insurers filed their first attestation in December 2024 and a second in December 2025. Between those two filings, the October 2025 scope expansion pulled auto and health into the statute before their sector rules were ready.

Date          | Event
July 2021     | SB 21-169 signed by Governor Polis
November 2023 | Regulation 10-1-1 effective for life insurance
December 2024 | First life insurer annual attestation filed
October 2025  | C.R.S. §10-3-1104.9 scope expanded to private passenger auto and health benefit plans
December 2025 | Second annual life insurer attestation filed
June 30, 2026 | SB 24-205 takes effect and applies to every Colorado-licensed insurer using high-risk AI

The seven-part compliance program

Running an SB 21-169 program is less about policy prose and more about operational discipline. The Division expects artefacts, not assurances, and those artefacts come from seven components that compound on each other. An inventory without governance is just a list. Governance without testing is just a committee. Testing without vendor oversight leaves the largest source of risk unmanaged. Each component makes the next one possible, and each one feeds evidence into the annual attestation that closes the cycle.

Program components

  1. Build the inventory. Every algorithm, predictive model and ECDIS feed in scope.
  2. Set up governance. Written policies, defined roles, senior management ownership.
  3. Run quantitative testing. Disparate impact analysis at a defensible cadence.
  4. Worked example: Mesa Mutual's bias test on a credit-based auto rating factor.
  5. Oversee vendors and ECDIS. The obligation stays with the insurer.
  6. Handle failed tests. Corrective action, remediation, documentation.
  7. File the annual attestation. Senior management sign-off to the Division.

Part 01: Build the inventory

Every credible SB 21-169 program starts with an honest inventory of the AI and data surface that insurance decisions rest on. Most insurers find their first draft uncomfortably long.

The opening question from an examiner is almost always some variant of "show us what you have." A carrier that cannot answer concretely has effectively told the Division that its program does not yet exist. The inventory is both the foundation and the artefact the Division will want to compare against your rating filings.

For each entry, the fields that matter are the obvious ones plus the ones that tie the record back into the rest of the program:

  • Purpose: the decision the system influences (underwriting, pricing, claims, fraud, marketing, retention).
  • Data sources: every input the system consumes, flagged for which inputs qualify as ECDIS.
  • Owner: a named human with authority to make decisions about the system. Not "the data team."
  • Vendor attribution: where the model or data came from and under which contract, linked to the contract text.
  • Risk tier: how much consumer harm the system can cause, so testing cadence scales accordingly.
  • Testing status: the last disparate impact test run on the system and its result.
  • Lifecycle stage: in development, in production, or in retirement.

A spreadsheet works for a pilot and becomes a liability by year two, because the inventory is a living record that changes whenever a vendor pushes an update and has to stay linked to testing results, incidents and attestations. Most insurers at scale use a structured model inventory, which is what VerifyWise is built around, but the tool matters less than the discipline of keeping the list current and connected.
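
In practice, "structured" means little more than giving every entry the same fixed shape so nothing can be recorded halfway. A minimal sketch of such a record in Python; the field names mirror the list above and are illustrative, not a prescribed schema:

    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class LifecycleStage(Enum):
        DEVELOPMENT = "in development"
        PRODUCTION = "in production"
        RETIREMENT = "in retirement"

    @dataclass
    class InventoryEntry:
        name: str
        purpose: str                  # underwriting, pricing, claims, fraud, marketing, retention
        data_sources: list[str]
        ecdis_inputs: list[str]       # the subset of data_sources that qualifies as ECDIS
        owner: str                    # a named human, not "the data team"
        vendor: str | None            # None for in-house systems
        contract_ref: str | None      # link back to the governing contract
        risk_tier: int                # drives testing cadence
        last_test_date: date | None
        last_test_result: str | None  # e.g. "pass" or "finding: impact ratio 0.76"
        lifecycle: LifecycleStage

    # A hypothetical entry; the vendor name and contract id are invented for illustration.
    entry = InventoryEntry(
        name="credit-based rating factor v3",
        purpose="pricing",
        data_sources=["application data", "credit-based insurance score"],
        ecdis_inputs=["credit-based insurance score"],
        owner="VP Pricing",
        vendor="ExampleScore Inc.",
        contract_ref="MSA-2024-017",
        risk_tier=1,
        last_test_date=date(2026, 1, 15),
        last_test_result="pass",
        lifecycle=LifecycleStage.PRODUCTION,
    )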

The more dangerous failure is quiet. ECDIS feeds like telematics data, aerial imagery, third-party fraud consortium scores and appended demographic attributes from data brokers tend to arrive through operational teams who do not think of themselves as model owners. They end up outside the inventory unless someone is specifically looking for them. That is the gap the Division's data-source review is designed to find.

Part 02: Set up governance

Regulation 10-1-1 is unusually clear that an SB 21-169 program is a senior management program, not a technical one. An examiner's first three requests during a market conduct review are your written policies, your documented roles and the minutes of your governance committee. A carrier that cannot produce those on demand has effectively confirmed that the program is informal, which is a finding in itself.

The governance layer has four ingredients:

  • Written policies covering acquisition, validation, deployment, monitoring and retirement of models, algorithms and ECDIS, with senior management or board approval on a dated record.
  • A role map with named individuals from actuarial, data science, underwriting, compliance, legal and IT, each with authority clearly delegated rather than assumed.
  • A cross-functional committee, typically monthly or quarterly, where decisions about high-impact models are made and recorded rather than handled in email threads that vanish into archived inboxes.
  • Ongoing oversight, which means senior management stays in the loop on testing results, incidents, remediations and vendor changes throughout the year, not only at attestation time.

The most common governance failure we see is not absence but strategic vagueness. A policy that says models will be validated regularly is worse than no policy, because it creates an explicit obligation the insurer cannot demonstrate it has met and hands the Division a direct line of questioning. Good policies specify who validates, on what cadence, using which methodology, against which threshold, and with what consequence when the threshold is breached. Our risk management module encodes that as a structured framework with evidence tied to each control, but the same rigor can be produced on paper.
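
To make the contrast concrete, here is a minimal sketch of one validation control written down as data rather than prose. The structure and field names are illustrative assumptions, not anything Regulation 10-1-1 prescribes:

    # One validation control: who validates, on what cadence, using which
    # methodology, against which threshold, with what consequence.
    validation_policy = {
        "system": "credit-based rating factor v3",   # hypothetical system name
        "validator": "model risk team, reviewed by the chief actuary",
        "cadence": "quarterly, plus on any material model or data change",
        "methodology": "four-fifths disparate impact test using a BISG proxy",
        "threshold": "impact ratio >= 0.80 for every protected class",
        "consequence": "finding opened with the governance committee within 5 business days",
        "approved": "risk committee, 2026-01-10",    # dated approval record
    }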

Part 03: Run quantitative testing

The testing regime is what sets SB 21-169 apart from other AI governance frameworks. Where other regimes ask for testing as a broad obligation, Regulation 10-1-1 asks for quantitative testing with specific statistical methods on data the Division can inspect.

What the Division tests for. The object is disparate impact, not disparate treatment. Nothing turns on whether your models use race as an input; the industry stopped doing that long ago. The question is whether the output produces materially different rates across protected classes after legitimate actuarial variation is accounted for. Protected classes under 10-1-1 include race, color, national origin, religion, sex, sexual orientation, disability, gender identity and gender expression. Age is handled separately under existing rating rules.

The four-fifths rule. For each protected class, compute the selection rate, which is the proportion of applicants in that class who receive the favorable outcome. Divide the minority-class selection rate by the reference-class rate to get the impact ratio. A ratio of 0.80 or higher is within tolerance; anything below is a finding that requires explanation, justification or remediation.

The math

  • Selection rate = favorable outcomes in class ÷ total applicants in class
  • Impact ratio = selection rate for class X ÷ selection rate for reference class

0.80 or above: within tolerance. Below 0.80: a finding the Division expects you to explain, justify or remediate.
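
The arithmetic is easy to sanity-check in code. A minimal sketch in Python, assuming applications have already been grouped by class; the counts here are placeholders, not real book data:

    # Four-fifths rule check. Counts per class are hypothetical placeholders.
    counts = {
        # class: (applicants, favorable outcomes)
        "reference": (10_000, 6_900),
        "class_a": (4_000, 2_300),
        "class_b": (2_000, 1_500),
    }

    THRESHOLD = 0.80  # the four-fifths line

    ref_applicants, ref_favorable = counts["reference"]
    ref_rate = ref_favorable / ref_applicants

    for cls, (applicants, favorable) in counts.items():
        if cls == "reference":
            continue
        rate = favorable / applicants    # selection rate for this class
        impact_ratio = rate / ref_rate   # vs. the reference class
        status = "within tolerance" if impact_ratio >= THRESHOLD else "FINDING"
        print(f"{cls}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f}, {status}")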

The race data problem and BISG. Insurers do not collect race directly and cannot start doing so. Regulation 10-1-1 expects carriers to use validated probabilistic proxies. The default is Bayesian Improved Surname Geocoding (BISG), which combines a surname and a geographic location to estimate a probability distribution across racial categories. The Division has accepted BISG in attestations so far, with the caveat that carriers document how they validated the proxy against their own book. BISG results come with confidence intervals, not point estimates, and saying we used BISG is a starting point rather than an answer.
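
For intuition, the Bayesian combination at the heart of BISG fits in a few lines. This is a simplified sketch of the update step only, with invented probabilities; a production implementation draws P(race | surname) from the Census surname list and the geographic distributions from block-group data, and still needs validation against the carrier's own book:

    # Simplified BISG combination step (all numbers invented for illustration).
    # posterior(race) is proportional to
    #     P(race | surname) * P(race | geography) / P(race),
    # the conditional-independence form from Elliott et al. (2009).
    surname_prior = {"white": 0.70, "black": 0.10, "hispanic": 0.15, "asian": 0.05}  # P(race | surname)
    geo_dist      = {"white": 0.40, "black": 0.35, "hispanic": 0.20, "asian": 0.05}  # P(race | block group)
    population    = {"white": 0.60, "black": 0.13, "hispanic": 0.19, "asian": 0.08}  # P(race) overall

    unnormalized = {r: surname_prior[r] * geo_dist[r] / population[r] for r in surname_prior}
    total = sum(unnormalized.values())
    posterior = {r: p / total for r, p in unnormalized.items()}

    for race, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
        print(f"{race}: {p:.3f}")

An assignment threshold like Mesa Mutual's 0.80 in the worked example below is then applied to this posterior, with everything under the threshold treated as mixed.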

Cadence and triggers. Regulation 10-1-1 does not prescribe a frequency. Life insurers have settled into a quarterly rhythm with a deeper annual review tied to the December attestation; auto and health carriers are mostly following suit. Any material model change, data change or rate revision triggers its own test, because waiting for the next quarterly cycle to discover that a pushed change produced a new disparate impact is the kind of governance gap the Division looks for.

Part 04: Worked example for Mesa Mutual

A disparate impact test is easier to understand once you see one run from start to finish.

Meet Mesa Mutual

A fictional mid-sized auto insurer writing in Colorado, with roughly 180,000 policies. Its pricing team has tuned a credit-based rating factor, and the SB 21-169 program requires a disparate impact test before the new factor goes live.

Mesa pulls the last quarter's new-business applications that would have been priced under the new factor: 42,000 applicants. The favorable outcome is being placed into the standard preferred tier rather than a higher-priced tier. Mesa runs BISG against surnames and ZIP codes, assigns an applicant to a single class when BISG gives that class a probability above 0.80, and marks the rest as mixed.

Native Hawaiian and Pacific Islander applicants (0.8%) fall below the 2% exclusion threshold and are dropped from the ratio calculation.

Class                           | Applicants | Preferred tier | Selection rate
White (reference)               | 22,100     | 15,250         | 69.0%
Hispanic                        | 8,400      | 4,750          | 56.5%
Black                           | 3,900      | 2,050          | 52.6%
Asian                           | 2,800      | 2,100          | 75.0%
American Indian / Alaska Native | 960        | 560            | 58.3%

Class                           | Impact ratio vs. reference | Pass or fail
Hispanic                        | 56.5 / 69.0 = 0.82         | Pass
Black                           | 52.6 / 69.0 = 0.76         | Fail
Asian                           | 75.0 / 69.0 = 1.09         | Pass
American Indian / Alaska Native | 58.3 / 69.0 = 0.85         | Pass
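
Both tables can be reproduced directly from the counts. A short sketch using Mesa's figures:

    # Recomputes Mesa Mutual's selection rates and impact ratios from the counts above.
    book = {
        # class: (applicants, placed in preferred tier)
        "White (reference)": (22_100, 15_250),
        "Hispanic": (8_400, 4_750),
        "Black": (3_900, 2_050),
        "Asian": (2_800, 2_100),
        "American Indian / Alaska Native": (960, 560),
    }

    ref_rate = 15_250 / 22_100  # reference selection rate, 69.0%

    for cls, (applicants, preferred) in book.items():
        rate = preferred / applicants
        ratio = rate / ref_rate
        verdict = "Pass" if ratio >= 0.80 else "Fail"
        print(f"{cls}: selection rate {rate:.1%}, impact ratio {ratio:.2f}, {verdict}")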

Finding

Impact ratio of 0.76 for Black applicants, below 0.80. Under Regulation 10-1-1, this is a finding Mesa cannot ignore.

Mesa's governance committee considers three options: justify the factor actuarially, remediate by adjusting the rating plan, or retire the factor. It picks remediation: reduce the credit factor's weight and add a compensating underwriting factor that empirical testing suggests narrows the gap without degrading predictive accuracy.

Two weeks later Mesa reruns the test on the adjusted plan. The Black class ratio moves from 0.76 to 0.83; Hispanic moves from 0.82 to 0.87. Mesa documents the original finding, the committee rationale, the adjustment, the retest result and the final sign-off. All of it goes into the testing log, linked to the inventory entry, and surfaces in the December 1 attestation as a documented finding with a clean remediation.

The point is not the math, which is straightforward once the proxy step is settled. The point is the artefacts. A program that only produces the final revised rating plan has nothing to show the Division. A program that produces the testing log, the committee minute, the retest evidence and the revised inventory record survives examination without stress.

Part 05: Oversee vendors and ECDIS

The most expensive mistake we see in SB 21-169 programs is assuming vendor diligence transfers legal responsibility. It does not. If a vendor's model produces a disparate impact in your book, the finding is against you.

That produces three operational obligations:

  • Pre-onboarding diligence that records, in writing, the vendor's approach to bias, testing methodology and data lineage. A vendor certification is evidence, not a substitute for the insurer's own diligence.
  • Contract terms that bite: audit rights, access to testing data, cooperation with regulatory inquiries, termination rights tied to compliance failures. Boilerplate agreements rarely include any of this.
  • Ongoing monitoring. Vendor models are updated, data feeds drift, and a vendor system that passed six months ago can fail the next test without any change on the insurer's side.

The inheritance chain runs through MGAs, TPAs and reinsurers. An MGA writing business on your paper operates under your program. A TPA running claims triage on your behalf handles the regulated decisions the statute covers. The question is never whether a vendor is inside or outside the enterprise boundary; it is whose program owns the evidence when a Division examiner asks. Our vendor management module is built around that question, but any approach that keeps vendor oversight connected to the testing engine rather than parked in procurement meets the standard.
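
A minimal sketch of what a vendor oversight record might capture, mirroring the three obligations above; the fields and the gap-check helper are illustrative, not a prescribed format:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class VendorRecord:
        vendor: str
        systems: list[str]                  # which inventory entries this vendor supplies
        diligence_completed: date           # pre-onboarding diligence, recorded in writing
        bias_methodology_on_file: bool      # vendor's documented approach to bias and testing
        # Contract terms that bite:
        audit_rights: bool
        testing_data_access: bool
        regulatory_cooperation: bool
        compliance_termination_right: bool
        # Ongoing monitoring:
        last_retest: date | None

        def contract_gaps(self) -> list[str]:
            """Names any missing contract term, since boilerplate rarely includes them."""
            terms = {
                "audit rights": self.audit_rights,
                "testing data access": self.testing_data_access,
                "regulatory cooperation": self.regulatory_cooperation,
                "termination right tied to compliance": self.compliance_termination_right,
            }
            return [name for name, present in terms.items() if not present]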

Part 06: Handle failed tests

Regulation 10-1-1 does not prohibit disparate impact findings. It prohibits ignoring them.

When a test fails, three paths are open:

  • Actuarial justification: show that the factor reflects genuine loss experience and that no less-discriminatory alternative achieves the same actuarial objective.
  • Remediation: adjust the model, data, rating plan or rule until the ratio returns to tolerance. Retest and document.
  • Retirement: take the system out of service when neither justification nor remediation is feasible.

The documentation requirement is the same across all three. The Division wants to see the failed test, the decision rationale, the action taken, and evidence that the action worked.

The workflow looks structurally identical to incident management: a finding opens a case, governance reviews it, an owner executes the response, a retest or justification closes it. Our incident management module encodes that workflow and ties each finding back to the model inventory and the testing log. Carriers that handle findings through email threads invariably lose track of them.
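
Structurally, that is a small state machine. A sketch of the states and allowed transitions, under the assumption that every finding must pass through governance review before it can close:

    from enum import Enum, auto

    class FindingState(Enum):
        OPEN = auto()            # a failed test opens a case
        UNDER_REVIEW = auto()    # the governance committee reviews it
        IN_REMEDIATION = auto()  # an owner executes the chosen response
        CLOSED = auto()          # a retest or documented justification closes it

    # Allowed transitions; anything else is a process violation worth flagging.
    TRANSITIONS = {
        FindingState.OPEN: {FindingState.UNDER_REVIEW},
        FindingState.UNDER_REVIEW: {FindingState.IN_REMEDIATION, FindingState.CLOSED},
        FindingState.IN_REMEDIATION: {FindingState.CLOSED, FindingState.UNDER_REVIEW},
        FindingState.CLOSED: set(),
    }

    def advance(current: FindingState, target: FindingState) -> FindingState:
        if target not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current.name} -> {target.name}")
        return target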

Part 07: File the annual attestation

The annual attestation confirms that a program exists, is operating, and produces the evidence the regulation calls for. For life insurers, the deadline is December 1 each year. Auto and health carriers should expect a similar cadence once sector rules finalize.

The sign-off sits with senior management, usually the CRO, CCO or equivalent. The Division expects the signatory to have genuinely reviewed the filing.

A defensible attestation confirms that:

  • A written governance framework exists and is approved at the appropriate level.
  • The model inventory is current.
  • Quantitative testing ran at the required cadence across in-scope models and data sources.
  • Findings during the year have been logged and remediated.
  • Any material changes to the program are documented.

A thin attestation usually offers a general statement of compliance without the underlying artefacts. The Division has signaled it will follow up more aggressively on filings that look light. Hiding findings is worse than reporting them: a year with no disparate impact findings on a large book is statistically implausible and tends to invite the scrutiny the carrier was trying to avoid.

Twelve questions an examiner will ask

One of the most useful exercises any insurer can run before its first market conduct exam under 10-1-1 is a simple test: can the program produce, on demand, each of the following artefacts within a business day? A carrier that clears ten of twelve is in good shape. A carrier clearing fewer than eight should treat the gap as the near-term priority.

Examiner-day checklist

  • The current written governance framework, with senior management approval on the record.
  • An up-to-date model inventory covering every algorithm, predictive model and ECDIS used in regulated practices.
  • Each model's purpose, owner, data sources, risk tier and vendor attribution.
  • Testing logs for each in-scope model, covering the current and prior reporting periods.
  • The proxy methodology used for protected-class identification, with a validation record.
  • A full record of every disparate impact finding, the decision rationale, the corrective action and the retest result.
  • Governance committee minutes for the period, showing active oversight of findings.
  • Vendor due-diligence records for each third-party model or ECDIS feed in use.
  • Vendor contracts containing audit rights, cooperation obligations and ongoing monitoring terms.
  • The annual attestation filed with the Division and the supporting evidence behind it.
  • A change log for any material updates to in-scope systems during the period and how each change was tested.
  • Evidence that consumer complaints and adverse-outcome reports tied to algorithmic decisions were investigated and resolved.

SB 21-169 in the wider Colorado AI stack

SB 21-169 does not operate in isolation. Every Colorado-licensed insurer using AI is also under the NAIC Model Bulletin if its home state has adopted it, and every one of them will be under SB 24-205 from June 30, 2026 onward. Three regimes, three regulators, one business.

Figure: three regimes stacking on a single insurer program: SB 21-169 quantitative testing, SB 24-205 consumer notice and impact assessments, and the NAIC Model Bulletin AIS Program. Build the program once and feed the evidence into each.
Dimension           | SB 21-169 | SB 24-205 (Colorado AI Act) | NAIC Model Bulletin
Scope               | Insurance only. Life today, auto + health since Oct 2025 | Any high-risk AI used in consequential decisions, across sectors | Insurance only. Any AI used by insurers
Statute in effect   | Since July 2021 | Enforcement from June 30, 2026 | Template adopted Dec 4, 2023; state-by-state adoption
Regulator           | Colorado Division of Insurance | Colorado Attorney General | State insurance departments that adopt the bulletin
Core obligation     | Quantitative testing for unfair discrimination in algorithms, ECDIS and predictive models | Duty of care, impact assessments, consumer notice, affirmative defense | Written AIS Program: governance, risk management, testing, vendor oversight
Testing requirement | Quantitative, with documented methodology | Less prescriptive. Impact assessments and risk management | Testing for errors and unfair discrimination, proportional to harm
Annual filing       | Yes. December 1 for life insurers | None. Enforcement-driven | None. Exam-driven
Cure period         | None | 60 days | None
Enforcement model   | Market conduct exam plus administrative action | Attorney general enforcement with affirmative defense | Existing market conduct authority under each state's adoption

A serious SB 21-169 program already covers most of SB 24-205 and the NAIC bulletin. Governance, inventory, vendor oversight and incident management are effectively identical. SB 24-205 adds consumer notice and a formal impact assessment document; the NAIC bulletin adds an AIS Program document in a specific format. Both additions are incremental rather than parallel builds. For the broader treatments, our Colorado AI Act page covers SB 24-205, and our NAIC AI Principles page walks through the Model Bulletin clause by clause.

What could still change in 2026

Three developments could move the ground. None are reasons to delay the program.

  • Auto and health rulemaking. Sector-specific rules equivalent to 10-1-1 are being drafted. Expect drafts for public comment during 2026. The final rules will likely track 10-1-1 closely, but specific thresholds and filing cadences could differ.
  • SB 24-205 in force. June 30, 2026 brings the Colorado AI Act into effect. Carriers already running SB 21-169 programs face modest incremental work: consumer notice, impact assessment documentation, affirmative defense evidence.
  • Litigation. xAI has sued Colorado over SB 24-205 on First Amendment grounds. The case targets SB 24-205 specifically; SB 21-169 is not in the litigation. A successful challenge would leave the Division's regime intact but change the political environment around Colorado AI rules more broadly.

The first week of a serious program

A full SB 21-169 implementation is a multi-quarter build for a carrier starting from zero. The first week, though, is surprisingly simple. Five actions cover it:

Five actions for week one

  1. Assemble the inventory. Put together the first-draft list of every algorithm, predictive model and ECDIS feed currently in use across the business, including every vendor-supplied system.
  2. Assign a named owner to each entry. A human being with authority to make decisions about that system. If no owner can be named, the system is not ready to remain in scope.
  3. Pick a proxy methodology and document it. BISG is the default. Write down the method, the inputs, the limitations and the validation plan for the insurer's own book.
  4. Run one real disparate impact test. Pick the highest-impact system on the inventory, pull last quarter's applications, run the test and record the result honestly.
  5. Bring senior management into the loop. Share the draft inventory and the first test result with whoever will eventually sign the annual attestation. Their reaction indicates how much governance still needs to be built.

None of these five actions require a platform, a consulting engagement or significant upfront spend. They require discipline and honesty about the state of the program.

Most insurers at scale eventually converge on the same operational needs: a structured model inventory, a vendor oversight layer, a testing engine that logs methodology and results, an incident management workflow for failed tests, and a compliance framework that assembles the evidence pack on demand. VerifyWise is built around that shape and is used by carriers running SB 21-169 programs today, but the tooling question is for month three, not week one.

Dates to put on the compliance calendar

Three dates over the next eighteen months will determine how much running room your program has.

Key dates

  • First half of 2026: draft auto and health sector rules expected from the Colorado Division of Insurance. Comment periods will be real opportunities to shape specifics.
  • June 30, 2026: SB 24-205 takes effect and applies to every Colorado-licensed insurer using high-risk AI.
  • December 1, 2026: annual life insurer attestation under Regulation 10-1-1. Auto and health carriers should expect a similar cadence once sector rules finalize.

If your program is behind, the path forward is shorter than it looks. The Division does not expect perfection on the first attestation, and carriers that arrive with a genuine program and honest findings tend to have productive conversations rather than enforcement actions. If you want a sanity check on where your program stands, get in touch or start a compliance assessment.



About the VerifyWise team

VerifyWise builds source-available AI governance software used by organizations to manage risk, compliance and oversight across their AI portfolios. Our editorial team draws on hands-on experience implementing governance workflows for regulated industries and fast-growing AI teams.

Learn more about VerifyWise

Ready to govern your AI responsibly?

Start your AI governance journey with VerifyWise today.
