NAIC AI Principles and Model Bulletin compliance for insurers
NAIC AI Principles · Model Bulletin · Evaluation Tool

NAIC AI Principles and Model Bulletin compliance

Insurers in adopting states need a written AIS Program that governs every AI system in the business, tests for adverse consumer outcomes and stands up to examiner review. VerifyWise encodes the Model Bulletin, the FACTS principles and the AI Systems Evaluation Tool pilot into one operational workspace.

Principles adopted
Aug 14, 2020
Model Bulletin
Dec 4, 2023
States adopted
24+ and growing

What are the NAIC AI Principles?

The National Association of Insurance Commissioners adopted the NAIC AI Principles unanimously on August 14, 2020. Five principles, captured by the FACTS acronym, set the expectations for every AI actor across the insurance lifecycle: insurers, producers, vendors and supporting organisations.

The principles stayed largely aspirational until December 4, 2023, when NAIC members adopted the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers. The bulletin is a template that state departments issue as their own regulatory guidance. It takes the principles and turns them into a concrete AIS Program obligation: written governance, risk management, testing, vendor oversight and documentation.

Written AIS Program

Documented governance and controls required

Risk-proportional

Controls scale with potential consumer harm

Where insurers are right now

24+ states have adopted the bulletin

And the list is still growing quarter by quarter

AI Systems Evaluation Tool pilot is live

12 states running the 4-exhibit examination in 2026

Existing laws still apply to AI decisions

Unfair trade, unfair discrimination and market conduct rules are not waived

Vendor AI is the insurer's problem

Liability does not transfer through a procurement contract

The FACTS principles

Five principles adopted in 2020. Everything the NAIC has written on AI since, including the Model Bulletin and the 2026 Evaluation Tool, traces back to these.

F

Fair and ethical

AI actors should respect the rule of law and pursue beneficial consumer outcomes aligned with the risk-based foundation of insurance, avoiding and correcting unintended discriminatory consequences.

A

Accountable

AI actors are accountable for ensuring that AI systems operate in compliance with the guiding principles and for the outcomes those systems produce.

C

Compliant

AI actors must have the knowledge and resources in place to comply with all applicable insurance laws, regulations and sub-regulatory guidance in every state where they operate.

T

Transparent

AI actors should commit to transparency and responsible disclosure regarding AI systems. Regulators and consumers need a way to inquire about, review and seek recourse for AI-driven insurance decisions.

S

Secure, safe and robust

AI systems must have reasonable traceability of datasets, processes and decisions, with a systematic risk management process that detects and corrects privacy, security and unfair-discrimination risks.

The four pillars of an AIS Program

The Model Bulletin is explicit about what a documented AIS Program must cover. These are the pillars regulators will test against.

Governance

A written AIS Program with board- or senior-management-approved policies, documented roles and authority, and a cross-functional structure that brings together actuarial, data science, underwriting, compliance and legal.

  • Written AIS Program document
  • Board or senior management oversight
  • Defined roles across actuarial, data science, underwriting, compliance and legal
  • Policies for selection, development, validation and retirement of AI systems

Risk management and internal controls

Controls and procedures commensurate with the Degree of Potential Harm to Consumers. The higher the stakes of the decision, the tighter the controls, testing cadence and human oversight.

  • AI system inventory with use-case risk tiers
  • Validation, testing and ongoing monitoring
  • Model drift detection and re-validation triggers
  • Human-in-the-loop controls calibrated to risk
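To make the risk-proportionality idea concrete, here is a minimal sketch of what a risk-tiered inventory entry might look like. The field names, tiers and review cadences are illustrative assumptions for this page, not VerifyWise's schema and not wording from the bulletin; the only fixed point is that controls scale with the Degree of Potential Harm to Consumers.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"       # e.g. underwriting, pricing, claims decisions
    MEDIUM = "medium"   # e.g. marketing, targeting and eligibility
    LOW = "low"         # e.g. internal back-office assistants

# Illustrative mapping of tier to review cadence; real cadences are a
# business decision calibrated to potential consumer harm.
REVIEW_INTERVAL_DAYS = {RiskTier.HIGH: 90, RiskTier.MEDIUM: 180, RiskTier.LOW: 365}

@dataclass
class AISystemRecord:
    name: str
    use_case: str                       # underwriting, pricing, claims, fraud, ...
    owner: str                          # accountable role, not just a team name
    risk_tier: RiskTier
    vendor: str | None = None           # populated when the model is third-party
    human_in_the_loop: bool = True
    last_validated: str | None = None   # ISO date of the last validation

    def review_interval_days(self) -> int:
        return REVIEW_INTERVAL_DAYS[self.risk_tier]

# Example entry for a hypothetical underwriting model
record = AISystemRecord(
    name="auto-uw-tiering-v3",
    use_case="underwriting",
    owner="Chief Underwriting Officer",
    risk_tier=RiskTier.HIGH,
    vendor="Acme Analytics",            # hypothetical vendor name
    last_validated="2025-11-01",
)
```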

Testing for adverse consumer outcomes

Verification and testing methods that identify errors, bias and unfair discrimination in predictive models and AI systems. Results feed corrective action, not a filing-cabinet report.

  • Disparate-impact testing across protected classes
  • Accuracy, stability and explainability metrics
  • Proxy-discrimination screens for data inputs
  • Documented remediation when tests fail
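A minimal sketch of the selection-rate and impact-ratio arithmetic behind disparate-impact screening, assuming one row per decision with a protected-class column and a binary favourable-outcome column. The column names and the 0.8 screening line are illustrative conventions (the familiar four-fifths rule of thumb), not a threshold the bulletin mandates, and a real testing programme would pair this with significance testing and actuarial review.

```python
import pandas as pd

def impact_ratios(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> pd.DataFrame:
    """Selection rate per group and impact ratio against the most-favoured group."""
    rates = decisions.groupby(group_col)[outcome_col].mean().rename("selection_rate")
    report = rates.to_frame()
    report["impact_ratio"] = report["selection_rate"] / report["selection_rate"].max()
    # Four-fifths rule of thumb: flag any group whose ratio falls below 0.8
    report["flag"] = report["impact_ratio"] < 0.8
    return report.sort_values("impact_ratio")

# Hypothetical usage: df has one row per underwriting decision,
# with columns ["race", "approved"] where approved is 0/1.
# print(impact_ratios(df, group_col="race", outcome_col="approved"))
```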

Third-party vendor oversight

Responsibility for vendor models, data sources and AI systems stays with the insurer. That means diligence before onboarding, contractual audit rights and ongoing monitoring of vendor performance.

  • Vendor AI diligence and risk scoring
  • Contractual audit and cooperation clauses
  • Ongoing monitoring of vendor model changes
  • Evidence trail for every vendor AI system in use
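Vendor oversight only works if every third-party system carries its own evidence trail the insurer can produce on request. A minimal sketch of what such a record might hold, with illustrative field names rather than VerifyWise's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class VendorAIRecord:
    vendor: str
    system: str
    diligence_completed: str                    # ISO date of pre-onboarding review
    audit_clause_in_contract: bool              # contractual audit and cooperation rights
    last_model_change_notice: str | None = None
    monitoring_findings: list[str] = field(default_factory=list)

    def exam_ready(self) -> bool:
        # The insurer, not the vendor, has to be able to produce this evidence
        return self.audit_clause_in_contract and bool(self.diligence_completed)
```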

Where AI typically sits inside an insurer

The bulletin is risk-proportional. Controls must scale with the Degree of Potential Harm to Consumers. Here's how regulators currently prioritise use cases.

High risk

Underwriting and risk selection

AI that accepts, declines, rates or tiers risk. Highest regulatory scrutiny because of direct consumer impact and disparate-impact exposure.

High risk

Pricing and rate setting

Predictive models that inform premium, factors or rating plans. Often combined with ECDIS (external consumer data and information sources), which pulls the data-source review into scope.

High risk

Claims triage, handling and fraud detection

AI that denies, delays, steers or flags claims. The 2026 evaluation tool pilot explicitly examines claims handling and total-loss decisions.

High risk

Agent-facing policyholder interactions

AI agents handling claims inquiries, endorsement processing, billing disputes and certificate issuance. Rising regulator focus because consumers interact with the model directly.

Medium risk

Marketing, targeting and eligibility

Models that decide who sees an offer or who qualifies to apply. Still in scope for unfair-discrimination rules even when the decision looks upstream.

Lower risk

Back-office operations

Internal drafting, knowledge search, coding assistants. Lower regulatory exposure but still needs governance, especially when outputs reach regulated processes.

The 2026 AI Systems Evaluation Tool pilot

NAIC's pilot has been running in 12 states since March 2026. Four exhibits, each answerable from a well-run AIS Program. If you want to know what an insurance AI exam looks like in practice, this is it.

Exhibit A

AI usage inventory

Breadth and depth of AI adoption across the enterprise. Regulators quantify how extensively each insurer uses AI and machine learning.

Exhibit B

Governance framework

How the insurer governs AI end to end. Evaluates the AIS Program itself, risk tiers, policies and oversight structures.

Exhibit C

High-risk systems detail

A deep dive on specific high-risk AI systems, with a current emphasis on agent-facing systems like claims inquiries, billing disputes and total-loss decisions.

Exhibit D

Data source review

Data lineage, quality controls, representativeness and proxy-discrimination screening. Special focus on rate-setting data, social-media signals and aerial imagery.

Even insurers outside pilot states should rehearse these exhibits. The pilot is the template regulators will reuse when they examine your AIS Program.
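Exhibit D's proxy-discrimination screening ultimately asks how strongly a rating input is associated with protected-class membership. A minimal sketch using Cramér's V as the association measure; the column names and the flagging threshold are illustrative assumptions, and a defensible screen would combine several statistics with actuarial and legal review.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(feature: pd.Series, protected_class: pd.Series) -> float:
    """Association between a categorical rating input and protected-class membership."""
    table = pd.crosstab(feature, protected_class)
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    min_dim = min(table.shape) - 1
    return float(np.sqrt(chi2 / (n * min_dim)))

# Hypothetical screen: flag inputs whose association exceeds an internal threshold
# for col in candidate_rating_inputs:
#     if cramers_v(df[col], df["protected_class"]) > 0.3:
#         print(f"{col}: review as a potential proxy")
```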

State adoption at a glance

A non-exhaustive list of states that have adopted the NAIC Model Bulletin with little to no material change. Consult each state's bulletin for effective dates and state-specific additions.

State | Citation | Adopted
Alaska | Bulletin B 25-02 | Mar 2025
Arkansas | Bulletin 5-2024 | Jun 2024
Connecticut | Bulletin No. MC-25 | Feb 2024
Delaware | Bulletin No. 148 | Feb 2025
District of Columbia | Bulletin 25-IB-01-06/25 | Jun 2025
Hawaii | Memorandum No. 2025-13A | Dec 2025
Illinois | Bulletin 2024-15 | Sep 2024
Indiana | Bulletin 274 | Jul 2024
Kentucky | Bulletin 2024-02 | Apr 2024
Maryland | Bulletin 24-11 | Apr 2024
Massachusetts | Bulletin 2024-10 | Dec 2024
Michigan | Bulletin 2024-14-INS | Jul 2024
Nebraska | Guidance Doc IGD-H1 | Jun 2024
Nevada | Bulletin 24-003 | Jul 2024
New Hampshire | Bulletin INS No. 24-028-AB | Apr 2024
New Jersey | Bulletin 25-03 | Feb 2025
North Carolina | Bulletin 24-B-19 | Dec 2024
Oklahoma | Bulletin 2024-11 | Nov 2024
Pennsylvania | Notice 2024-04 | Apr 2024
Rhode Island | Bulletin 2024-03 | Mar 2024
Vermont | Bulletin 229 | Mar 2024
Washington | Technical Assistance Advisory 2024-01 | Apr 2024
West Virginia | Informational Letter 214 | Apr 2024

Source: NAIC Implementation of Model Bulletin tracker and state insurance department bulletins. Dates reflect the month each state's bulletin was issued; check the tracker or the state's own bulletin for the authoritative version.

Who is covered

If you write policies, price risk, pay claims or sell AI into the insurance stack, you're in scope.

Life, health, P&C, auto and specialty insurers

Any insurer licensed in an adopting state that uses AI, machine learning or predictive models in a regulated insurance practice.

Reinsurers with ECDIS or model inputs

Reinsurers whose models or data shape ceding insurers' decisions usually sit inside the ceding insurer's governance program.

Managing general agents and third-party administrators

MGAs and TPAs that run pricing, underwriting or claims AI on behalf of insurers fall within the insurer's AIS Program responsibility.

Insurtech and AI vendors

Vendors providing scoring, ECDIS, claims AI or agent-facing systems should expect downstream diligence, audit clauses and evidence obligations tied to the bulletin.

How VerifyWise covers NAIC expectations

Each row below is an NAIC obligation on the left and the VerifyWise capability that produces the evidence on the right. Built for insurers who need the artefacts, not just a policy library.

NAIC area | VerifyWise coverage
Bias and disparate-impact testing | Run selection-rate and impact-ratio tests across protected classes. Export results and corrective actions with timestamps.
Consumer notice and disclosure records | Track where and when consumer notices were delivered, with evidence stored against the relevant AI system.
Evidence pack and regulator exam kit | One-click export bundling policies, inventory extracts, test results, incidents and vendor records for an examination or pilot submission.

NAIC clause by clause, mapped to VerifyWise

For insurers and legal teams who want to see the bulletin translated into the concrete feature that produces the evidence. One row per clause, no hand-waving.

NAIC clause | Regulatory expectation | VerifyWise feature | How it satisfies the clause
Section 3 – AIS Program | Insurers must develop, implement and maintain a documented AIS Program | Structured AI governance framework | Pre-built governance structure, policies and workflows that formalise and operationalise the AIS Program
Section 3 – Governance and oversight | Clear accountability, roles and oversight mechanisms must be defined | Role-based access and approval workflows | Assigns ownership, approval flows and accountability across every AI system in the inventory
Section 3.1 – Risk management | Risk-based approach proportional to potential consumer harm | AI risk assessment engine | Scores and classifies AI systems by impact and risk level, then aligns controls to the Degree of Potential Harm
Section 3.2 – Policies and procedures | Written policies for AI development, deployment and monitoring | Policy templates and control mapping | Build and manage AI policies aligned with NAIC expectations and map each policy to the controls it enforces
Section 3.3 – Documentation | Document AI systems, decisions and controls | Central documentation hub | Structured records of models, decisions, approvals and governance artefacts, linked to the relevant control
Section 3 – Scope of AI usage | Identify where AI is used across the business | AI system registry | Central inventory of every AI use case: underwriting, pricing, claims, fraud, marketing and back-office
Section 3 – Lifecycle management | AI systems must be governed across their lifecycle | AI inventory and lifecycle tracking | Tracks each system from development through deployment, monitoring and retirement, with stage-specific controls
Section 3 – Third-party risk | Insurers remain responsible for vendor AI systems | Vendor risk management module | Assesses and monitors third-party AI vendors, captures contract clauses and tracks ongoing vendor performance
Section 3 – Data governance | Data sources and data quality must be governed | Data input documentation and risk flags | Tracks datasets, sources, representativeness and proxy-discrimination flags, tying each to an AI system
Section 3 – Consumer impact | Evaluate potential consumer harm from AI decisions | Impact assessment layer | Links each AI use case to its consumer impact, fairness implications and the controls that mitigate them
Section 4 – Unfair discrimination | Detect, prevent and document unfair discrimination | Bias audit module | Statistical bias testing across protected classes with auditable reports on discrimination risk
Section 4 – Testing and validation | Ongoing testing and validation of AI systems | Continuous bias and risk testing | Recurring assessments logged over time, with trend charts and triggered reviews when metrics move
Section 4 – Transparency and explainability | Insurers must understand and explain AI outcomes | Model documentation and explainability inputs | Captures purpose, inputs, outputs and decision logic per model so regulators and consumers can follow the reasoning
Section 4 – Ongoing monitoring | Continuous monitoring of AI systems required | Monitoring logs and reassessment triggers | Periodic reviews with alerts for drift, data change or performance degradation that require re-evaluation
Section 4 – Regulatory examination | Provide evidence to regulators upon request | Audit trail and evidence vault | Immutable logs and exportable reports produce a regulator-ready evidence pack in one step

Section numbers reference the structure of the NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (adopted Dec 4, 2023). State-adopted versions may re-number; the substance tracks.
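The "Ongoing monitoring" row above mentions drift alerts. A minimal sketch of one common drift statistic, the population stability index (PSI), computed between a baseline score distribution and current production scores; the bin count and trigger thresholds are widely used conventions, not anything the bulletin prescribes.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline score distribution and the current production scores."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # catch scores outside the baseline range
    base_pct = np.histogram(baseline, edges)[0] / len(baseline)
    curr_pct = np.histogram(current, edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)       # avoid division by zero and log(0)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# A common rule of thumb (an assumption, not a regulatory requirement):
# PSI < 0.1 stable, 0.1-0.25 investigate, > 0.25 trigger re-validation.
```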

Before you pick a platform

Four questions that decide your scope

Every NAIC AIS Program starts from the same four questions. The answers shape the inventory, the risk tiers and the evidence you need to produce first.

01

Which states do you write business in?

Adoption is state by state. Colorado layers SB 21-169 and SB 24-205 on top. New York and California are moving fast. Scope depends entirely on your footprint.

02

Which AI use cases are in scope?

Underwriting, pricing, fraud detection, claims triage and agent-facing AI all carry different risk tiers. Use-case mix decides how tight the controls need to be.

03

Do you already have a model inventory?

If you don't, Exhibit A of the Evaluation Tool stalls on day one. A defensible inventory is usually the first deliverable of an AIS Program.

04

Have you done bias or model risk assessments before?

This tells us your maturity. If you have results, we map them to the bulletin. If you don't, we start with the highest-harm models first.

Common NAIC compliance mistakes

Patterns that repeatedly trip up insurers when examiners arrive. Worth pressure-testing your AIS Program against each one.

Treating the bulletin as a policy drafting exercise

The NAIC is explicit: regulators expect to see testing results, incident records and vendor evidence, not just a signed policy. An AIS Program without operational artefacts fails the first serious review.

Leaving ECDIS and vendor data sources out of scope

External consumer data, credit-based scores, wearables, social signals and aerial imagery all sit inside Exhibit D. Insurers that inventory only their own models miss the largest part of the regulator's data-review workload. This also intersects the Colorado SB 21-169 regime for Colorado-licensed insurers.

Applying one control level to every AI use case

The bulletin is risk-proportional. An underwriting model and an internal drafting assistant don't need the same controls. Insurers who over-govern low-risk use cases burn capacity they need on the high-risk ones.

Assuming vendor diligence transfers responsibility

Responsibility stays with the insurer. A vendor certification is evidence, not a substitute. Regulators will ask the insurer to produce testing, monitoring and incident records even when the model is third-party.

Ignoring the AI Systems Evaluation Tool pilot

The pilot is not the bulletin, but it's the template for how regulators will ask questions. Insurers in non-pilot states who ignore it lose the chance to rehearse the format they'll eventually be examined on.

Frequently asked questions

Straight answers to what insurers, brokers and AI vendors ask most often about the NAIC regime.

Further reading

Colorado SB 21-169

Insurance-specific bias testing and ECDIS rules for Colorado-licensed insurers, enforced by the Division of Insurance.

Read the guide →

Colorado AI Act (SB 24-205)

Cross-sector AI law enforced by the Attorney General. Applies on top of insurance-specific rules.

Read the guide →

NIST AI RMF

The risk management framework the Model Bulletin recognises as a reference for building defensible AIS Program controls.

Read the guide →

ISO/IEC 42001

The AI management system standard. A solid structural backbone when an insurer wants certification alongside NAIC alignment.

Read the guide →

Ready to stand up an AIS Program?

Spin up the NAIC framework in VerifyWise, import your models and vendors, and start producing the evidence regulators will ask for. No spreadsheet stack, no policy-only theatre.
