NAIC AI Principles and Model Bulletin compliance for insurers

NAIC AI Principles and Model Bulletin compliance

If you write business in an adopting state, you owe a written AIS Program. It has to govern every AI system you run, test for adverse consumer outcomes and survive an examiner walking through your evidence. VerifyWise puts the Model Bulletin, the FACTS principles and the Evaluation Tool exhibits in one place.

Principles adopted
Aug 14, 2020
Model Bulletin
Dec 4, 2023
States adopted
24+ and growing

What are the NAIC AI Principles?

Five principles, collected under the FACTS acronym, adopted by the NAIC on August 14, 2020 to set expectations for insurers, producers and vendors across the AI lifecycle.

On December 4, 2023, NAIC members adopted the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, turning the principles into a concrete AIS Program obligation: written governance, risk management, testing, vendor oversight and documentation. State insurance departments then issue it as their own regulatory guidance.

Written AIS Program

Documented governance and controls required

Risk-proportional

Controls scale with potential consumer harm

Where insurers are right now

24+ states have adopted the bulletin

List growing each quarter

AI Systems Evaluation Tool pilot is live

12 states running the 4-exhibit examination in 2026

Existing laws still apply to AI decisions

Unfair trade, discrimination and market conduct rules are not waived

Vendor AI is the insurer's problem

Liability does not transfer through procurement

The FACTS principles

Five principles adopted in 2020. Everything the NAIC has written on AI since, including the Model Bulletin and the 2026 Evaluation Tool, traces back to these.

  1. F

    Fair and ethical

    AI actors respect the rule of law and pursue consumer outcomes consistent with the risk-based foundation of insurance. When an AI system produces unintended discrimination, the actor finds it and corrects it.

  2. A

    Accountable

    AI actors are answerable for whether their systems follow these principles and for the outcomes those systems produce. Responsibility doesn't transfer through a procurement contract.

  3. C

    Compliant

    AI actors keep the knowledge and resources to comply with applicable insurance laws, regulations and sub-regulatory guidance in every state where they operate.

  4. T

    Transparent

    AI actors disclose how their AI systems work, in line with responsible-disclosure norms. Regulators and consumers need a path to inquire about, review and seek recourse for AI-driven insurance decisions.

  5. S

    Secure, safe and robust

    AI systems carry reasonable traceability of datasets, processes and decisions. A systematic risk management process detects and corrects privacy, security and unfair-discrimination risks across the lifecycle.

The four pillars of an AIS Program

What a documented AIS Program must cover: the pillars regulators test against.

Governance

A written AIS Program with senior-management approval and clear cross-functional roles.

  • Written AIS Program
  • Board or senior management oversight
  • Cross-functional roles
  • Lifecycle policies

Risk management and internal controls

Controls scaled to the Degree of Potential Harm to Consumers: tighter for higher-stakes decisions.

  • AI inventory with risk tiers
  • Validation and ongoing monitoring
  • Drift detection and re-validation
  • Human-in-the-loop calibrated to risk
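The drift-detection control above can be sketched with a population stability index (PSI) check, a common way to compare a model's baseline score distribution against current production scores. This is an illustrative approach, not VerifyWise's implementation, and the 0.2 threshold is a rule of thumb, not a regulatory number:

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Population stability index between a baseline score sample and a
    current one. A PSI above ~0.2 is a common (illustrative) trigger for
    re-validation; real thresholds should be calibrated per model."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual)) + 1e-9  # keep the max inside the top bin
    width = (hi - lo) / bins
    psi = 0.0
    for i in range(bins):
        a, b = lo + i * width, lo + (i + 1) * width
        e = sum(a <= x < b for x in expected) / len(expected)
        o = sum(a <= x < b for x in actual) / len(actual)
        e, o = max(e, 1e-6), max(o, 1e-6)  # avoid log(0) for empty bins
        psi += (o - e) * math.log(o / e)
    return psi
```

A PSI breach would then open the re-validation workflow for that model rather than silently logging a metric.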

Testing for adverse consumer outcomes

Test methods that surface errors, bias and unfair discrimination, with results that drive remediation.

  • Disparate-impact testing
  • Accuracy and explainability metrics
  • Proxy-discrimination screens
  • Remediation when tests fail
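The disparate-impact test above typically starts from selection rates and impact ratios per protected class. A minimal sketch, assuming counts of favorable outcomes per group (the four-fifths threshold is a screening convention, not a legal conclusion):

```python
def impact_ratios(outcomes):
    """Selection rate per group and impact ratio against the most-favored
    group. outcomes maps group name -> (favorable_count, total_count).
    A ratio below 0.8 (the four-fifths rule of thumb) is a common flag
    for disparate-impact review."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    best = max(rates.values())
    return {g: {"selection_rate": round(r, 3), "impact_ratio": round(r / best, 3)}
            for g, r in rates.items()}
```

Each flagged ratio would then carry a remediation record, so the test result and the corrective action travel together.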

Third-party vendor oversight

Responsibility for vendor models stays with the insurer: diligence, audit rights, ongoing monitoring.

  • Vendor AI diligence and risk scoring
  • Contractual audit rights
  • Monitoring of vendor model changes
  • Evidence trail per vendor system
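Vendor diligence scoring can be as simple as a weighted questionnaire rollup. A hypothetical sketch: the criteria names and weights below are illustrative, not a fixed NAIC or VerifyWise rubric:

```python
def vendor_risk_score(answers, weights=None):
    """Collapse a vendor AI diligence questionnaire into a 0-100 risk score.
    answers maps criterion -> score in [0, 1], where 1 is highest risk.
    Criteria and weights here are assumptions for illustration only."""
    weights = weights or {"bias_testing": 0.30, "data_provenance": 0.25,
                          "model_change_notice": 0.25, "audit_rights": 0.20}
    # Unanswered criteria default to worst case, so diligence gaps raise the score.
    raw = sum(w * answers.get(k, 1.0) for k, w in weights.items())
    return round(100 * raw, 1)
```

Defaulting missing answers to the worst case keeps an incomplete questionnaire from looking safer than a completed one.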

Who is covered

If you write policies, price risk, pay claims or sell AI into the insurance stack, you're in scope.

Life, health, P&C, auto and specialty insurers

Any insurer licensed in an adopting state that uses AI, machine learning or predictive models in a regulated insurance practice.

Reinsurers with ECDIS or model inputs

Reinsurers whose models or data shape ceding insurers' decisions usually sit inside the ceding insurer's governance program.

Managing general agents and third-party administrators

MGAs and TPAs that run pricing, underwriting or claims AI on behalf of insurers fall within the insurer's AIS Program responsibility.

Insurtech and AI vendors

Vendors providing scoring, ECDIS, claims AI or agent-facing systems should expect downstream diligence, audit clauses and evidence obligations tied to the bulletin.

What VerifyWise produces for an AIS Program review

Concrete deliverables an insurer can hand to a state insurance department, generated from the live system without a manual write-up.

AIS Program package

Versioned policies, procedures and a control library mapped to the bulletin. Exports as one bundle a regulator can read.

AI and model inventory export

Exhibit A in CSV and PDF: every AI system with use-case, risk tier, owner, vendor, data sources and the evidence behind each entry.
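The shape of that inventory export can be sketched in a few lines. The column names below are assumptions for illustration; the actual Exhibit A fields may differ:

```python
import csv
import io

# Illustrative Exhibit A-style columns; the real exhibit's fields may differ.
FIELDS = ["system", "use_case", "risk_tier", "owner", "vendor", "data_sources"]

def export_inventory(systems):
    """Render a list of AI-system records as CSV text, one row per system."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS, extrasaction="ignore")
    writer.writeheader()
    for record in systems:
        writer.writerow({f: record.get(f, "") for f in FIELDS})
    return buf.getvalue()
```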

Bias and disparate-impact test results

Selection-rate and impact-ratio tables for each protected class, plus the test data, proxy-variable notes and timestamped corrective actions.

Vendor evidence pack

One file per vendor: AI risk score, contract clauses, attestations, monitoring records and remediation history.

Adverse outcome log

Incident records tied to the affected model and control, with the triage steps and the resolution.

Regulator submission package

Pull evidence packages, inventory extracts and audit logs from one repository, ready when a state insurance department asks.

NAIC clause by clause, mapped to VerifyWise

For insurers and legal teams who want to see the bulletin translated into the concrete feature that produces the evidence. One row per clause, no hand-waving.

NAIC clause | Regulatory expectation | VerifyWise feature | How it satisfies the clause
Section 3 - AIS Program | Insurers must develop, implement and maintain a documented AIS Program | Structured AI governance framework | Pre-built governance structure, policies and workflows that formalise and operationalise the AIS Program
Section 3 - Governance and oversight | Clear accountability, roles and oversight mechanisms must be defined | Role-based access and approval workflows | Assigns ownership, approval flows and accountability across every AI system in the inventory
Section 3.1 - Risk management | Risk-based approach proportional to potential consumer harm | AI risk assessment engine | Scores and classifies AI systems by impact and risk level, then aligns controls to the Degree of Potential Harm
Section 3.2 - Policies and procedures | Written policies for AI development, deployment and monitoring | Policy templates and control mapping | Build and manage AI policies aligned with NAIC expectations and map each policy to the controls it enforces
Section 3.3 - Documentation | Document AI systems, decisions and controls | Central documentation hub | Structured records of models, decisions, approvals and governance artefacts, linked to the relevant control
Section 3 - Scope of AI usage | Identify where AI is used across the business | AI system registry | Central inventory of every AI use case: underwriting, pricing, claims, fraud, marketing and back-office
Section 3 - Lifecycle management | AI systems must be governed across their lifecycle | AI inventory and lifecycle tracking | Tracks each system from development through deployment, monitoring and retirement, with stage-specific controls
Section 3 - Third-party risk | Insurers remain responsible for vendor AI systems | Vendor risk management module | Assesses and monitors third-party AI vendors, captures contract clauses and tracks ongoing vendor performance
Section 3 - Data governance | Data sources and data quality must be governed | Data input documentation and risk flags | Tracks datasets, sources, representativeness and proxy-discrimination flags, tying each to an AI system
Section 3 - Consumer impact | Evaluate potential consumer harm from AI decisions | Impact assessment layer | Links each AI use case to its consumer impact, fairness implications and the controls that mitigate them
Section 4 - Unfair discrimination | Detect, prevent and document unfair discrimination | Bias audit module | Statistical bias testing across protected classes with auditable reports on discrimination risk
Section 4 - Testing and validation | Ongoing testing and validation of AI systems | Continuous bias and risk testing | Recurring assessments logged over time, with trend charts and triggered reviews when metrics move
Section 4 - Transparency and explainability | Insurers must understand and explain AI outcomes | Model documentation and explainability inputs | Captures purpose, inputs, outputs and decision logic per model so regulators and consumers can follow the reasoning
Section 4 - Ongoing monitoring | Continuous monitoring of AI systems required | Monitoring logs and reassessment triggers | Periodic reviews with alerts for drift, data change or performance degradation that require re-evaluation
Section 4 - Regulatory examination | Provide evidence to regulators upon request | Audit trail and evidence vault | Immutable logs and exportable reports produce a regulator-ready evidence pack in one step

Section numbers reference the structure of the NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (adopted Dec 4, 2023). State-adopted versions may re-number; the substance tracks.

Before you pick a platform

Four questions that decide your scope

Every NAIC AIS Program starts from the same four questions. The answers shape the inventory, the risk tiers and the evidence you need to produce first.

01

Which states do you write business in?

Adoption is state by state. Colorado layers SB 21-169 and SB 24-205 on top. New York and California are moving fast. Scope depends entirely on your footprint.

02

Which AI use cases are in scope?

Underwriting, pricing, fraud detection, claims triage and agent-facing AI all carry different risk tiers. Use-case mix decides how tight the controls need to be.

03

Do you already have a model inventory?

If you don't, Exhibit A of the Evaluation Tool stalls on day one. A defensible inventory is usually the first deliverable of an AIS Program.

04

Have you done bias or model risk assessments before?

This tells us your maturity. If you have results, we map them to the bulletin. If you don't, we start with the highest-harm models first.

How VerifyWise compares to other platforms

Public-source snapshot of how four AI governance and compliance platforms map to the eight themes of the NAIC Model Bulletin. Verified 2026-04-29.

Best fit for insurers
NAIC theme | Vanta | Drata | OneTrust | VerifyWise
AI System Program / Governance | ✓ | ✓ | ✓ | ✓
Risk-Based Approach | ✓ | ✓ | ✓ | ✓
Unfair Discrimination / Bias Testing | ~ | ~ | ~ | ✓
Accountability | ~ | ~ | ✓ | ✓
Third-Party / Vendor Oversight | ✓ | ✓ | ✓ | ✓
Documentation & Governance Evidence | ✓ | ✓ | ✓ | ✓
Lifecycle Management | ✓ | ✓ | ✓ | ✓
Deployment model (self-host) | ✕ | ✕ | ✕ | ✓

✓ Supported · ~ Partial · ✕ Not supported

Derived from publicly available vendor documentation, product pages, press releases, and the NAIC Model Bulletin text. Vendors update their products frequently; capabilities marked Partial today may be Supported in a future release. We refresh this comparison quarterly.

Frequently asked questions

Straight answers to what insurers, brokers and AI vendors ask most often about the NAIC regime.

What are the NAIC AI Principles?

The NAIC AI Principles are five guiding principles adopted by the National Association of Insurance Commissioners on August 14, 2020: Fair and Ethical, Accountable, Compliant, Transparent, and Secure/Safe/Robust. They apply to all AI actors in the insurance ecosystem and were later operationalized through the NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, adopted December 4, 2023.

What is the NAIC Model Bulletin?

The NAIC Model Bulletin, adopted December 4, 2023, is a template regulation that state insurance departments can issue. It requires each insurer to adopt, implement and maintain a written AIS Program covering governance, risk management, testing, documentation and third-party vendor oversight. The controls must be calibrated to the Degree of Potential Harm to Consumers from each AI use case.

Which states have adopted the Model Bulletin?

More than 24 NAIC jurisdictions have adopted the Model Bulletin with little to no material change, and additional states have issued related guidance or enacted AI-specific legislation. Adoption is ongoing, so insurers should track the NAIC state-by-state adoption map and consult adopting bulletins for state-specific effective dates.

What is an AIS Program?

The AIS Program is the written, documented program that governs an insurer's use of Artificial Intelligence Systems. It must include governance policies, risk management and internal controls, testing methods for errors and unfair discrimination, third-party vendor oversight, documentation practices and processes for cooperating with regulatory inquiries.

What is the AI Systems Evaluation Tool?

The AI Systems Evaluation Tool pilot launched in March 2026 with 12 participating states. It has four exhibits: Exhibit A measures how extensively an insurer uses AI, Exhibit B evaluates the governance framework, Exhibit C examines high-risk systems including agent-facing AI, and Exhibit D reviews data sources and proxy discrimination. The pilot is shaping how regulators will examine AI use going forward.

Is the insurer responsible for third-party vendor AI?

Yes. The insurer is responsible for AI systems used on its behalf, including vendor models, ECDIS, MGA and TPA systems. The AIS Program must include vendor diligence, contractual audit and cooperation rights, ongoing monitoring of vendor performance, and documented evidence that the insurer reviewed the vendor's approach to testing and bias mitigation.

How do the NAIC rules interact with Colorado SB 21-169 and SB 24-205?

The NAIC Principles and Model Bulletin are industry-specific guidance from insurance regulators. Colorado SB 21-169 is a Colorado insurance law with its own Regulation 10-1-1 testing regime for life insurers and pending rules for auto and health. Colorado SB 24-205 is a broader, cross-sector AI law enforced by the Attorney General. Colorado-licensed insurers can owe compliance under all three at once.

Does the Model Bulletin require a specific risk framework?

The NAIC Model Bulletin does not mandate a specific framework, but practitioners and regulators widely treat the NIST AI Risk Management Framework as a defensible backbone for an AIS Program. Mapping controls to NIST AI RMF gives insurers a structure that aligns with the bulletin's expectations around governance, transparency, testing and risk management.

Does the Model Bulletin replace existing insurance laws?

No. The bulletin is clear that existing laws on unfair trade practices, unfair discrimination, rating, market conduct and corporate governance already apply to AI-driven decisions. The Model Bulletin is a mechanism for regulators to confirm insurers have the governance and testing in place to meet those existing standards when AI is in the loop.

How does VerifyWise support NAIC compliance?

VerifyWise gives insurers what the bulletin requires: a written AIS Program, an AI and model inventory with risk tiers, bias and adverse-outcome testing, vendor oversight, consumer-notice tracking and an evidence pack ready for an exam or the Evaluation Tool pilot. Every artefact is linked to the control it evidences, so when a regulator asks, the answer is one export away.

Further reading

Colorado SB 21-169

Insurance-specific bias testing and ECDIS rules for Colorado-licensed insurers, enforced by the Division of Insurance.

Colorado AI Act (SB 24-205)

Cross-sector AI law enforced by the Attorney General. Applies on top of insurance-specific rules.

Read the guide →

NIST AI RMF

The risk management framework the Model Bulletin recognises as a reference for building defensible AIS Program controls.

Read the guide →

ISO/IEC 42001

The AI management system standard. A solid structural backbone when an insurer wants certification alongside NAIC alignment.

Read the guide →

Ready to stand up an AIS Program?

Spin up the NAIC framework in VerifyWise, import your models and vendors, and start producing the evidence regulators will ask for. No spreadsheet stack, no policy-only theatre.
