
NAIC AI Principles and Model Bulletin compliance
Insurers in adopting states need a written AIS (artificial intelligence systems) Program that governs every AI system in the business, tests for adverse consumer outcomes and stands up to examiner review. VerifyWise encodes the Model Bulletin, the FACTS principles and the AI Systems Evaluation Tool pilot into one operational workspace.
What are the NAIC AI Principles?
The National Association of Insurance Commissioners adopted the NAIC AI Principles unanimously on August 14, 2020. Five principles, captured by the FACTS acronym, set the expectations for every AI actor across the insurance lifecycle: insurers, producers, vendors and supporting organisations.
The principles stayed largely aspirational until December 4, 2023, when NAIC members adopted the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers. The bulletin is a template that state departments issue as their own regulatory guidance. It takes the principles and turns them into a concrete AIS Program obligation: written governance, risk management, testing, vendor oversight and documentation.
Written AIS Program
Documented governance and controls required
Risk-proportional
Controls scale with potential consumer harm
Where insurers are right now
24+ states have adopted the bulletin
And the list is still growing quarter by quarter
AI Systems Evaluation Tool pilot is live
12 states running the 4-exhibit examination in 2026
Existing laws still apply to AI decisions
Unfair trade, unfair discrimination and market conduct rules are not waived
Vendor AI is the insurer's problem
Liability does not transfer through a procurement contract
The FACTS principles
Five principles adopted in 2020. Everything the NAIC has written on AI since, including the Model Bulletin and the 2026 Evaluation Tool, traces back to these.
Fair and ethical
AI actors should respect the rule of law and pursue beneficial consumer outcomes aligned with the risk-based foundation of insurance, avoiding and correcting unintended discriminatory consequences.
Accountable
AI actors are accountable for ensuring that AI systems operate in compliance with the guiding principles and for the outcomes those systems produce.
Compliant
AI actors must have the knowledge and resources in place to comply with all applicable insurance laws, regulations and sub-regulatory guidance in every state where they operate.
Transparent
AI actors should commit to transparency and responsible disclosure regarding AI systems. Regulators and consumers need a way to inquire about, review and seek recourse for AI-driven insurance decisions.
Secure, safe and robust
AI systems must have reasonable traceability of datasets, processes and decisions, with a systematic risk management process that detects and corrects privacy, security and unfair-discrimination risks.
The four pillars of an AIS Program
The Model Bulletin is explicit about what a documented AIS Program must cover. These are the pillars regulators will test against.
Governance
A written AIS Program with board- or senior-management-approved policies, documented roles and authority, and a cross-functional structure that brings together actuarial, data science, underwriting, compliance and legal.
- Written AIS Program document
- Board or senior management oversight
- Defined roles across actuarial, data science, underwriting, compliance and legal
- Policies for selection, development, validation and retirement of AI systems
Risk management and internal controls
Controls and procedures commensurate with the Degree of Potential Harm to Consumers. The higher the stakes of the decision, the tighter the controls, testing cadence and human oversight, as the sketch after this list illustrates.
- AI system inventory with use-case risk tiers
- Validation, testing and ongoing monitoring
- Model drift detection and re-validation triggers
- Human-in-the-loop controls calibrated to risk
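To make the risk-proportional idea concrete, here is a minimal Python sketch of an inventory entry with tier-calibrated controls and a drift-based re-validation trigger. The tier names, the control matrix and the PSI cutoff of 0.25 are illustrative assumptions, not values prescribed by the bulletin.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """Use-case tiers ordered by Degree of Potential Harm to Consumers."""
    LOW = 1      # e.g. internal drafting assistants
    MEDIUM = 2   # e.g. marketing and eligibility models
    HIGH = 3     # e.g. underwriting, pricing, claims decisions


# Illustrative control matrix: tighter cadence and oversight as potential harm rises.
CONTROLS = {
    RiskTier.LOW: {"revalidation_days": 365, "human_in_loop": False},
    RiskTier.MEDIUM: {"revalidation_days": 180, "human_in_loop": True},
    RiskTier.HIGH: {"revalidation_days": 90, "human_in_loop": True},
}


@dataclass
class AISystem:
    name: str
    business_use: str  # underwriting, pricing, claims, marketing, back-office
    tier: RiskTier
    psi: float         # population stability index from the last monitoring run


def needs_revalidation(system: AISystem, psi_threshold: float = 0.25) -> bool:
    """Trigger re-validation when input drift exceeds the threshold.

    PSI > 0.25 as a drift trigger is a common rule of thumb, not an NAIC value.
    """
    return system.psi > psi_threshold


model = AISystem("auto-uw-score-v3", "underwriting", RiskTier.HIGH, psi=0.31)
print(needs_revalidation(model))  # True -> route to the re-validation workflow
print(CONTROLS[model.tier])       # controls calibrated to the tier
```

The structural point: when the system record, its risk tier and its control cadence live in one place, an examiner's question about any model resolves to a single lookup.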
Testing for adverse consumer outcomes
Verification and testing methods that identify errors, bias and unfair discrimination in predictive models and AI systems. Results feed corrective action, not a filing-cabinet report. A worked disparate-impact screen follows the list below.
- Disparate-impact testing across protected classes
- Accuracy, stability and explainability metrics
- Proxy-discrimination screens for data inputs
- Documented remediation when tests fail
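As one concrete adverse-outcome test, the sketch below computes a disparate impact ratio: each group's favourable-outcome rate divided by the most favoured group's rate. The four-fifths cutoff it suggests is a common screening heuristic borrowed from employment law, not a threshold the bulletin sets, and real programs pair it with statistical-significance testing.

```python
from collections import defaultdict


def disparate_impact_ratio(decisions, groups):
    """Each group's favourable-outcome rate relative to the highest group's.

    decisions: 1 for a favourable outcome (e.g. offer issued), 0 for adverse
    groups:    protected-class label per decision, same length
    Ratios below 0.8 (the common four-fifths screen) warrant investigation.
    """
    totals, favourable = defaultdict(int), defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        favourable[g] += d
    rates = {g: favourable[g] / totals[g] for g in totals}
    benchmark = max(rates.values())
    return {g: r / benchmark for g, r in rates.items()}


decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact_ratio(decisions, groups))
# {'A': 1.0, 'B': 0.333...} -> group B falls below 0.8 and triggers
# the documented-remediation step above
```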
Third-party vendor oversight
Responsibility for vendor models, data sources and AI systems stays with the insurer. That means diligence before onboarding, contractual audit rights and ongoing monitoring of vendor performance, all captured in a per-vendor evidence record (sketched after this list).
- Vendor AI diligence and risk scoring
- Contractual audit and cooperation clauses
- Ongoing monitoring of vendor model changes
- Evidence trail for every vendor AI system in use
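What the per-vendor evidence record could look like, as a minimal sketch; the field names, vendor and system are illustrative, not a VerifyWise schema or an NAIC-mandated format.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class VendorAIRecord:
    """One evidence record per third-party AI system, held by the insurer."""
    vendor: str
    system: str
    business_use: str          # e.g. claims triage
    diligence_completed: date  # pre-onboarding due-diligence sign-off
    audit_clause: bool         # contractual audit and cooperation rights
    last_review: date          # most recent vendor performance review
    change_log: list = field(default_factory=list)  # vendor model changes


# Hypothetical vendor and system names, for illustration only.
record = VendorAIRecord(
    vendor="Acme Claims AI",
    system="fnol-triage-v2",
    business_use="claims triage",
    diligence_completed=date(2025, 3, 1),
    audit_clause=True,
    last_review=date(2025, 9, 1),
    change_log=["2025-06: retrained on Q1 loss data"],
)
print(record.system, "| audit rights:", record.audit_clause)
```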
Where AI typically sits inside an insurer
The bulletin is risk-proportional. Controls must scale with the Degree of Potential Harm to Consumers. Here's how regulators currently prioritise use cases.
Underwriting and risk selection
AI that accepts, declines, rates or tiers risk. Highest regulatory scrutiny because of direct consumer impact and disparate-impact exposure.
Pricing and rate setting
Predictive models that inform premium, factors or rating plans. Often combined with external consumer data and information sources (ECDIS), which pulls the data-source review into scope.
Claims triage, handling and fraud detection
AI that denies, delays, steers or flags claims. The 2026 evaluation tool pilot explicitly examines claims handling and total-loss decisions.
Agent-facing policyholder interactions
AI agents handling claims inquiries, endorsement processing, billing disputes and certificate issuance. Rising regulator focus because consumers interact with the model directly.
Marketing, targeting and eligibility
Models that decide who sees an offer or who qualifies to apply. Still in scope for unfair-discrimination rules even when the decision looks upstream.
Back-office operations
Internal drafting, knowledge search, coding assistants. Lower regulatory exposure but still needs governance, especially when outputs reach regulated processes.
The 2026 AI Systems Evaluation Tool pilot
NAIC's pilot has been running in 12 states since March 2026. Four exhibits, each answerable from a well-run AIS Program. If you want to know what an insurance AI exam looks like in practice, this is it.
AI usage inventory
Breadth and depth of AI adoption across the enterprise. Regulators quantify how extensively each insurer uses AI and machine learning.
Governance framework
How the insurer governs AI end to end. Evaluates the AIS Program itself, risk tiers, policies and oversight structures.
High-risk systems detail
A deep dive on specific high-risk AI systems, with a current emphasis on agent-facing systems like claims inquiries, billing disputes and total-loss decisions.
Data source review
Data lineage, quality controls, representativeness and proxy-discrimination screening. Special focus on rate-setting data, social-media signals and aerial imagery.
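To illustrate what a proxy-discrimination screen can look like, the sketch below flags inputs whose correlation with a protected attribute is strong enough to suggest proxying. The univariate Pearson screen and the 0.3 cutoff are simplifying assumptions; production programs add multivariate and interaction tests on top.

```python
import numpy as np


def proxy_screen(features, protected, threshold=0.3):
    """Flag features that correlate with a protected attribute.

    features:  dict of name -> 1-D array of input values
    protected: 1-D array encoding the protected attribute
    The 0.3 cutoff is illustrative, not a regulatory value.
    """
    flagged = []
    for name, values in features.items():
        r = abs(np.corrcoef(values, protected)[0, 1])
        if r > threshold:
            flagged.append(name)
    return flagged


rng = np.random.default_rng(0)
protected = rng.integers(0, 2, 500).astype(float)
features = {
    "credit_score": rng.normal(650, 50, 500),                # independent input
    "zip_density": 2 * protected + rng.normal(0, 0.5, 500),  # acts as a proxy
}
print(proxy_screen(features, protected))  # ['zip_density']
```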
Even insurers outside pilot states should rehearse these exhibits. The pilot is the template regulators will reuse when they examine your AIS Program.
State adoption at a glance
A non-exhaustive list of states that have adopted the NAIC Model Bulletin with little to no material change. Consult each state's bulletin for effective dates and state-specific additions.
| State | Citation | Adopted |
|---|---|---|
| Alaska | Bulletin B 25-02 | Mar 2025 |
| Arkansas | Bulletin 5-2024 | Jun 2024 |
| Connecticut | Bulletin No. MC-25 | Feb 2024 |
| Delaware | Bulletin No. 148 | Feb 2025 |
| District of Columbia | Bulletin 25-IB-01-06/25 | Jun 2025 |
| Hawaii | Memorandum No. 2025-13A | Dec 2025 |
| Illinois | Bulletin 2024-15 | Sep 2024 |
| Indiana | Bulletin 274 | Jul 2024 |
| Kentucky | Bulletin 2024-02 | Apr 2024 |
| Maryland | Bulletin 24-11 | Apr 2024 |
| Massachusetts | Bulletin 2024-10 | Dec 2024 |
| Michigan | Bulletin 2024-14-INS | Jul 2024 |
| Nebraska | Guidance Doc IGD-H1 | Jun 2024 |
| Nevada | Bulletin 24-003 | Jul 2024 |
| New Hampshire | Bulletin INS No. 24-028-AB | Apr 2024 |
| New Jersey | Bulletin 25-03 | Feb 2025 |
| North Carolina | Bulletin 24-B-19 | Dec 2024 |
| Oklahoma | Bulletin 2024-11 | Nov 2024 |
| Pennsylvania | Notice 2024-04 | Apr 2024 |
| Rhode Island | Bulletin 2024-03 | Mar 2024 |
| Vermont | Bulletin 229 | Mar 2024 |
| Washington | Technical Assistance Advisory 2024-01 | Apr 2024 |
| West Virginia | Informational Letter 214 | Apr 2024 |
Source: NAIC Implementation of Model Bulletin tracker and state insurance department bulletins. Dates reflect the month each state's bulletin was issued; check the tracker or the state's own bulletin for the authoritative version.
Who is covered
If you write policies, price risk, pay claims or sell AI into the insurance stack, you're in scope.
Life, health, P&C, auto and specialty insurers
Any insurer licensed in an adopting state that uses AI, machine learning or predictive models in a regulated insurance practice.
Reinsurers with ECDIS or model inputs
Reinsurers whose models or data shape ceding insurers' decisions usually sit inside the ceding insurer's governance program.
Managing general agents and third-party administrators
MGAs and TPAs that run pricing, underwriting or claims AI on behalf of insurers fall within the insurer's AIS Program responsibility.
Insurtech and AI vendors
Vendors providing scoring, ECDIS, claims AI or agent-facing systems should expect downstream diligence, audit clauses and evidence obligations tied to the bulletin.
How VerifyWise covers NAIC expectations
Each row below is an NAIC obligation on the left and the VerifyWise capability that produces the evidence on the right. Built for insurers who need the artefacts, not just a policy library.
NAIC clause by clause, mapped to VerifyWise
For insurers and legal teams who want to see the bulletin translated into the concrete feature that produces the evidence. One row per clause, no hand-waving.
| NAIC clause | Regulatory expectation | VerifyWise feature | How it satisfies the clause |
|---|---|---|---|
| Section 3 – AIS Program | Insurers must develop, implement and maintain a documented AIS Program | Structured AI governance framework | Pre-built governance structure, policies and workflows that formalise and operationalise the AIS Program |
| Section 3 – Governance and oversight | Clear accountability, roles and oversight mechanisms must be defined | Role-based access and approval workflows | Assigns ownership, approval flows and accountability across every AI system in the inventory |
| Section 3.1 – Risk management | Risk-based approach proportional to potential consumer harm | AI risk assessment engine | Scores and classifies AI systems by impact and risk level, then aligns controls to the Degree of Potential Harm |
| Section 3.2 – Policies and procedures | Written policies for AI development, deployment and monitoring | Policy templates and control mapping | Build and manage AI policies aligned with NAIC expectations and map each policy to the controls it enforces |
| Section 3.3 – Documentation | Document AI systems, decisions and controls | Central documentation hub | Structured records of models, decisions, approvals and governance artefacts, linked to the relevant control |
| Section 3 – Scope of AI usage | Identify where AI is used across the business | AI system registry | Central inventory of every AI use case: underwriting, pricing, claims, fraud, marketing and back-office |
| Section 3 – Lifecycle management | AI systems must be governed across their lifecycle | AI inventory and lifecycle tracking | Tracks each system from development through deployment, monitoring and retirement, with stage-specific controls |
| Section 3 – Third-party risk | Insurers remain responsible for vendor AI systems | Vendor risk management module | Assesses and monitors third-party AI vendors, captures contract clauses and tracks ongoing vendor performance |
| Section 3 – Data governance | Data sources and data quality must be governed | Data input documentation and risk flags | Tracks datasets, sources, representativeness and proxy-discrimination flags, tying each to an AI system |
| Section 3 – Consumer impact | Evaluate potential consumer harm from AI decisions | Impact assessment layer | Links each AI use case to its consumer impact, fairness implications and the controls that mitigate them |
| Section 4 – Unfair discrimination | Detect, prevent and document unfair discrimination | Bias audit module | Statistical bias testing across protected classes with auditable reports on discrimination risk |
| Section 4 – Testing and validation | Ongoing testing and validation of AI systems | Continuous bias and risk testing | Recurring assessments logged over time, with trend charts and triggered reviews when metrics move |
| Section 4 – Transparency and explainability | Insurers must understand and explain AI outcomes | Model documentation and explainability inputs | Captures purpose, inputs, outputs and decision logic per model so regulators and consumers can follow the reasoning |
| Section 4 – Ongoing monitoring | Continuous monitoring of AI systems required | Monitoring logs and reassessment triggers | Periodic reviews with alerts for drift, data change or performance degradation that require re-evaluation |
| Section 4 – Regulatory examination | Provide evidence to regulators upon request | Audit trail and evidence vault | Immutable logs and exportable reports produce a regulator-ready evidence pack in one step |
Section numbers reference the structure of the NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (adopted Dec 4, 2023). State-adopted versions may re-number; the substance tracks.
Four questions that decide your scope
Every NAIC AIS Program starts from the same four questions. The answers shape the inventory, the risk tiers and the evidence you need to produce first.
Which states do you write business in?
Adoption is state by state. Colorado layers SB 21-169 and SB 24-205 on top. New York and California are moving fast. Scope depends entirely on your footprint.
Which AI use cases are in scope?
Underwriting, pricing, fraud detection, claims triage and agent-facing AI all carry different risk tiers. Use-case mix decides how tight the controls need to be.
Do you already have a model inventory?
If you don't, Exhibit A of the Evaluation Tool stalls on day one. A defensible inventory is usually the first deliverable of an AIS Program.
Have you done bias or model risk assessments before?
This tells us your maturity. If you have results, we map them to the bulletin. If you don't, we start with the highest-harm models first.
Common NAIC compliance mistakes
Patterns that repeatedly trip up insurers when examiners arrive. Worth pressure-testing your AIS Program against each one.
Treating the bulletin as a policy drafting exercise
The NAIC is explicit: regulators expect to see testing results, incident records and vendor evidence, not just a signed policy. An AIS Program without operational artefacts fails the first serious review.
Leaving ECDIS and vendor data sources out of scope
External consumer data, credit-based scores, wearables, social signals and aerial imagery all sit inside Exhibit D. Insurers that inventory only their own models miss the largest part of the regulator's data-review workload. This also intersects the Colorado SB 21-169 regime for Colorado-licensed insurers.
Applying one control level to every AI use case
The bulletin is risk-proportional. An underwriting model and an internal drafting assistant don't need the same controls. Insurers who over-govern low-risk use cases burn capacity they need on the high-risk ones.
Assuming vendor diligence transfers responsibility
Responsibility stays with the insurer. A vendor certification is evidence, not a substitute. Regulators will ask the insurer to produce testing, monitoring and incident records even when the model is third-party.
Ignoring the AI Systems Evaluation Tool pilot
The pilot is not the bulletin, but it's the template for how regulators will ask questions. Insurers in non-pilot states who ignore it lose the chance to rehearse the format they'll eventually be examined on.
Further reading
Colorado SB 21-169
Insurance-specific bias testing and ECDIS rules for Colorado-licensed insurers, enforced by the Division of Insurance.
Read the guide →
Colorado AI Act (SB 24-205)
Cross-sector AI law enforced by the Attorney General. Applies on top of insurance-specific rules.
Read the guide →
NIST AI RMF
The risk management framework the Model Bulletin recognises as a reference for building defensible AIS Program controls.
Read the guide →
ISO/IEC 42001
The AI management system standard. A solid structural backbone when an insurer wants certification alongside NAIC alignment.
Read the guide →
Ready to stand up an AIS Program?
Spin up the NAIC framework in VerifyWise, import your models and vendors, and start producing the evidence regulators will ask for. No spreadsheet stack, no policy-only theatre.