
NAIC AI Principles and Model Bulletin compliance
If you write business in an adopting state, you owe a written AIS Program. It has to govern every AI system you run, test for adverse consumer outcomes and survive an examiner walking through your evidence. VerifyWise puts the Model Bulletin, the FACTS principles and the Evaluation Tool exhibits in one place.
What are the NAIC AI Principles?
Five principles, known by the FACTS acronym, adopted by the NAIC on August 14, 2020 to set expectations for insurers, producers and vendors across the AI lifecycle.
On December 4, 2023, NAIC members adopted the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, turning the principles into a concrete AIS Program obligation: written governance, risk management, testing, vendor oversight and documentation. State insurance departments then issue it as their own regulatory guidance.
Written AIS Program
Documented governance and controls required
Risk-proportional
Controls scale with potential consumer harm
Where insurers are right now
24+ states have adopted the bulletin
List growing each quarter
AI Systems Evaluation Tool pilot is live
12 states running the 4-exhibit examination in 2026
Existing laws still apply to AI decisions
Unfair trade, discrimination and market conduct rules are not waived
Vendor AI is the insurer's problem
Liability does not transfer through procurement
The FACTS principles
Five principles adopted in 2020. Everything the NAIC has written on AI since, including the Model Bulletin and the 2026 Evaluation Tool, traces back to these.
- F
Fair and ethical
AI actors respect the rule of law and pursue consumer outcomes consistent with the risk-based foundation of insurance. When an AI system produces unintended discrimination, the actor finds it and corrects it.
- A
Accountable
AI actors are answerable for whether their systems follow these principles and for the outcomes those systems produce. Responsibility doesn't transfer through a procurement contract.
- C
Compliant
AI actors keep the knowledge and resources to comply with applicable insurance laws, regulations and sub-regulatory guidance in every state where they operate.
- T
Transparent
AI actors disclose how their AI systems work, in line with responsible-disclosure norms. Regulators and consumers need a path to inquire about, review and seek recourse for AI-driven insurance decisions.
- S
Secure, safe and robust
AI systems carry reasonable traceability of datasets, processes and decisions. A systematic risk management process detects and corrects privacy, security and unfair-discrimination risks across the lifecycle.
The four pillars of an AIS Program
What a documented AIS Program must cover: the pillars regulators test against.
Governance
A written AIS Program with senior-management approval and clear cross-functional roles.
- Written AIS Program
- Board or senior management oversight
- Cross-functional roles
- Lifecycle policies
Risk management and internal controls
Controls scaled to the Degree of Potential Harm to Consumers, tighter for higher-stakes decisions.
- AI inventory with risk tiers
- Validation and ongoing monitoring
- Drift detection and re-validation
- Human-in-the-loop calibrated to risk
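In practice, drift detection often reduces to comparing the production score distribution against the distribution recorded at validation time. A minimal sketch of one common choice, a binned Population Stability Index, follows; the thresholds and binning here are illustrative assumptions, not prescribed by the bulletin:

```python
import math

def population_stability_index(expected, actual):
    """Population Stability Index between two binned score distributions.

    `expected` and `actual` are lists of bin proportions that each sum to 1.
    Common rules of thumb: PSI < 0.1 is stable, > 0.25 is significant drift.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # guard against log(0) on empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi

# Score distribution at validation time vs. what production now produces
baseline = [0.25, 0.35, 0.25, 0.15]
current = [0.10, 0.30, 0.30, 0.30]
psi = population_stability_index(baseline, current)
if psi > 0.25:
    print(f"PSI {psi:.3f}: significant drift, trigger re-validation")
```

A monitoring job would run this per model on a schedule and open a re-validation task when the threshold trips, which is the evidence trail an examiner looks for.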
Testing for adverse consumer outcomes
Test methods that surface errors, bias and unfair discrimination, with results that drive remediation.
- Disparate-impact testing
- Accuracy and explainability metrics
- Proxy-discrimination screens
- Remediation when tests fail
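The selection-rate comparison behind disparate-impact testing is simple arithmetic: compute each group's approval rate and compare it to the highest-rate group. A minimal sketch, using the four-fifths rule of thumb as the review threshold (the actual test method and threshold are the insurer's choice, not mandated by the bulletin):

```python
def impact_ratios(selections):
    """Selection rate per group and impact ratio vs. the highest-rate group.

    `selections` maps group -> (approved, total). An impact ratio below 0.8
    (the four-fifths rule of thumb) flags potential disparate impact.
    """
    rates = {g: approved / total for g, (approved, total) in selections.items()}
    best = max(rates.values())
    return {g: (rate, rate / best) for g, rate in rates.items()}

# Illustrative approval counts, not real data
results = impact_ratios({
    "group_a": (480, 600),   # 80% approval rate
    "group_b": (210, 350),   # 60% approval rate
})
for group, (rate, ratio) in results.items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f} impact_ratio={ratio:.2f} {flag}")
```

A failing ratio is the trigger for the remediation step above: investigate whether a proxy variable drives the gap, document the finding and record the corrective action.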
Third-party vendor oversight
Responsibility for vendor models stays with the insurer: diligence, audit rights, ongoing monitoring.
- Vendor AI diligence and risk scoring
- Contractual audit rights
- Monitoring of vendor model changes
- Evidence trail per vendor system
Who is covered
If you write policies, price risk, pay claims or sell AI into the insurance stack, you're in scope.
Life, health, P&C, auto and specialty insurers
Any insurer licensed in an adopting state that uses AI, machine learning or predictive models in a regulated insurance practice.
Reinsurers with ECDIS or model inputs
Reinsurers whose models or data shape ceding insurers' decisions usually sit inside the ceding insurer's governance program.
Managing general agents and third-party administrators
MGAs and TPAs that run pricing, underwriting or claims AI on behalf of insurers fall within the insurer's AIS Program responsibility.
Insurtech and AI vendors
Vendors providing scoring, ECDIS, claims AI or agent-facing systems should expect downstream diligence, audit clauses and evidence obligations tied to the bulletin.
What VerifyWise produces for an AIS Program review
Concrete deliverables an insurer can hand to a state insurance department, generated from the live system without a manual write-up.
AIS Program package
Versioned policies, procedures and a control library mapped to the bulletin. Exports as one bundle a regulator can read.
AI and model inventory export
Exhibit A in CSV and PDF: every AI system with use-case, risk tier, owner, vendor, data sources and the evidence behind each entry.
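To make the export format concrete, here is a sketch of a minimal CSV inventory row. The field names are assumptions modeled on the description above, not the official Exhibit A column set, which comes from the NAIC Evaluation Tool itself:

```python
import csv
import io

# Hypothetical field list modeled on the deliverable described above
FIELDS = ["system", "use_case", "risk_tier", "owner", "vendor", "data_sources"]

inventory = [
    {"system": "claims-triage-v2", "use_case": "claims", "risk_tier": "high",
     "owner": "Claims Ops", "vendor": "in-house", "data_sources": "claims history"},
]

# Write the inventory to an in-memory CSV, one row per AI system
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(inventory)
print(buf.getvalue())
```

The point of a generated export is that every cell traces back to a live record, so the CSV a regulator receives never diverges from the system of record.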
Bias and disparate-impact test results
Selection-rate and impact-ratio tables for each protected class, plus the test data, proxy-variable notes and timestamped corrective actions.
Vendor evidence pack
One file per vendor: AI risk score, contract clauses, attestations, monitoring records and remediation history.
Adverse outcome log
Incident records tied to the affected model and control, with the triage steps and the resolution.
Regulator submission package
Pull evidence packages, inventory extracts and audit logs from one repository, ready when a state insurance department asks.
NAIC clause by clause, mapped to VerifyWise
For insurers and legal teams who want to see the bulletin translated into the concrete features that produce the evidence. One row per clause, no hand-waving.
| NAIC clause | Regulatory expectation | VerifyWise feature | How it satisfies the clause |
|---|---|---|---|
| Section 3 – AIS Program | Insurers must develop, implement and maintain a documented AIS Program | Structured AI governance framework | Pre-built governance structure, policies and workflows that formalise and operationalise the AIS Program |
| Section 3 – Governance and oversight | Clear accountability, roles and oversight mechanisms must be defined | Role-based access and approval workflows | Assigns ownership, approval flows and accountability across every AI system in the inventory |
| Section 3.1 – Risk management | Risk-based approach proportional to potential consumer harm | AI risk assessment engine | Scores and classifies AI systems by impact and risk level, then aligns controls to the Degree of Potential Harm |
| Section 3.2 – Policies and procedures | Written policies for AI development, deployment and monitoring | Policy templates and control mapping | Build and manage AI policies aligned with NAIC expectations and map each policy to the controls it enforces |
| Section 3.3 – Documentation | Document AI systems, decisions and controls | Central documentation hub | Structured records of models, decisions, approvals and governance artefacts, linked to the relevant control |
| Section 3 – Scope of AI usage | Identify where AI is used across the business | AI system registry | Central inventory of every AI use case: underwriting, pricing, claims, fraud, marketing and back-office |
| Section 3 – Lifecycle management | AI systems must be governed across their lifecycle | AI inventory and lifecycle tracking | Tracks each system from development through deployment, monitoring and retirement, with stage-specific controls |
| Section 3 – Third-party risk | Insurers remain responsible for vendor AI systems | Vendor risk management module | Assesses and monitors third-party AI vendors, captures contract clauses and tracks ongoing vendor performance |
| Section 3 – Data governance | Data sources and data quality must be governed | Data input documentation and risk flags | Tracks datasets, sources, representativeness and proxy-discrimination flags, tying each to an AI system |
| Section 3 – Consumer impact | Evaluate potential consumer harm from AI decisions | Impact assessment layer | Links each AI use case to its consumer impact, fairness implications and the controls that mitigate them |
| Section 4 – Unfair discrimination | Detect, prevent and document unfair discrimination | Bias audit module | Statistical bias testing across protected classes with auditable reports on discrimination risk |
| Section 4 – Testing and validation | Ongoing testing and validation of AI systems | Continuous bias and risk testing | Recurring assessments logged over time, with trend charts and triggered reviews when metrics move |
| Section 4 – Transparency and explainability | Insurers must understand and explain AI outcomes | Model documentation and explainability inputs | Captures purpose, inputs, outputs and decision logic per model so regulators and consumers can follow the reasoning |
| Section 4 – Ongoing monitoring | Continuous monitoring of AI systems required | Monitoring logs and reassessment triggers | Periodic reviews with alerts for drift, data change or performance degradation that require re-evaluation |
| Section 4 – Regulatory examination | Provide evidence to regulators upon request | Audit trail and evidence vault | Immutable logs and exportable reports produce a regulator-ready evidence pack in one step |
Section numbers reference the structure of the NAIC Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (adopted Dec 4, 2023). State-adopted versions may re-number; the substance tracks.
Four questions that decide your scope
Every NAIC AIS Program starts from the same four questions. The answers shape the inventory, the risk tiers and the evidence you need to produce first.
Which states do you write business in?
Adoption is state by state. Colorado layers SB 21-169 and SB 24-205 on top. New York and California are moving fast. Scope depends entirely on your footprint.
Which AI use cases are in scope?
Underwriting, pricing, fraud detection, claims triage and agent-facing AI all carry different risk tiers. Use-case mix decides how tight the controls need to be.
Do you already have a model inventory?
If you don't, Exhibit A of the Evaluation Tool stalls on day one. A defensible inventory is usually the first deliverable of an AIS Program.
Have you done bias or model risk assessments before?
This tells us your maturity. If you have results, we map them to the bulletin. If you don't, we start with the highest-harm models first.
How VerifyWise compares to other platforms
Public-source snapshot of how four AI governance and compliance platforms map to the eight themes of the NAIC Model Bulletin. Verified 2026-04-29.
| NAIC theme | Vanta | Drata | OneTrust | VerifyWise (best fit for insurers) |
|---|---|---|---|---|
| AI System Program / Governance | ✓ | ✓ | ✓ | ✓ |
| Risk-Based Approach | ✓ | ✓ | ✓ | ✓ |
| Unfair Discrimination / Bias Testing | ~ | ~ | ~ | ✓ |
| Accountability | ~ | ~ | ✓ | ✓ |
| Third-Party / Vendor Oversight | ✓ | ✓ | ✓ | ✓ |
| Documentation & Governance Evidence | ✓ | ✓ | ✓ | ✓ |
| Lifecycle Management | ✓ | ✓ | ✓ | ✓ |
| Deployment model (self-host) | ✗ | ✗ | ✗ | ✓ |

✓ Supported · ~ Partial · ✗ Not supported
Derived from publicly available vendor documentation, product pages, press releases, and the NAIC Model Bulletin text. Vendors update their products frequently; capabilities marked Partial today may be Supported in a future release. We refresh this comparison quarterly.
Frequently asked questions
Straight answers to what insurers, brokers and AI vendors ask most often about the NAIC regime.
Further reading
Colorado SB 21-169
Insurance-specific bias testing and ECDIS rules for Colorado-licensed insurers, enforced by the Division of Insurance.
Colorado AI Act (SB 24-205)
Cross-sector AI law enforced by the Attorney General. Applies on top of insurance-specific rules.
Read the guide →
NIST AI RMF
The risk management framework the Model Bulletin recognises as a reference for building defensible AIS Program controls.
Read the guide →
ISO/IEC 42001
The AI management system standard. A solid structural backbone when an insurer wants certification alongside NAIC alignment.
Read the guide →
Ready to stand up an AIS Program?
Spin up the NAIC framework in VerifyWise, import your models and vendors, and start producing the evidence regulators will ask for. No spreadsheet stack, no policy-only theatre.
