NAIC AI Principles (FACTS)
On August 14, 2020, the National Association of Insurance Commissioners (NAIC) unanimously adopted five guiding principles for the use of artificial intelligence in the US insurance industry. The principles are captured by the FACTS acronym and apply to every AI actor in the insurance ecosystem, not only insurers.
The five principles
- Fair and Ethical: AI actors should respect the rule of law and pursue beneficial consumer outcomes aligned with the risk-based foundation of insurance, avoiding and correcting unintended discriminatory consequences.
- Accountable: AI actors are accountable for ensuring AI systems operate in compliance with the principles and for the outcomes those systems produce.
- Compliant: AI actors must have the knowledge and resources in place to comply with all applicable insurance laws, regulations and sub-regulatory guidance in every state where they operate.
- Transparent: AI actors should commit to transparency and responsible disclosure. Regulators and consumers should have a way to inquire about, review and seek recourse for AI-driven insurance decisions.
- Secure, Safe and Robust: AI systems must have reasonable traceability of datasets, processes and decisions, and a systematic risk management process that detects and corrects privacy, security and unfair-discrimination risks.
How they relate to the Model Bulletin
The principles remained largely aspirational until December 4, 2023, when the NAIC adopted the Model Bulletin on the Use of Artificial Intelligence Systems by Insurers. The bulletin turns the FACTS principles into a concrete AI systems (AIS) program obligation: written governance, risk management, testing, vendor oversight and documentation. More than 24 states had adopted the bulletin by 2025.
Related VerifyWise resources
For a full guide on how insurers operationalize the FACTS principles and the Model Bulletin, see the NAIC AI Principles and Model Bulletin compliance guide.