
NIST AI RMF

Implement the NIST AI Risk Management Framework.

Overview

The NIST AI Risk Management Framework (AI RMF) is a voluntary framework developed by the U.S. National Institute of Standards and Technology to help organizations design, develop, deploy, and use AI systems responsibly. It provides practical guidance for managing AI risks throughout the AI lifecycle.

Unlike prescriptive regulations, the NIST AI RMF offers flexible, risk-based guidance that organizations can adapt to their specific context. It emphasizes trustworthiness characteristics and provides a structured approach to identifying, assessing, and managing AI risks. Many organizations use it as a foundation for their AI governance programs.

Why use the NIST AI RMF?

The NIST AI RMF has emerged as a leading framework for AI governance because it balances comprehensiveness with flexibility. Whether you are a startup deploying your first AI model or an enterprise managing hundreds of AI systems, the framework scales to your needs.

  • Flexible framework: Adaptable to organizations of any size, sector, or AI maturity level. You can implement it incrementally as your AI governance capabilities grow
  • Risk-based approach: Focus resources on the risks that matter most to your organization rather than checking boxes on a compliance list
  • Widely recognized: Referenced by regulators, customers, and partners globally. Increasingly required in government contracts and enterprise procurement
  • Complementary: Aligns with and supports compliance with regulations and standards such as the EU AI Act and ISO 42001. Implementing the AI RMF creates foundations for other standards
  • Practical guidance: The accompanying Playbook provides specific suggested actions and examples for each subcategory
  • Free and accessible: Publicly available with no licensing requirements. NIST continues to develop additional resources and profiles
  • Stakeholder trust: Demonstrates to customers, investors, and the public that you take AI risks seriously and have processes to manage them

The NIST AI RMF was published in January 2023 and is accompanied by a Playbook with detailed implementation guidance. NIST continues to develop additional resources, including profiles for specific use cases and sectors.

Trustworthy AI characteristics

At the heart of the NIST AI RMF are seven characteristics of trustworthy AI systems. These characteristics are interconnected and sometimes in tension with each other. Effective AI governance requires balancing these characteristics based on context, use case, and stakeholder needs.

Safe

AI systems should not endanger human life, health, property, or the environment. Safety considerations span the entire AI lifecycle from design through deployment and retirement.

Secure and resilient

AI systems should withstand attacks and recover from failures. This includes protecting against adversarial manipulation, data poisoning, and model theft.

Explainable and interpretable

AI outputs should be understandable to relevant stakeholders. The level of explainability needed depends on the use case and who needs to understand the system.

Accountable and transparent

AI systems should have clear lines of responsibility for their outcomes, along with openness about system capabilities and limitations. Organizations should be able to explain how and why AI decisions are made.

Fair with harmful bias managed

AI systems should treat individuals and groups equitably. This requires active efforts to identify, measure, and mitigate harmful biases throughout the AI lifecycle.

Privacy enhanced

AI systems should protect individual privacy and data rights. This includes privacy considerations during data collection, model training, and inference.

Valid and reliable

AI systems should perform consistently and as intended across different conditions and over time. Validation should match the deployment context.

These characteristics are not independent. For example, increasing explainability might reduce model performance, and strong privacy protections could limit the data available for bias testing. The framework helps organizations navigate these tradeoffs thoughtfully.

Core functions

The NIST AI RMF is organized around four core functions that provide a structure for managing AI risks. These functions are not sequential; organizations should engage with all four continuously throughout the AI lifecycle.

Govern

The Govern function establishes and maintains a culture of risk management for AI across the organization. Unlike the other three functions, which focus on specific AI systems, Govern is cross-cutting and informs how Map, Measure, and Manage are performed.

  • Policies and procedures: Define organizational AI policies, procedures, and practices that are transparent and consistently implemented
  • Accountability structures: Establish clear roles, responsibilities, and lines of authority for AI risk management
  • Legal and regulatory compliance: Ensure processes are in place to understand and comply with applicable AI regulations
  • Risk culture: Promote critical thinking and a safety-first mindset throughout AI design, development, and deployment
  • Third-party management: Address risks from third-party AI software, data, and services
  • Workforce development: Provide training so personnel can perform AI risk management duties effectively

Map

The Map function frames the context in which AI systems operate and identifies potential impacts. Thorough context mapping is essential because AI risks depend heavily on the specific use case, deployment environment, and affected stakeholders.

  • Context establishment: Document intended purposes, users, deployment settings, and applicable laws and norms
  • System categorization: Define what tasks the AI system performs and how its outputs will be used
  • Benefits and costs: Examine potential benefits and costs, including non-monetary impacts on individuals and communities
  • Third-party components: Map risks from third-party data, models, and AI services
  • Impact characterization: Identify the likelihood and magnitude of potential impacts on individuals, groups, and society
  • Stakeholder engagement: Integrate feedback from affected communities and external stakeholders

Measure

The Measure function employs quantitative and qualitative methods to analyze, assess, and monitor AI risks and trustworthiness characteristics. Measurement approaches should be connected to the deployment context identified in the Map function.

  • Metrics selection: Identify appropriate metrics for measuring AI risks based on the mapped context and trustworthiness characteristics
  • Trustworthiness evaluation: Evaluate AI systems against all relevant trustworthiness characteristics (safety, fairness, privacy, etc.)
  • Testing and validation: Document test sets, metrics, and tools used during testing, evaluation, verification, and validation (TEVV)
  • Ongoing monitoring: Monitor deployed AI systems for performance, behavior, and compliance with requirements
  • Risk tracking: Track existing, unanticipated, and emergent risks over time
  • Feedback integration: Gather feedback from end users and affected communities about system performance

Manage

The Manage function allocates resources to address mapped and measured risks based on their priority and the organization's risk tolerance. It turns risk assessment into action.

  • Risk prioritization: Prioritize risks for treatment based on impact, likelihood, and available resources
  • Response strategies: Develop and document responses to high-priority risks (mitigate, transfer, avoid, or accept)
  • Residual risk documentation: Document negative residual risks that remain after treatment
  • Benefit maximization: Plan strategies to maximize AI benefits while minimizing negative impacts
  • Incident response: Establish processes for responding to, recovering from, and learning from AI incidents
  • Third-party risk management: Monitor and manage risks from third-party AI components and services

How VerifyWise supports NIST AI RMF

VerifyWise provides a structured environment to implement and track your NIST AI RMF activities. The platform organizes the framework into functions, categories, and subcategories, making it easy to work through requirements systematically.

  • Govern: Policy management, role-based access control, and organizational structure documentation for AI governance
  • Map: Model inventory with context documentation, impact assessment tools, and stakeholder tracking
  • Measure: Risk assessment tracking across all trustworthiness characteristics with metrics and evidence collection
  • Manage: Risk mitigation planning, incident tracking, and evidence collection to demonstrate risk treatment

Best practice
Use the NIST AI RMF Playbook alongside VerifyWise. The Playbook provides specific suggested actions for each subcategory that you can track in VerifyWise. Download the Playbook from the official NIST website.

NIST AI RMF assessment structure

VerifyWise organizes the NIST AI RMF into a three-level hierarchy that mirrors the framework's structure:

Functions

The four core functions (Govern, Map, Measure, Manage) that organize risk management activities.

Categories

Groups of related outcomes within each function. For example, GOVERN 1 focuses on policies and procedures.

Subcategories

Specific outcomes that represent actionable requirements. These are what you track and implement.
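
If it helps to see the hierarchy as data, the sketch below models the three levels in TypeScript. Type names, field names, and the example values are illustrative assumptions for this guide, not the VerifyWise schema or the exact NIST wording.

  // Minimal sketch of the function > category > subcategory hierarchy.
  // Names and example strings are illustrative only.

  type FunctionId = "GOVERN" | "MAP" | "MEASURE" | "MANAGE";

  interface Subcategory {
    id: string;       // e.g. "GOVERN 1.1"
    outcome: string;  // the specific outcome to achieve and track
  }

  interface Category {
    id: string;       // e.g. "GOVERN 1"
    focus: string;    // short description of the grouped outcomes
    subcategories: Subcategory[];
  }

  interface RmfFunction {
    id: FunctionId;
    categories: Category[];
  }

  // Example: a fragment of the Govern function.
  const govern: RmfFunction = {
    id: "GOVERN",
    categories: [
      {
        id: "GOVERN 1",
        focus: "Policies, processes, and procedures for AI risk management",
        subcategories: [
          { id: "GOVERN 1.1", outcome: "Applicable legal and regulatory requirements are understood and managed" },
        ],
      },
    ],
  };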

The functions screen

When you access NIST AI RMF in VerifyWise, you see the four core functions with their categories and subcategories. Each function shows progress indicators so you can quickly see where you stand:

  • GOVERN (6 categories, ~19 subcategories) — Establishes AI risk management culture and policies
  • MAP (5 categories, ~18 subcategories) — Frames context and scope of AI risks
  • MEASURE (4 categories, ~25 subcategories) — Evaluates AI risks and trustworthiness
  • MANAGE (4 categories, ~15 subcategories) — Addresses identified risks

Working with subcategories

Subcategories are the actionable units in the NIST AI RMF. Each subcategory represents a specific outcome your organization should achieve. Click on a subcategory to open its detail view where you can:

  1. Review the requirement: Read the subcategory description to understand what is expected
  2. Document implementation: Describe how your organization addresses this requirement
  3. Link evidence: Attach documents, policies, or records from your Evidence Hub
  4. Assign responsibility: Set owner, reviewer, and approver for accountability
  5. Update status: Track progress through the implementation workflow
  6. Add tags: Organize subcategories with custom tags for filtering
  7. Link risks: Connect use case risks that this subcategory addresses

Subcategory detail fields

For each subcategory, VerifyWise tracks the fields below (see the sketch after this list):

  • Status: Current progress through the implementation workflow
  • Implementation description: Your documentation of how the requirement is addressed
  • Evidence links: Supporting documents, policies, and artifacts
  • Owner: Person responsible for implementation
  • Reviewer: Person who reviews the implementation
  • Approver: Person who gives final sign-off
  • Due date: Target completion date
  • Auditor feedback: Notes from internal or external auditors
  • Tags: Custom labels for organization and filtering
  • Linked risks: Use case risks associated with this subcategory
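
The sketch below shows these fields as a single record. Field names are hypothetical and simply mirror the list above; the status values are those described in the next section.

  // Illustrative shape of a single subcategory record. Field names are
  // hypothetical and mirror the fields listed above, not an actual schema.

  interface SubcategoryRecord {
    subcategoryId: string;             // e.g. "MAP 1.1"
    status: string;                    // one of the workflow statuses (see below)
    implementationDescription: string; // how the requirement is addressed
    evidenceLinks: string[];           // references into the Evidence Hub
    owner: string;                     // responsible for implementation
    reviewer: string;                  // reviews the implementation
    approver: string;                  // gives final sign-off
    dueDate?: string;                  // target completion date (ISO format)
    auditorFeedback?: string;          // notes from internal or external auditors
    tags: string[];                    // custom labels for filtering
    linkedRiskIds: string[];           // use case risks this subcategory addresses
  }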

Status workflow

NIST AI RMF subcategories follow a detailed status workflow that supports review and approval processes (a sketch of the allowed transitions follows the list):

  • Not started — Work has not begun on this subcategory
  • Draft — Initial implementation documentation is being prepared
  • In progress — Active work is underway to implement the requirement
  • Awaiting review — Implementation is complete and ready for reviewer assessment
  • Awaiting approval — Reviewer has approved; waiting for final approver sign-off
  • Implemented — The subcategory has been fully addressed and approved
  • Needs rework — Reviewer or approver has identified issues that need correction
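
One way to read this workflow is as a set of allowed transitions between statuses. The transitions below are inferred from the status descriptions above and are an assumption for illustration, not a documented VerifyWise rule set.

  // The review/approval flow sketched as allowed status transitions.
  // Inferred from the descriptions above; treat the rules as an assumption.

  type Status =
    | "Not started" | "Draft" | "In progress" | "Awaiting review"
    | "Awaiting approval" | "Implemented" | "Needs rework";

  const allowedTransitions: Record<Status, Status[]> = {
    "Not started":       ["Draft"],
    "Draft":             ["In progress"],
    "In progress":       ["Awaiting review"],
    "Awaiting review":   ["Awaiting approval", "Needs rework"],
    "Awaiting approval": ["Implemented", "Needs rework"],
    "Needs rework":      ["In progress"],
    "Implemented":       [],
  };

  function canTransition(from: Status, to: Status): boolean {
    return allowedTransitions[from].includes(to);
  }

  // Example: a reviewer sending an item back for correction.
  canTransition("Awaiting review", "Needs rework"); // true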

Tracking your progress

VerifyWise provides multiple ways to monitor your NIST AI RMF implementation:

  • Overall completion: Total subcategories implemented vs. total subcategories
  • Progress by function: Separate progress tracking for Govern, Map, Measure, and Manage
  • Status breakdown: Distribution across all status values
  • Assignment coverage: How many subcategories have owners assigned
  • Overdue items: Subcategories past their due date

Use these metrics to identify bottlenecks, allocate resources, and report progress to stakeholders.
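
The sketch below shows one way these metrics could be computed from subcategory records, using a slimmed-down version of the record shape sketched earlier. It illustrates the calculations only and is not VerifyWise code.

  // Computing the progress metrics above from subcategory records.
  // The record shape is illustrative, not the VerifyWise schema.

  interface SubcategoryRow {
    functionId: string;  // "GOVERN" | "MAP" | "MEASURE" | "MANAGE"
    status: string;      // one of the workflow statuses
    owner?: string;
    dueDate?: string;    // ISO date, e.g. "2025-06-30"
  }

  function summarize(rows: SubcategoryRow[], today: Date) {
    const total = rows.length;
    const implemented = rows.filter(r => r.status === "Implemented").length;

    // Progress by function: implemented vs. total within each function.
    const byFunction: { [fn: string]: { done: number; total: number } } = {};
    for (const r of rows) {
      const f = (byFunction[r.functionId] ??= { done: 0, total: 0 });
      f.total += 1;
      if (r.status === "Implemented") f.done += 1;
    }

    // Status breakdown: how many subcategories sit in each status.
    const statusBreakdown: { [s: string]: number } = {};
    for (const r of rows) {
      statusBreakdown[r.status] = (statusBreakdown[r.status] ?? 0) + 1;
    }

    // Assignment coverage and overdue items.
    const withOwner = rows.filter(r => !!r.owner).length;
    const overdue = rows.filter(
      r => r.dueDate && new Date(r.dueDate) < today && r.status !== "Implemented"
    ).length;

    return {
      overallCompletion: total ? implemented / total : 0,
      byFunction,
      statusBreakdown,
      assignmentCoverage: total ? withOwner / total : 0,
      overdueItems: overdue,
    };
  }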

Linking evidence

For each subcategory, you can link evidence to demonstrate how you meet the requirement:

  1. Open the subcategory detail view
  2. Navigate to the evidence section
  3. Select existing evidence from your Evidence Hub or upload new documents
  4. Add implementation notes explaining how the evidence supports compliance

Best practice
The NIST AI RMF Playbook lists suggested actions for each subcategory. Use these as a guide for what evidence to collect and what implementation activities to document.

Linking risks

The NIST AI RMF is fundamentally about managing AI risks. You can link use case risks to subcategories to create traceability between your risk assessment and your control implementation. When you link a risk to a subcategory, it demonstrates how your NIST AI RMF activities address specific identified risks.
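
As a small illustration of that traceability, the sketch below flags identified risks that are not yet linked to any subcategory. The shapes and names are assumptions for this guide only.

  // Finding identified risks that no subcategory addresses yet.
  // Shapes and names are illustrative.

  interface UseCaseRisk {
    id: string;
    title: string;
  }

  interface LinkedSubcategory {
    subcategoryId: string;
    linkedRiskIds: string[];
  }

  function unaddressedRisks(
    risks: UseCaseRisk[],
    subcategories: LinkedSubcategory[]
  ): UseCaseRisk[] {
    const covered = new Set(subcategories.flatMap(s => s.linkedRiskIds));
    return risks.filter(r => !covered.has(r.id));
  }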

Frequently asked questions

Is the NIST AI RMF mandatory?

The NIST AI RMF is voluntary for most organizations. However, it is increasingly referenced in government contracts, procurement requirements, and industry standards. Some federal agencies require AI RMF implementation for AI systems used in government contexts. Even when not required, implementing the framework demonstrates AI governance maturity.

Where should we start with implementation?

Start with the Govern function, as it establishes the organizational foundation for all other AI risk management activities. Then focus on the Map function for your highest-risk AI systems to understand context and potential impacts. You do not need to complete all subcategories before moving to Measure and Manage.

How does the AI RMF relate to the EU AI Act?

The NIST AI RMF and EU AI Act share common goals around trustworthy AI but differ in approach. The EU AI Act is a regulation with mandatory requirements, while the AI RMF is voluntary guidance. Organizations subject to the EU AI Act will find that implementing the AI RMF addresses many of the same concerns and can support EU AI Act compliance.

Should we use the AI RMF or ISO 42001?

They serve different but complementary purposes. ISO 42001 provides a certifiable management system standard, while the AI RMF offers practical risk management guidance. Many organizations implement the AI RMF as the operational framework within an ISO 42001-certified management system. You can use both together.

Do we need to implement all subcategories?

No. The AI RMF is designed to be flexible. Organizations should prioritize subcategories based on their specific AI systems, risk tolerance, and resources. Start with the subcategories most relevant to your highest-risk AI systems and expand coverage over time as your AI governance program matures.

What is the AI RMF Playbook?

The NIST AI RMF Playbook is a companion document that provides specific suggested actions for each subcategory. It includes examples, considerations, and guidance to help organizations understand how to implement each requirement. The Playbook is available for free from the NIST website and is regularly updated.
