NIST AI RMF
Implement the NIST AI Risk Management Framework.
Overview
The NIST AI Risk Management Framework (AI RMF) is a voluntary framework released in January 2023 by the U.S. National Institute of Standards and Technology. It helps organizations design, develop, deploy and use AI systems responsibly, with practical guidance for managing AI risks throughout the lifecycle.
Unlike prescriptive regulations, the NIST AI RMF offers flexible, risk-based guidance you can adapt to your specific context. It focuses on trustworthiness characteristics and gives you a structured way to identify, assess and manage AI risks. Many organizations use it as a foundation for their AI governance programs.
Why use the NIST AI RMF?
The NIST AI RMF has become a go-to framework for AI governance because it balances thoroughness with flexibility. Whether you're a startup deploying your first AI model or an enterprise managing hundreds of AI systems, it scales to your needs.
- Flexible: Works for organizations of any size, sector or AI maturity level. You can implement it incrementally as your governance capabilities grow
- Risk-based: Lets you focus resources on the risks that matter most rather than checking boxes on a compliance list
- Widely recognized: Referenced by regulators, customers and partners globally. Increasingly required in government contracts and enterprise procurement
- Complementary: Aligns with regulations like the EU AI Act and ISO 42001. Implementing the AI RMF creates foundations for other standards
- Practical guidance: The accompanying Playbook gives specific suggested actions and examples for each subcategory
- Free and accessible: Publicly available with no licensing requirements. NIST continues to develop additional resources and profiles
- Stakeholder trust: Shows customers, investors and the public that you take AI risks seriously and have processes to manage them
Trustworthy AI characteristics
The NIST AI RMF is built around seven characteristics of trustworthy AI systems. These characteristics are interconnected and sometimes in tension with each other. Good AI governance means balancing them based on context, use case and stakeholder needs.
Safe
AI systems should not endanger human life, health, property or the environment. Safety considerations span the entire AI lifecycle from design through deployment and retirement.
Secure and resilient
AI systems should withstand attacks and recover from failures. This includes protecting against adversarial manipulation, data poisoning and model theft.
Explainable and interpretable
AI outputs should be understandable to relevant stakeholders. The level of explainability needed depends on the use case and who needs to understand the system.
Accountable and transparent
AI systems need clear responsibility for outcomes and openness about system capabilities and limitations. Organizations should be able to explain how and why AI decisions are made.
Fair with harmful bias managed
AI systems should treat individuals and groups equitably. This takes active effort to identify, measure and mitigate harmful biases throughout the AI lifecycle.
Privacy-enhanced
AI systems should protect individual privacy and data rights. This includes privacy considerations during data collection, model training and inference.
Valid and reliable
AI systems should perform consistently and as intended across different conditions and over time. Validation should match the deployment context.
These characteristics aren't independent. For example, increasing explainability might reduce model performance, and strong privacy protections could limit the data available for bias testing. The framework helps you navigate these tradeoffs thoughtfully.
Core functions
The NIST AI RMF is organized around four core functions that provide a structure for managing AI risks. These functions are not sequential; organizations should engage with all four continuously throughout the AI lifecycle.
Govern
The Govern function establishes and maintains a culture of risk management for AI across the organization. Unlike the other three functions which focus on specific AI systems, Govern is cross-cutting and informs how Map, Measure and Manage are performed.
- Policies and procedures: Define organizational AI policies, procedures and practices that are transparent and consistently implemented
- Accountability structures: Establish clear roles, responsibilities and lines of authority for AI risk management
- Legal and regulatory compliance: Ensure processes are in place to understand and comply with applicable AI regulations
- Risk culture: Promote a critical thinking and safety-first mindset throughout AI design, development and deployment
- Third-party management: Address risks from third-party AI software, data and services
- Workforce development: Provide training so personnel can perform AI risk management duties effectively
Map
The Map function frames the context in which AI systems operate and identifies potential impacts. Thorough context mapping matters because AI risks depend heavily on the specific use case, deployment environment and affected stakeholders; a sketch of a documented context follows the list below.
- Context establishment: Document intended purposes, users, deployment settings and applicable laws and norms
- System categorization: Define what tasks the AI system performs and how its outputs will be used
- Benefits and costs: Look at potential benefits and costs, including non-monetary impacts on individuals and communities
- Third-party components: Map risks from third-party data, models and AI services
- Impact characterization: Identify the likelihood and magnitude of potential impacts on individuals, groups and society
- Stakeholder engagement: Integrate feedback from affected communities and external stakeholders
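Here is that minimal sketch: what a documented system context could look like as structured data. The field names and values are illustrative assumptions, not a schema defined by NIST or VerifyWise.

```python
# Illustrative only: field names and values are assumptions,
# not a NIST or VerifyWise schema.
system_context = {
    "system_name": "loan-approval-assistant",  # hypothetical system
    "intended_purpose": "Rank loan applications for human review",
    "intended_users": ["credit analysts"],
    "deployment_setting": "internal web application",
    "applicable_laws_and_norms": ["ECOA", "internal model risk policy"],
    "affected_stakeholders": ["applicants", "analysts", "compliance team"],
    "third_party_components": ["vendor credit-scoring model"],
}
```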
Measure
The Measure function uses quantitative and qualitative methods to analyze, assess and monitor AI risks and trustworthiness characteristics. Your measurement approaches should connect to the deployment context you identified in the Map function; a concrete example of one such metric follows the list below.
- Metrics selection: Identify appropriate metrics for measuring AI risks based on the mapped context and trustworthiness characteristics
- Trustworthiness evaluation: Evaluate AI systems against all relevant trustworthiness characteristics (safety, fairness, privacy, etc.)
- Testing and validation: Document test sets, metrics and tools used during testing, evaluation, verification and validation (TEVV)
- Ongoing monitoring: Monitor deployed AI systems for performance, behavior and compliance with requirements
- Risk tracking: Track existing, unanticipated and emergent risks over time
- Feedback integration: Gather feedback from end users and affected communities about system performance
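The sketch below computes demographic parity difference, a common fairness metric, as one example of the quantitative methods this function calls for. It is a generic illustration, not a metric the framework prescribes, and the data is made up.

```python
def demographic_parity_difference(outcomes, groups, positive=1):
    """Absolute difference in positive-outcome rates between two groups.

    outcomes: model decisions (e.g. 1 = approved, 0 = denied)
    groups:   group label for each decision
    """
    labels = sorted(set(groups))
    assert len(labels) == 2, "this sketch compares exactly two groups"
    rates = []
    for label in labels:
        decisions = [o for o, g in zip(outcomes, groups) if g == label]
        rates.append(sum(1 for o in decisions if o == positive) / len(decisions))
    return abs(rates[0] - rates[1])

# Group A is approved 75% of the time, group B 25%: a 0.5 gap.
print(demographic_parity_difference(
    outcomes=[1, 1, 0, 1, 0, 0, 0, 1],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
))
```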
Manage
The Manage function allocates resources to address mapped and measured risks based on their priority and the organization's risk tolerance. It turns risk assessment into action; a small prioritization sketch follows the list below.
- Risk prioritization: Prioritize risks for treatment based on impact, likelihood and available resources
- Response strategies: Develop and document responses to high-priority risks (mitigate, transfer, avoid or accept)
- Residual risk documentation: Document negative residual risks that remain after treatment
- Benefit maximization: Plan strategies to maximize AI benefits while minimizing negative impacts
- Incident response: Establish processes for responding to, recovering from and learning from AI incidents
- Third-party risk management: Monitor and manage risks from third-party AI components and services
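A common way to operationalize risk prioritization is a simple likelihood-times-impact score, sketched below with hypothetical risks on 1-5 scales. Real programs often use richer scoring models, but the ranking logic is the same.

```python
# Hypothetical risks on 1-5 likelihood and impact scales (illustrative only).
risks = [
    {"name": "Harmful bias in outputs", "likelihood": 4, "impact": 5},
    {"name": "Model theft via API", "likelihood": 2, "impact": 4},
    {"name": "Performance drift after deployment", "likelihood": 3, "impact": 3},
]

# Rank by likelihood x impact, highest priority first.
for risk in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    print(f'{risk["likelihood"] * risk["impact"]:>2}  {risk["name"]}')
```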
How VerifyWise supports NIST AI RMF
VerifyWise gives you a structured place to implement and track your NIST AI RMF activities. The platform organizes the framework into functions, categories and subcategories so you can work through requirements systematically.
- Govern: Policy management, role-based access control and organizational structure documentation for AI governance
- Map: Model inventory with context documentation, impact assessment tools and stakeholder tracking
- Measure: Risk assessment tracking across all trustworthiness characteristics with metrics and evidence collection
- Manage: Risk mitigation planning, incident tracking and evidence collection to demonstrate risk treatment
NIST AI RMF assessment structure
VerifyWise organizes the NIST AI RMF into a three-level hierarchy that mirrors the framework's structure:
Functions
The four core functions (Govern, Map, Measure, Manage) that organize risk management activities.
Categories
Groups of related outcomes within each function. For example, GOVERN 1 focuses on policies and procedures.
Subcategories
Specific outcomes that represent actionable requirements. These are what you track and implement.
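In code terms, you can picture the hierarchy as nested records. The sketch below uses hypothetical names to illustrate the structure only; it is not VerifyWise's internal data model, and the GOVERN 1.1 text is paraphrased from the framework.

```python
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    identifier: str   # e.g. "GOVERN 1.1"
    description: str  # the actionable outcome you track and implement

@dataclass
class Category:
    identifier: str   # e.g. "GOVERN 1"
    name: str
    subcategories: list = field(default_factory=list)

@dataclass
class Function:
    name: str         # "GOVERN", "MAP", "MEASURE" or "MANAGE"
    categories: list = field(default_factory=list)

govern = Function("GOVERN", [
    Category("GOVERN 1", "Policies and procedures", [
        Subcategory("GOVERN 1.1",
                    "Legal and regulatory requirements involving AI are "
                    "understood, managed and documented."),
    ]),
])
```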
The functions screen
When you access NIST AI RMF in VerifyWise, you see the four core functions with their categories and subcategories. Each function shows progress indicators so you can quickly see where you stand:
- GOVERN (6 categories, 25 subcategories): establishes AI risk management culture and policies
- MAP (5 categories, 23 subcategories): frames context and scope of AI risks
- MEASURE (4 categories, 26 subcategories): evaluates AI risks and trustworthiness
- MANAGE (4 categories, 19 subcategories): addresses identified risks
Working with subcategories
Subcategories are the actionable units in the NIST AI RMF. Each subcategory represents a specific outcome your organization should achieve. Click on a subcategory to open its detail view where you can:
- Review the requirement: Read the subcategory description to understand what is expected
- Document implementation: Describe how your organization addresses this requirement
- Link evidence: Attach documents, policies, or records from your Evidence Hub
- Assign responsibility: Set owner, reviewer and approver for accountability
- Update status: Track progress through the implementation workflow
- Add tags: Organize subcategories with custom tags for filtering
- Link risks: Connect use case risks that this subcategory addresses
Subcategory detail fields
For each subcategory, VerifyWise tracks the following (a record sketch follows the list):
- Status: Current progress through the implementation workflow
- Implementation description: Your documentation of how the requirement is addressed
- Evidence links: Supporting documents, policies and artifacts
- Owner: Person responsible for implementation
- Reviewer: Person who reviews the implementation
- Approver: Person who gives final sign-off
- Due date: Target completion date
- Auditor feedback: Notes from internal or external auditors
- Tags: Custom labels for organization and filtering
- Linked risks: Use case risks associated with this subcategory
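Taken together, a subcategory's tracked state looks roughly like the record below. Field names are illustrative assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class SubcategoryRecord:
    # Illustrative field names; not VerifyWise's actual schema.
    status: str = "Not started"            # see the status workflow below
    implementation_description: str = ""
    evidence_links: list = field(default_factory=list)   # Evidence Hub items
    owner: Optional[str] = None
    reviewer: Optional[str] = None
    approver: Optional[str] = None
    due_date: Optional[date] = None
    auditor_feedback: Optional[str] = None
    tags: list = field(default_factory=list)
    linked_risks: list = field(default_factory=list)     # use case risk IDs
```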
Status workflow
NIST AI RMF subcategories follow a detailed status workflow that supports review and approval processes; a sketch of the implied transitions follows the list:
- Not started: Work hasn't begun on this subcategory
- Draft: Initial implementation documentation is being prepared
- In progress: Active work is underway to implement the requirement
- Awaiting review: Implementation is complete and ready for reviewer assessment
- Awaiting approval: Reviewer has approved, waiting for final approver sign-off
- Implemented: The subcategory has been fully addressed and approved
- Needs rework: Reviewer or approver has identified issues that need correction
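The sketch below models one plausible reading of those transitions; the exact transitions VerifyWise permits may differ.

```python
# One plausible reading of the workflow; actual allowed transitions may differ.
ALLOWED_TRANSITIONS = {
    "Not started": {"Draft", "In progress"},
    "Draft": {"In progress"},
    "In progress": {"Awaiting review"},
    "Awaiting review": {"Awaiting approval", "Needs rework"},
    "Awaiting approval": {"Implemented", "Needs rework"},
    "Needs rework": {"In progress"},
    "Implemented": set(),
}

def can_transition(current: str, target: str) -> bool:
    return target in ALLOWED_TRANSITIONS.get(current, set())

assert can_transition("Awaiting review", "Needs rework")
assert not can_transition("Not started", "Implemented")
```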
Tracking your progress
There are several ways to monitor your NIST AI RMF implementation in VerifyWise:
- Overall completion: Total subcategories implemented vs. total subcategories
- Progress by function: Separate progress tracking for Govern, Map, Measure and Manage
- Status breakdown: Distribution across all status values
- Assignment coverage: How many subcategories have owners assigned
- Overdue items: Subcategories past their due date
Use these metrics to spot bottlenecks, allocate resources and report progress to stakeholders. The sketch below shows how the overall and per-function numbers reduce to simple counts.
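A minimal sketch, assuming records carry just a function name and a status (a simplification of the record sketched earlier):

```python
# Assumes records shaped like {"function": "GOVERN", "status": "Implemented"}.
def completion_rates(records):
    totals, done = {}, {}
    for record in records:
        fn = record["function"]
        totals[fn] = totals.get(fn, 0) + 1
        if record["status"] == "Implemented":
            done[fn] = done.get(fn, 0) + 1
    return {fn: done.get(fn, 0) / totals[fn] for fn in totals}

records = [
    {"function": "GOVERN", "status": "Implemented"},
    {"function": "GOVERN", "status": "In progress"},
    {"function": "MAP", "status": "Implemented"},
]
print(completion_rates(records))  # {'GOVERN': 0.5, 'MAP': 1.0}
```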
Linking evidence
For each subcategory, you can link evidence to demonstrate how you meet the requirement:
- Open the subcategory detail view
- Navigate to the evidence section
- Select existing evidence from your Evidence Hub or upload new documents
- Add implementation notes explaining how the evidence supports compliance
Linking risks
Managing AI risks is the heart of the NIST AI RMF. You can link use case risks to subcategories to create traceability between your risk assessment and your control implementation. Linking a risk to a subcategory shows how your NIST AI RMF activities address that specific risk.
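Conceptually, the link is a many-to-many mapping between risks and subcategories, and filtering that mapping reveals coverage gaps. The sketch below illustrates the idea with hypothetical identifiers.

```python
# Hypothetical risk and subcategory identifiers, for illustration only.
risk_to_subcategories = {
    "RISK-7 (biased outputs)": ["MEASURE 2.11", "MANAGE 1.3"],
    "RISK-9 (model drift)": ["MEASURE 2.4"],
    "RISK-12 (vendor model failure)": [],
}

# Risks with no linked subcategory are gaps in control coverage.
uncovered = [risk for risk, subs in risk_to_subcategories.items() if not subs]
print("Uncovered risks:", uncovered)
```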
Frequently asked questions
Is the NIST AI RMF mandatory?
For most organizations, it's voluntary. That said, it shows up more and more in government contracts, procurement requirements and industry standards. Some federal agencies require AI RMF implementation for AI systems used in government contexts. Even when it's not required, implementing the framework signals AI governance maturity.
Where should we start with implementation?
Start with the Govern function since it establishes the organizational foundation for everything else. Then focus on the Map function for your highest-risk AI systems to understand context and potential impacts. You don't need to complete all subcategories before moving to Measure and Manage.
How does the AI RMF relate to the EU AI Act?
The NIST AI RMF and EU AI Act share common goals around trustworthy AI but differ in approach. The EU AI Act is a regulation with mandatory requirements, while the AI RMF is voluntary guidance. Organizations subject to the EU AI Act will find that implementing the AI RMF addresses many of the same concerns and can support EU AI Act compliance.
Should we use the AI RMF or ISO 42001?
They serve different but complementary purposes. ISO 42001 provides a certifiable management system standard, while the AI RMF offers practical risk management guidance. Many organizations implement the AI RMF as the operational framework within an ISO 42001-certified management system. You can use both together.
Do we need to implement all subcategories?
No. The AI RMF is designed to be flexible. Prioritize subcategories based on your specific AI systems, risk tolerance and resources. Start with the subcategories most relevant to your highest-risk AI systems and expand coverage over time.
What is the AI RMF Playbook?
The NIST AI RMF Playbook is a companion document with specific suggested actions for each subcategory. It includes examples, considerations and guidance to help you understand how to implement each requirement. The Playbook is free on the NIST website and gets updated regularly.