AI Governance Frameworks

NIST AI Risk Management Framework (RMF)

The NIST AI Risk Management Framework, released in January 2023, is a flexible, outcome-based framework for developing, deploying, and using trustworthy AI. It helps teams identify and manage risks specific to AI systems, covering not only technical issues like model bias or data drift but also risks to civil rights, safety, transparency, and accountability.

Where many frameworks focus strictly on compliance, the AI RMF is built to bring different roles into the conversation: engineers, legal teams, executives, and policymakers. The goal is a shared vocabulary for evaluating AI risks and benefits.

The official NIST AI RMF v1.0 is available here: https://www.nist.gov/itl/ai-risk-management-framework

The four core functions

The AI RMF is organized around four functions that apply across the AI system lifecycle.

Govern

Govern sets up the structures, policies, and accountability needed for AI risk management. It defines who makes AI-related decisions, what level of risk the organization will accept, and how AI governance fits into enterprise risk management more broadly.

Without clear governance, the other three functions lack institutional backing. In practice, teams that establish governance before deploying measurement tools tend to see more durable results.
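As a rough illustration only (the field names, roles, and review cadence below are assumptions for the sketch, not framework requirements), a Govern-stage decision record might be captured as structured data like this:

# Minimal sketch of a Govern-function record as a Python data structure.
# All field names and values are illustrative assumptions, not terminology
# mandated by the AI RMF.
from dataclasses import dataclass, field

@dataclass
class GovernancePolicy:
    decision_owner: str                  # who approves AI deployments
    risk_appetite: str                   # e.g. "low", "moderate", "high"
    escalation_path: list[str] = field(default_factory=list)
    review_cadence_days: int = 90        # how often the policy is revisited

policy = GovernancePolicy(
    decision_owner="Chief Risk Officer",
    risk_appetite="moderate",
    escalation_path=["AI review board", "executive risk committee"],
)
print(policy)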

Map

Map is about understanding context: the goals, stakeholders, and intended uses of each AI system. The work includes documenting use cases, classifying systems by risk level, identifying who is affected, and examining the broader environment the system operates in.

Good mapping goes beyond the technical system itself. It asks who is affected by the outputs, what assumptions are baked into the design, and what failure modes are plausible in actual deployment.
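For illustration, a Map-stage inventory entry could be recorded as a simple data structure, one record per AI system. The fields and risk-tier labels below are assumptions chosen for the sketch, not terms defined by the framework:

# Illustrative Map-function entry capturing the context the framework asks
# teams to document. Field names and the risk-tier scale are assumptions.
from dataclasses import dataclass

@dataclass
class SystemMapEntry:
    system_name: str
    intended_use: str
    affected_groups: list[str]
    known_assumptions: list[str]
    plausible_failure_modes: list[str]
    risk_tier: str  # e.g. "minimal", "limited", "high"

entry = SystemMapEntry(
    system_name="resume-screening-model",
    intended_use="rank job applications for recruiter review",
    affected_groups=["job applicants", "recruiters"],
    known_assumptions=["training data reflects past hiring decisions"],
    plausible_failure_modes=["bias against under-represented applicants"],
    risk_tier="high",
)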

Measure

Measure covers the analysis of a system's capabilities, limitations, and risk factors through both quantitative and qualitative methods. Performance benchmarking, bias evaluation, robustness testing, red teaming, and explainability assessments all fall here.

Measurement should not stop at pre-deployment testing. Post-deployment monitoring matters just as much for catching model drift, behavioral shifts, and risks that only surface in production.
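A minimal sketch of two Measure-stage checks follows: a demographic parity gap for bias evaluation and a mean-shift signal for post-deployment drift. The metrics, data, and implied thresholds are illustrative assumptions, not tests prescribed by NIST:

# Two simple Measure-function checks: a demographic parity gap (bias
# evaluation) and a mean-shift check for post-deployment drift.
from statistics import mean

def demographic_parity_gap(preds: list[int], groups: list[str]) -> float:
    """Largest difference in positive-prediction rate across groups."""
    rates = {}
    for g in set(groups):
        selected = [p for p, grp in zip(preds, groups) if grp == g]
        rates[g] = mean(selected)
    return max(rates.values()) - min(rates.values())

def mean_shift(baseline: list[float], live: list[float]) -> float:
    """Simple drift signal: shift in mean model score since deployment."""
    return abs(mean(live) - mean(baseline))

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))    # 0.5 -> flag for review
print(mean_shift([0.4, 0.5, 0.6], [0.7, 0.8, 0.75]))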

Manage

Manage is where identified risks get acted on: implementing safeguards, making residual risk decisions, setting up incident response, and maintaining monitoring systems.
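As one possible shape for this work (the schema and treatment options below are assumptions, not an RMF-defined format), a risk register entry might record the safeguards, the treatment decision, and who accepted the residual risk:

# Illustrative Manage-function sketch: a risk register entry. The enum
# values and fields are assumptions, not an AI RMF-defined schema.
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    MITIGATE = "mitigate"
    ACCEPT = "accept"
    TRANSFER = "transfer"
    AVOID = "avoid"

@dataclass
class RiskRegisterEntry:
    risk: str
    safeguards: list[str]
    treatment: Treatment
    residual_risk_accepted_by: str
    monitoring: str

entry = RiskRegisterEntry(
    risk="model drift degrades loan-approval accuracy",
    safeguards=["monthly accuracy review", "automated drift alert"],
    treatment=Treatment.MITIGATE,
    residual_risk_accepted_by="AI review board",
    monitoring="dashboard alert when accuracy drops below target",
)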

The four functions are not sequential. They run in parallel and should be revisited as the AI system and its operating context change.

Profiles and use cases

A Profile describes how a team implements the core functions to match its risk tolerance and business needs. Profiles let teams tailor the framework to their sector, use cases, and maturity level. A company might maintain one Profile for internal productivity tools and another for customer-facing, high-impact applications.

The NIST AI RMF Playbook offers voluntary suggested actions for each function as a non-prescriptive companion. Teams can pick and adapt actions based on their context and risk appetite.
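A hypothetical pair of Profiles, sketched as plain configuration data, shows how the same framework can be tuned to two contexts. Every key, tier label, action, and cadence below is an illustrative assumption rather than content from the framework or Playbook:

# Two illustrative organizational Profiles expressed as Python dicts.
profiles = {
    "internal-productivity-tools": {
        "risk_tier": "limited",
        "required_actions": ["map use case", "basic pre-release testing"],
        "review_cadence_days": 180,
    },
    "customer-facing-high-impact": {
        "risk_tier": "high",
        "required_actions": [
            "map use case and affected groups",
            "bias and robustness evaluation",
            "red teaming before release",
            "post-deployment drift monitoring",
            "incident response plan",
        ],
        "review_cadence_days": 30,
    },
}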

Trustworthy AI characteristics

The framework identifies seven characteristics that AI systems should exhibit:

  • Valid and reliable: Performs consistently and accurately for its intended purpose.
  • Safe: Does not create unreasonable risks to human safety.
  • Secure and resilient: Withstands adversarial attacks and recovers from failures.
  • Accountable and transparent: Responsibilities are clearly assigned, and information about the system and its outputs is available to those who need it.
  • Explainable and interpretable: Stakeholders can understand how the system works and why it produces particular outputs.
  • Privacy-enhanced: Protects personal information and respects data rights.
  • Fair, with harmful bias managed: Avoids unfairly discriminatory outcomes.

Risk management activities across all four functions aim to achieve and sustain these characteristics throughout the system lifecycle.

The Generative AI Profile (AI 600-1)

In July 2024, NIST released AI 600-1, a companion document for risks specific to generative AI. It is the most significant addition since the framework's initial publication.

The Generative AI Profile identifies 12 risk categories unique to or worsened by generative AI: harmful content generation, hallucinations and confabulations, data leakage, disinformation, copyright and intellectual property violations, and cybersecurity misuse, among others.

The document includes over 200 suggested actions organized around these risk categories, all mapped back to the four core RMF functions. Teams already using the base framework can layer generative AI risk management on top of their existing practices.
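The sketch below illustrates that layering: each entry ties a Generative AI Profile risk category (names paraphrased from AI 600-1) to the core function it most naturally falls under, plus an example control. The controls are illustrative assumptions, not actions quoted from the document:

# Illustrative overlay mapping GenAI risk categories to RMF functions.
genai_overlay = [
    {"risk": "confabulation (hallucinations)",
     "function": "Measure",
     "example_control": "groundedness evaluation on a held-out question set"},
    {"risk": "information integrity",
     "function": "Manage",
     "example_control": "provenance metadata or watermarking on generated content"},
    {"risk": "intellectual property",
     "function": "Map",
     "example_control": "document training-data licensing and output-reuse policy"},
]

for item in genai_overlay:
    print(f'{item["function"]}: {item["risk"]} -> {item["example_control"]}')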

Four priority areas emerged during development: governance structures for GenAI, content provenance (tracking and watermarking AI-generated content), pre-deployment testing, and incident disclosure. Content provenance received particular emphasis, reflecting growing concern about AI-generated misinformation and deepfakes.

Alignment with other frameworks

The AI RMF is designed to work alongside the NIST Privacy Framework and the Cybersecurity Framework (CSF 2.0), so AI risks can be treated as part of broader enterprise risk management.

Crosswalk with the EU AI Act

NIST maintains official crosswalk documents mapping the AI RMF to EU AI Act requirements. The three major frameworks (NIST RMF, ISO 42001, and the EU AI Act) share transparency, bias mitigation, and output monitoring as common themes. The EU AI Act carries binding legal obligations, particularly for high-risk systems and foundation models, while the NIST RMF is voluntary but maps directly to those obligations.

Relationship with ISO/IEC 42001

ISO 42001 provides a certifiable management system structure. The NIST AI RMF supplies the risk reasoning methodology. Many teams use both: ISO 42001 as the quality management backbone and the NIST AI RMF as the risk analysis lens. Guidance published in 2025 by the Cloud Security Alliance shows how combining both frameworks can support EU AI Act compliance.

Integration with enterprise risk

NIST released a companion document (NIST IR 8357) to help teams fold AI risk into enterprise risk governance. The idea is to align AI risk assessments with audit, legal, compliance, and board-level oversight, so AI risk management sits within the same structures used for every other risk domain rather than operating as a separate technical exercise.

Why it matters

AI introduces risks that differ fundamentally from traditional software. AI systems can behave unpredictably, evolve over time, and affect individuals or communities in ways that are hard to anticipate. They can amplify discrimination, reinforce inequality, or erode trust, particularly when deployed without safeguards.

The NIST AI RMF gives teams a structured way to:

  • Move from ad hoc AI decisions to documented, accountable processes
  • Build risk management into the full AI lifecycle, not just pre-deployment review
  • Show alignment with emerging global principles like the OECD AI Principles, the EU AI Act, and the G7 Hiroshima AI Process
  • Earn trust with customers, regulators, and internal stakeholders through transparency and proactive risk practices

Adoption patterns

The AI RMF is voluntary, but its reach is growing quickly. Major industry players, federal agencies, and AI assurance providers are aligning internal practices with it, and it has become a reference point in U.S. and international policy discussions.

U.S. government agencies are formalizing adoption through OMB directives and agency AI use case inventories. Enterprises most often start with the Govern function to establish accountability before rolling out measurement tools. Because adoption is voluntary, depth varies: some take on the full playbook while others use only the taxonomy and risk classification elements.

NIST AI RMF FAQ

What types of organizations is the AI RMF designed for?

Any organization that develops, deploys, or uses AI. Whether you are a startup fine-tuning LLM prompts or a large enterprise deploying high-stakes AI in healthcare, the framework adapts to different scales and contexts.

Is the AI RMF a legal requirement?

No. It is voluntary. But it helps teams prepare for regulatory requirements already taking shape in the EU, Canada, and several U.S. states. It also reduces litigation risk, satisfies procurement requirements, and meets stakeholder expectations. Many teams adopt it as a de facto standard even without a legal mandate.

Does the AI RMF replace existing AI governance practices?

No, it complements them. If you already follow ISO/IEC 42001, the EU AI Act, or internal AI ethics principles, the AI RMF helps you put those efforts into practice and benchmark them. It works as a translation layer that connects different governance frameworks.

What is the Generative AI Profile and do I need it?

The Generative AI Profile (AI 600-1) extends the base framework with over 200 actions addressing 12 GenAI-specific risk categories. If your team develops, deploys, or uses generative AI, the Profile provides targeted guidance for managing risks like hallucinations, harmful content, data leakage, and IP violations that the base framework does not fully cover.

What is a Profile in the RMF?

A Profile is a customized view of the framework reflecting your context, priorities, and risk tolerance. You might have one Profile for internal tools and another for customer-facing, high-impact applications. Profiles allow flexible adaptation without sacrificing the consistency of the underlying structure.

How does the AI RMF connect to broader enterprise risk?

The companion document NIST IR 8357 helps teams integrate AI risk into enterprise risk governance, aligning AI risk assessments with audit, legal, compliance, and board-level oversight so that AI risk management operates within the same structures used for all other risk domains.
