Sep 7, 2024
7 min read

AI governance: frameworks and best practices

Explore essential AI governance frameworks and best practices. Learn about risk management, ethical considerations, and developer responsibilities for responsible AI.

Through our work building VerifyWise, an open-source governance platform, we've studied how organizations translate AI principles into day-to-day practice. AI governance covers the structures, processes, and policies organizations use to develop and deploy AI responsibly. It spans ethics, risk management, and regulatory compliance.

[Figure: Three governance framework paths (compliance, risk management, and ethics), showing how to match your situation to the right governance framework]

Start with your situation, not the frameworks

Most organizations approach AI governance by reading about every available framework and trying to figure out which one to adopt. That's backwards. Start with four questions about your own situation:

  • Are you operating in the EU? The EU AI Act is legally binding. It's not optional, and it's not a suggestion. If you deploy AI systems that affect EU residents, regulatory compliance is your starting point.
  • Are you in a regulated industry? Healthcare, finance, and government procurement each have sector-specific AI requirements that sit on top of general frameworks. Your industry regulator's guidance matters more than any generic best-practices document.
  • Are you a global organization? You'll likely need to satisfy multiple frameworks simultaneously. The good news is that they overlap significantly, so a well-designed governance program can cover several at once.
  • Are you just getting started? If you have no formal AI governance today, NIST's AI Risk Management Framework is the most practical starting point. It's structured, actionable, and doesn't assume you already have a governance team in place.

Your answers determine which path below applies to you.

Five frameworks, three practical paths

Rather than treating all five major frameworks as equal options, it helps to group them by what they're actually designed to do. Most organizations fall into one of three situations, and each situation has a natural starting framework.

The regulatory compliance path

If legal requirements are driving your governance effort, these two frameworks define the rules you need to follow.

European Union's AI Act. The most comprehensive AI regulation in the world. It categorizes AI systems into risk levels (unacceptable, high, limited, and minimal) and assigns obligations accordingly. High-risk systems face strict requirements around transparency, human oversight, data quality, and documentation. If your AI systems touch EU residents, this framework isn't a choice; it's a legal obligation.

Learn more: https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

Singapore's AI governance framework. Developed by IMDA and PDPC, this framework takes a similarly structured approach but focuses on practical deployment guidance. It covers governance structures, operations management, explainability, and fairness, and includes a self-assessment guide that makes compliance more concrete. Organizations operating in Asia-Pacific often use this alongside the EU AI Act to cover both regulatory environments.

Learn more: https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework

The risk management path

If you want to build operational governance (practical processes that teams actually follow), this is where to start.

NIST AI Risk Management Framework (AI RMF). Built around four core functions: Govern, Map, Measure, and Manage. What makes NIST practical is its companion playbook, which translates abstract principles into specific activities. It helps you identify where AI risks live in your organization, measure them consistently, and build management processes that scale. Most organizations we've worked with find NIST the easiest framework to actually operationalize, because it was designed for implementation, not just policy statements.

Learn more: https://www.nist.gov/itl/ai-risk-management-framework

The principles and ethics path

If your organization is building a governance culture from the ground up and needs shared language and values before jumping into process, these frameworks provide the foundation.

OECD AI Principles. Adopted by 42 countries, these principles promote trustworthy AI that respects human rights and democratic values. They include five principles for responsible AI stewardship and five recommendations for national and international policy. The OECD framework works well as an organization-wide north star, giving leadership and technical teams a common vocabulary for discussing AI responsibility.

Learn more: https://oecd.ai/en/ai-principles

IEEE Ethically Aligned Design. Where OECD stays at the principle level, IEEE goes deeper into design-level guidance. It covers transparency, accountability, and privacy with concrete recommendations that engineers can apply during system design. If your team needs to understand what ethical AI looks like in practice, not just in policy documents, IEEE bridges that gap.

Learn more: https://ethicsinaction.ieee.org/

What developers actually need to do

Frameworks set direction, but developers determine whether governance works in practice. Here's what that looks like day to day:

  • Run bias audits before deployment. Don't wait for a governance review to flag fairness issues. Test your models against demographic subgroups during development, not after launch.
  • Document training data sources. Record where your data came from, what transformations you applied, what's included, and what's excluded. When regulators or auditors ask questions, "we used public data" is not an adequate answer.
  • Build override mechanisms for automated decisions. Any AI system that affects people (hiring, lending, content moderation) needs a way for humans to intervene. Design the override path before you design the model.
  • Make outputs explainable at the right level. A data scientist needs feature importance scores. A loan applicant needs a plain-language reason for denial. Build explanations for your actual audiences, not just technical ones.
  • Treat privacy as a design constraint, not a compliance checkbox. Minimize data collection, encrypt sensitive inputs, implement access controls, and build data deletion capabilities from the start.
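
The bias-audit point above can be sketched in a few lines. This is a minimal, hypothetical example, not a complete fairness evaluation: it computes the positive-prediction rate per demographic subgroup and the ratio of the lowest to the highest rate, a common screening heuristic where values below 0.8 are often flagged for review. The group labels, data, and threshold are illustrative assumptions.

```python
from collections import defaultdict

def subgroup_rates(predictions, groups):
    """Per-group positive-prediction rate (prediction 1 = positive outcome)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest subgroup rate; values below 0.8 often trigger review."""
    return min(rates.values()) / max(rates.values())

# Toy model outputs for two demographic groups "a" and "b"
preds  = [1, 1, 0, 1, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = subgroup_rates(preds, groups)
print(rates)                    # {'a': 0.8, 'b': 0.4}
print(disparate_impact(rates))  # 0.5 -> below 0.8, flag for review
```

Running a check like this in development, per the point above, surfaces fairness issues before a governance review ever sees the model.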

The implementation trap

The most common failure mode we see isn't choosing the wrong framework. It's choosing the right one and never operationalizing it.

Here's how it usually plays out: an organization selects a framework, writes policies based on it, and publishes those policies internally. Then nothing changes. Development teams keep building the way they always have, because nobody translated the policies into specific processes, tools, or checkpoints that fit into existing workflows.

The second most common failure is trying to do everything at once. An organization decides it needs to comply with the EU AI Act, implement NIST's risk management framework, and align with OECD principles simultaneously. The governance team spends months building a comprehensive program, and by the time they're ready to roll it out, the AI field has shifted and the program feels outdated.

What works instead:

  • Start with one framework. Pick the one that addresses your most pressing need, whether that's regulatory compliance, risk management, or building shared principles. Get it working before adding anything else.
  • Embed governance into existing workflows. Don't create a separate governance process that runs parallel to development. Add checkpoints, reviews, and documentation requirements to the workflows teams already use.
  • Assign clear ownership. Every governance requirement needs a person or team responsible for it. "The organization" is not an owner. Name the people who will review bias audits, approve high-risk deployments, and update documentation.
  • Review and adjust quarterly. AI regulations evolve, your organization's AI usage grows, and new risks emerge. A governance framework that never changes is a governance framework that stops working.
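
Two of the points above, embedding governance in existing workflows and assigning clear ownership, can be combined in a simple pre-deployment gate. This is a hypothetical sketch: the artifact names and owner teams are invented for illustration, and a real gate would verify artifact contents, not just their presence.

```python
# Hypothetical governance gate run as a step in an existing deployment
# pipeline. Each required artifact is mapped to a named owner, so a
# failure message tells the team exactly who to contact.
REQUIRED_ARTIFACTS = {
    "bias_audit_report": "ml-fairness-team",
    "data_provenance_doc": "data-engineering",
    "human_override_plan": "product-ops",
}

def governance_gate(submitted: dict) -> list[str]:
    """Return blocking issues; deployment proceeds only if the list is empty."""
    issues = []
    for artifact, owner in REQUIRED_ARTIFACTS.items():
        if not submitted.get(artifact):
            issues.append(f"missing {artifact} (owner: {owner})")
    return issues

# A submission with only one of the three required artifacts
issues = governance_gate({"bias_audit_report": "reports/audit.pdf"})
for issue in issues:
    print(issue)
```

Because the gate lives inside the deployment pipeline rather than in a parallel process, teams hit it as part of the workflow they already use, which is the point of the advice above.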

What to do next

Pick a framework that fits your regulatory environment. Adapt it to your organization's context. Build the governance components you need. Monitor outcomes and adjust. If unauthorized AI usage is a concern, shadow AI detection can help you discover and govern unapproved tools across your organization.

The frameworks above provide starting points. Execution determines results.

About the VerifyWise team

VerifyWise builds open-source AI governance software used by organizations to manage risk, compliance, and oversight across their AI portfolios. Our editorial team draws on hands-on experience implementing governance workflows for regulated industries and fast-scaling AI teams.

Learn more about VerifyWise →

