Making sense of AI rules: EU AI Act, NIST AI RMF, and ISO 42001

Artificial Intelligence (AI) is advancing quickly. It brings remarkable new possibilities along with hard questions, and it is appearing in more high-stakes domains and in our daily lives.

Because of this, we need to make sure AI is developed and used responsibly. That need has produced new regulations, frameworks, and standards to govern AI.

Three of the most important are the EU AI Act from Europe, the NIST AI Risk Management Framework (RMF) from the U.S., and the international standard ISO/IEC 42001:2023.

All three deal with managing AI and its risks, but they differ in purpose, scope, and mechanism. Understanding where they overlap, where they diverge, and how they can complement each other is essential for organizations trying to govern AI responsibly. This article compares the three.

Why We Need Rules for AI

Before examining each framework, it helps to understand why they exist. AI is unusual: it can learn, adapt, and make decisions in ways that are difficult to trace. That raises several real concerns:

  • Safety and reliability: Preventing AI systems from causing unintended harm or behaving unpredictably.
  • Fairness: Ensuring AI systems don’t reproduce or amplify bias already present in society.
  • Transparency and explainability: Helping people understand how AI reaches its decisions, especially consequential ones.
  • Accountability: Knowing who is responsible when AI systems cause problems.
  • Security: Protecting AI systems from attack and manipulation.
  • Trust: Giving users, regulators, and the public confidence in how AI is built and deployed.

These concerns underpin the EU AI Act, the NIST AI RMF, and ISO 42001, but each framework addresses them in its own way.

A Quick Look at the Three Frameworks

Here’s a short introduction to each:

  • EU AI Act: A comprehensive law from the European Union (EU) that aims to harmonize AI rules across all member states. It takes a risk-based approach: the greater the risk an AI system poses to health, safety, or fundamental rights, the stricter the requirements. It is binding legislation, with penalties (including substantial fines) for non-compliance. Its core goal is keeping AI used in the EU safe and respectful of fundamental rights.
  • NIST AI Risk Management Framework (RMF): Published by the U.S. National Institute of Standards and Technology (NIST), this is voluntary guidance, not law. It gives organizations a structured way to identify, measure, and manage AI risks across the full lifecycle. Designed to be flexible and adaptable to different contexts, it does not mandate specific outcomes; instead it offers good practices and a shared vocabulary for discussing AI risk.
  • ISO/IEC 42001:2023: An international standard from ISO and IEC, the bodies that develop such standards. It specifies the requirements for establishing, implementing, maintaining, and continually improving an AI Management System (AIMS) within an organization, following the same structure as other ISO management-system standards (such as ISO 9001 for quality). Adoption is voluntary, but organizations can pursue certification to demonstrate mature AI management.

Even at this level, the differences in legal status, focus, and mechanism are clear.

Looking Closer at Each Framework

The EU AI Act: The Big Rule-Setter

The EU AI Act is the most ambitious attempt yet to regulate AI broadly. Its tiered risk classification is central (a small illustrative sketch follows the list):

  • Unacceptable Risk: AI uses considered clear threats to people or their rights are banned outright, such as government social scoring or real-time remote facial identification in public spaces (with narrow exceptions for law enforcement).
  • High Risk: This tier carries most of the obligations. It covers AI used in critical areas such as transportation, hiring, education, essential services (like credit decisions), law enforcement, border control, and the courts. These systems must meet strict requirements before they can be placed on the market: sound risk management, high-quality data, thorough documentation, human oversight, and demonstrated accuracy, robustness, and security. Many also need a conformity assessment by an external body.
  • Limited Risk: Systems such as chatbots or deepfake generators carry transparency obligations: they must tell users they are interacting with AI or viewing AI-generated content.
  • Minimal Risk: This covers applications like AI in video games or email spam filters. The Act adds no new obligations here, though it encourages voluntary codes of conduct.
The EU AI Act stands out as binding law in the EU, with a product-safety and fundamental-rights focus and very specific obligations for high-risk AI. Its duties fall mainly on companies placing AI on the EU market. The text aims for clarity, but interpreting the requirements precisely and enforcing them consistently will be a major undertaking.

NIST AI RMF: The Helpful Guide for Risks

Unlike the EU law, the NIST AI RMF is voluntary guidance. Its central goal is to give organizations the tools to manage AI risks well and build trustworthy AI, which NIST characterizes as valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.

The RMF organizes the work into four core functions (sketched in code after the list):

  • Govern: Cultivating a culture of AI risk awareness across the organization, including internal policies, roles, and training.
  • Map: Establishing context, understanding the specific AI system, and identifying potential risks and benefits. What are we building? Who might it affect? What could go wrong?
  • Measure: Assessing, analyzing, and tracking the identified risks, using quantitative or qualitative methods to gauge how likely each risk is and how severe it could be.
  • Manage: Prioritizing the most significant risks, deciding how to treat them (avoid, mitigate, or accept), and acting on those decisions.
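As a rough illustration of how these functions might translate into an internal tool, here is a hypothetical sketch of a tiny risk register that carries one system's risks through Map, Measure, and Manage. The data model and the 1-5 scoring scale are invented for illustration; the RMF itself prescribes no particular schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIRisk:
    description: str
    likelihood: int = 0           # Measure: 1 (rare) .. 5 (almost certain)
    impact: int = 0               # Measure: 1 (negligible) .. 5 (severe)
    treatment: str = "undecided"  # Manage: avoid / mitigate / accept

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

@dataclass
class RiskRegister:
    system_name: str                       # Map: which AI system, in what context
    risks: list[AIRisk] = field(default_factory=list)

    def top_risks(self, n: int = 3) -> list[AIRisk]:
        # Manage: surface the highest-scoring risks for treatment decisions
        return sorted(self.risks, key=lambda r: r.score, reverse=True)[:n]

register = RiskRegister("resume-screening model")
register.risks.append(AIRisk("biased ranking of candidates", 4, 5, "mitigate"))
register.risks.append(AIRisk("PII leakage in logs", 2, 4, "mitigate"))
for risk in register.top_risks():
    print(risk.description, risk.score, risk.treatment)
```

The Govern function would sit around a tool like this: deciding who owns the register, who reviews it, and how often.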

The NIST AI RMF emphasizes flexibility, context, and continuous improvement. It offers a practical way to reason about risk across an AI system’s entire lifecycle and can be applied by organizations of any size. Although not law, adopting it can help organizations meet legal requirements (such as those in the EU AI Act) and demonstrate responsible AI practice to others. It is used around the world because NIST is widely respected and the framework is genuinely useful.

ISO/IEC 42001: The Plan for Your Organization

ISO/IEC 42001 is about establishing a formal AI Management System (AIMS) within an organization. It shares the same basic structure as other widely adopted ISO standards (such as ISO 9001 for quality and ISO/IEC 27001 for information security), which makes it easier to adopt alongside them.

The standard requires organizations to take a systematic approach to managing AI, including:

  • Understanding the organization’s context and what interested parties need regarding AI.
  • Establishing an AI policy and measurable objectives.
  • Assigning clear roles and authority for managing AI.
  • Implementing processes to assess and treat AI risks and opportunities.
  • Managing the AI system lifecycle with appropriate controls.
  • Ensuring adequate resources and competent people.
  • Monitoring, measuring, analyzing, and evaluating how well the AIMS performs.
  • Continually improving the system.

A central element is Annex A, which lists suggested AI controls and control objectives. These cover areas such as data management, transparency toward affected people, human oversight, robustness and security, and AI system lifecycle management. Organizations select the controls that fit their context and risks (a hypothetical sketch of such a selection follows), and Annex B gives guidance on implementing them.
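For example, an organization might track its control selection in a simple machine-readable record. The control areas, names, and justifications below are hypothetical placeholders; real control identifiers and wording come from Annex A of the published standard:

```python
# A minimal, hypothetical record of selected AIMS controls.
# Control names here are illustrative, not the standard's actual identifiers.
aims_controls = [
    {"area": "data management", "control": "document data provenance",
     "applicable": True, "justification": "training data sourced externally"},
    {"area": "transparency", "control": "notify users of AI use",
     "applicable": True, "justification": "customer-facing chatbot"},
    {"area": "human oversight", "control": "human review of decisions",
     "applicable": True, "justification": "high-impact outputs"},
    {"area": "lifecycle", "control": "decommissioning procedure",
     "applicable": False, "justification": "system still in pilot"},
]

for c in aims_controls:
    status = "selected" if c["applicable"] else "excluded"
    print(f'{c["area"]}: {c["control"]} -> {status} ({c["justification"]})')
```

Keeping the justification alongside each choice mirrors the standard's expectation that control selection is deliberate and documented, not ad hoc.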

ISO 42001 is fundamentally about organizational capability and sound processes. A key reason organizations adopt it is certification: an accredited audit can show customers, partners, and regulators that the organization has solid, consistent ways to manage AI, with a clear blueprint applied across the business.

What’s Similar About Them?

Despite their differences, the three frameworks share some core principles:

  • Risk management is central: All three treat identifying, assessing, and treating AI risks as fundamental.
  • Whole-lifecycle view: They agree that AI governance is not a one-time exercise; risks must be considered from initial design through retirement.
  • Data matters: All three stress the importance of data quality, sound data handling, and reducing bias in data.
  • Human oversight: They agree that human oversight of AI, especially for consequential tasks, is necessary for safety and control.
  • Transparency and documentation: Being clear about what AI is doing, from user notices to detailed records, is seen as key for trust and accountability.
  • Security and robustness: Keeping AI systems safe from attack and working correctly in varied conditions is a shared goal.

This overlap suggests an emerging global consensus on the baseline requirements for building and using AI well.

Where They Are Different

The differences matter just as much when deciding how to use each one:

  • Law vs. guidance: The EU AI Act is binding law in the EU; the NIST AI RMF and ISO 42001 are voluntary.

Main focus:

  • EU AI Act: The AI system itself (as a product or service) being safe and rights-respecting in the EU.
  • NIST AI RMF: The practical processes and methods for managing AI risk.
  • ISO 42001: The organization’s management system for governing AI internally.

Type:

  • EU AI Act: A broad law with specific obligations scaled by risk level.
  • NIST AI RMF: A guide and toolkit of risk management practices.
  • ISO 42001: A management-system standard that organizations can be certified against.

Main reason to adopt:

  • EU AI Act: Legal compliance in the EU market.
  • NIST AI RMF: Building more trustworthy AI and improving risk practice.
  • ISO 42001: Demonstrating sound AI management through certification and standardized processes.

How prescriptive they are:

  • EU AI Act: Very specific about obligations for high-risk systems.
  • NIST AI RMF: Deliberately flexible and context-dependent, with no strict rules.
  • ISO 42001: Specific about the management-system elements, but flexible on which controls an organization applies based on risk.

Where they apply:

  • EU AI Act: Directly in the EU, though it affects providers worldwide that serve the EU market.
  • NIST AI RMF: Developed in the U.S., but used worldwide as good practice.
  • ISO 42001: An international standard intended for use anywhere.

These differences mean the three frameworks play distinct but complementary roles in AI governance.

How They Can Work Together

Do organizations have to choose just one? Not usually. The frameworks often work well together, and a large company operating worldwide might draw on all three.

Consider a company building an AI diagnostic tool for clinicians that it wants to sell in Europe and elsewhere:

It must comply with the EU AI Act to sell in Europe. The Act specifies the requirements the system must meet (such as risk management, data quality, and robustness).

To work out how to identify, assess, and treat the risks of building and operating that tool, it might apply the NIST AI RMF, which supplies practical methods and a vocabulary for trustworthiness.

To apply those risk practices and other requirements (accountability, policies, training, audits) consistently across the whole company and integrate them with existing business processes, it might establish an AI Management System based on ISO 42001. Certification could then help show regulators and customers that sound processes are in place, which in turn can support evidence of conformity with parts of the EU AI Act.

In this arrangement, NIST supplies the risk “how-to,” ISO provides the company-wide structure and proof, and both support compliance with the legal requirements of the EU AI Act.
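One practical way to operationalize this layering is an internal crosswalk that maps each legal obligation to the framework activity and management-system element that addresses it. The sketch below is hypothetical: the obligation names are paraphrased rather than official citations, and the pairings are illustrative, not an endorsed mapping:

```python
# Hypothetical crosswalk: EU AI Act obligation -> (NIST AI RMF function, ISO 42001 AIMS element)
# Obligation names are paraphrased for illustration, not official article references.
crosswalk = {
    "risk management system":   ("Map / Measure / Manage", "risk assessment and treatment process"),
    "data and data governance": ("Map / Measure",          "data management controls"),
    "human oversight":          ("Govern / Manage",        "roles, training, and oversight procedures"),
    "record keeping":           ("Measure",                "documented information requirements"),
}

for obligation, (nist_function, aims_element) in crosswalk.items():
    print(f"{obligation}: NIST -> {nist_function}; ISO 42001 -> {aims_element}")
```

Even a simple table like this helps teams avoid duplicating work, since one well-designed process can satisfy all three frameworks at once.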

Challenges on the Road Ahead

Adopting any of these frameworks takes real effort, time, and skilled people.

  • EU AI Act: Interpreting the requirements precisely, enforcing them consistently across the EU, and keeping pace with fast-moving AI will all be difficult, and the compliance burden may fall hardest on smaller companies.
  • NIST AI RMF: Because it is voluntary, it only works where organizations genuinely commit to it and resource it, and measuring some aspects of trustworthiness (such as fairness) remains hard.
  • ISO 42001: As with any standard, there is a risk that organizations simply check boxes without real change; embedding the AIMS into how the company actually works takes sustained effort.

AI regulation also varies across jurisdictions, which complicates matters for companies operating in many countries.

Moving Toward Responsible AI

The EU AI Act, the NIST AI RMF, and ISO 42001 are important steps toward steering AI in a responsible direction. They come from different places (legislation, risk guidance, and international standardization) but share the goals of safety, fairness, transparency, and accountability.

Understanding their distinct purposes, scopes, and requirements is key. The EU AI Act sets legally binding rules for the EU market. NIST offers a practical, flexible toolkit for managing AI risks. ISO 42001 provides a certifiable blueprint for managing AI well internally.

They need not be used in isolation; together they can form parts of a broader AI governance program. Organizations can use NIST’s practical guidance and ISO’s structure to help meet legal requirements like the EU AI Act and, more generally, to build trustworthy AI.

Governing AI well requires ongoing attention, learning, and a commitment to sound values. Complicated as the landscape is, these frameworks give us essential starting points for ensuring AI serves people safely, fairly, and responsibly.

 

VerifyWise is an open-source AI governance platform designed to help businesses use the power of AI safely and responsibly. Our platform ensures compliance and robust AI management without compromising on security.
