What are some of the high-risk AI systems under the EU AI Act?
The rapid development of artificial intelligence has brought transformative possibilities, but it also raises critical questions about safety, fairness, and accountability. To address these concerns, the EU AI Act categorizes certain AI systems as “high-risk” due to their significant impact on people’s rights, safety, and well-being.
Understanding what makes an AI system high-risk is crucial for businesses, regulators, and society as a whole.
In this blog, we’ll explore the key categories of high-risk AI systems defined by the EU AI Act, including:
- Remote biometric identification systems,
- AI in critical infrastructure,
- Education, employment, and essential services,
- Law enforcement, migration, and border management, and
- Administration of justice and democratic processes.
We’ll also discuss why these systems are considered high-risk and what it means for developers, users, and society as we move toward responsible AI governance.
Under the EU AI Act, the systems listed above are considered high-risk if they meet the specific criteria set out in Annex III of the Act. Here’s an analysis of each:
1. Remote biometric identification systems
Yes, these are high-risk AI systems.
- Specifically mentioned in Annex III (1).
- Includes AI used for identifying people remotely in publicly accessible spaces in real-time or post-event scenarios.
2. Critical infrastructure
Yes, these can be high-risk AI systems.
- Included in Annex III (2), if failure or misuse of the AI system poses a significant risk to safety or the environment.
- Covers areas like transport, energy, and water supply.
3. Education and vocational training
Yes, these can be high-risk AI systems if certain conditions apply.
- Mentioned in Annex III (3).
- AI systems that influence access to education or determine outcomes for exams, tests, or evaluations are included.
4. Employment, workers management, and access to self-employment
Yes, they are high-risk AI systems if certain criteria are met.
- Mentioned in Annex III (4).
- Includes AI systems used in recruitment, promotion, or task assignment, where decisions significantly impact individuals’ lives.
5. Access to and enjoyment of essential private services and essential public services and benefits
Yes, they are high-risk AI systems under Annex III (5).
- Examples: AI systems determining access to social benefits, healthcare, financial services, etc.
6. Law enforcement
Yes, they are high-risk AI systems under Annex III (6).
- Includes AI systems used for crime prediction, risk assessments, profiling, evidence analysis, or criminal investigations.
7. Migration, asylum, and border control management
Yes, they are high-risk AI systems under Annex III (7).
- Covers AI systems used for verifying documents, assessing risks, or making decisions on migration or asylum applications.
8. Administration of justice and democratic processes
Yes, they are high-risk AI systems under Annex III (8).
- Includes AI systems assisting judicial authorities or impacting democratic elections or decision-making.
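The eight Annex III categories above can be thought of as a lookup: if a system’s intended use falls under one of the listed points, it is presumptively high-risk. The following is a minimal, hypothetical sketch of that idea in Python; the category keys and the `classify_risk` helper are illustrative and not part of any official tool or API.

```python
# Hypothetical mapping of use-case categories to Annex III points
# (EU AI Act). Key names are illustrative shorthand, not legal terms.
ANNEX_III_CATEGORIES = {
    "remote_biometric_identification": 1,
    "critical_infrastructure": 2,
    "education_vocational_training": 3,
    "employment_worker_management": 4,
    "essential_services_and_benefits": 5,
    "law_enforcement": 6,
    "migration_asylum_border_control": 7,
    "justice_and_democratic_processes": 8,
}

def classify_risk(use_case: str) -> str:
    """Return a rough risk label for a given use-case category."""
    point = ANNEX_III_CATEGORIES.get(use_case)
    if point is not None:
        return f"high-risk (Annex III, point {point})"
    return "not listed in Annex III (further assessment needed)"

print(classify_risk("law_enforcement"))
print(classify_risk("video_game_recommendation"))
```

In practice the assessment is far more nuanced than a lookup table (each Annex III point carries its own conditions, and the Act provides exemptions), but the structure above mirrors how the categories are organized.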
Navigating the compliance requirements of the EU AI Act can be complex, especially when dealing with high-risk AI systems.
That’s where VerifyWise comes in. As an open-source AI governance platform, VerifyWise helps your company ensure compliance by providing tools to assess, document, and monitor your AI systems.
From auditing datasets and algorithms to tracking accountability and mitigating risks, VerifyWise empowers you to align with regulatory standards while maintaining transparency and trust.
By integrating VerifyWise into your processes, you can confidently develop and deploy AI systems that are not only innovative but also ethical and compliant with the EU AI Act.