Glossary of terms in the EU AI Act

This glossary is not exhaustive; the EU AI Act uses other terms that are not defined here. It nevertheless provides a good starting point for understanding the key terminology used in the regulation.

Artificial Intelligence (AI) System

Under the Act, a machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptiveness after deployment and which infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

High-Risk AI System

AI systems that pose a significant risk to the health, safety, or fundamental rights of individuals, and that are therefore subject to stricter requirements under the Act, such as risk management, human oversight, and conformity assessment.

Unacceptable AI System

AI practices considered to pose an unacceptable risk and therefore prohibited outright under the Act, such as certain types of social scoring systems.

General Purpose AI System

An AI system with broad capabilities that can be applied to a wide range of tasks across different domains, typically built on a general-purpose AI model.

AI Regulatory Sandbox

A controlled environment where companies can test innovative AI technologies under regulatory supervision, often with temporary exemptions from certain rules.

Notified Body

An organization designated by an EU Member State to assess the conformity of certain products before they can be placed on the market.

National Competent Authority

Government bodies responsible for implementing and enforcing AI-related regulations at the national level.

AI Office

In the context of the EU AI Act, the unit established within the European Commission responsible for overseeing the Act's implementation and enforcement, in particular the rules on general-purpose AI models.

Post-Market Monitoring System

A process for continuously assessing and addressing risks associated with AI systems after they have been deployed.

AI Literacy

The ability to understand, use, and critically evaluate AI technologies and their impacts on society.

Conformity Assessment

The process of evaluating whether an AI system meets specified requirements, standards, or regulations.

Placing on the Market

The act of making an AI system or model available on a market for the first time; under the EU AI Act, this means the first supply of an AI system or general-purpose AI model on the Union market.

More glossary terms related to AI in general

These terms cover various aspects of AI governance and regulation, reflecting the complex landscape of managing AI systems responsibly and ethically.

Algorithmic Bias

Systematic errors in AI systems that can result in unfair outcomes for certain groups, often based on characteristics like race, gender, or age.
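
One common way to detect this kind of bias is to compare how often each group receives a favorable outcome. The sketch below computes the demographic parity difference, one simple fairness metric; the decisions and group labels are illustrative, not real data.

```python
# Hypothetical example: measuring demographic parity difference,
# a simple metric for detecting algorithmic bias in binary decisions.

def selection_rate(decisions, groups, group):
    """Fraction of positive (1) decisions received by one group."""
    in_group = [d for d, g in zip(decisions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_difference(decisions, groups):
    """Largest gap in selection rates across groups (0.0 = parity)."""
    rates = [selection_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

decisions = [1, 0, 1, 1, 0, 0, 1, 0]                      # 1 = approved
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]      # protected attribute
gap = demographic_parity_difference(decisions, groups)    # 0.75 vs 0.25
```

A gap near zero suggests parity on this one metric; real audits combine several metrics, since no single number captures fairness.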

Automated Decision-Making

The process by which decisions are made by AI systems without human intervention.

Biometric Data

Personal data related to an individual's physical, physiological, or behavioral characteristics.

Data Minimization

The principle of limiting data collection and processing to only what is necessary for a specific purpose.
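
In code, this principle often appears as an allow-list of fields per processing purpose. The purposes and field names below are hypothetical, purely to illustrate the pattern.

```python
# Hypothetical sketch of data minimization: keep only the fields
# a specific purpose actually requires before storing or processing.

PURPOSE_FIELDS = {
    "shipping": {"name", "address"},        # illustrative purpose -> fields
    "age_check": {"date_of_birth"},
}

def minimize(record: dict, purpose: str) -> dict:
    """Return a copy of the record stripped to the purpose's allowed fields."""
    allowed = PURPOSE_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

user = {"name": "Ada", "address": "1 Main St",
        "date_of_birth": "1990-01-01", "email": "ada@example.com"}
shipped = minimize(user, "shipping")   # email and birth date are dropped
```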

Explainable AI

AI systems designed to provide understandable explanations for their decisions and actions.

Federated Learning

A machine learning technique that trains algorithms across multiple decentralized devices or servers without exchanging raw data.
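
The core loop can be sketched as federated averaging: each client updates a copy of the model on its own data, and only the parameters, never the raw data, are sent back and averaged. The "model" below is just a list of weights and the local step is a stand-in for real gradient descent; production systems use neural networks and secure aggregation.

```python
# Minimal, illustrative sketch of federated averaging (FedAvg).

def local_update(weights, client_data, lr=0.1):
    """One local training step: nudge each weight toward the client's
    data mean (a toy stand-in for gradient descent on local data)."""
    mean = sum(client_data) / len(client_data)
    return [w + lr * (mean - w) for w in weights]

def federated_average(client_weights):
    """Server step: average corresponding parameters across clients."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_weights = [0.0, 0.0]
clients = [[1.0, 2.0, 3.0], [3.0, 4.0, 5.0]]   # raw data stays on-device
updates = [local_update(global_weights, d) for d in clients]
global_weights = federated_average(updates)
```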

Human-in-the-Loop

An approach in which a human participates in the decision-making process, able to review, confirm, or override an AI system's outputs.

Model Drift

The degradation of an AI model's performance over time due to changes in data or environment.
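
Drift is typically caught by comparing live input distributions against a training-time reference. The check below uses a simple standardized mean shift with a hypothetical threshold; real monitoring pipelines use tests such as the population stability index or Kolmogorov–Smirnov.

```python
# Illustrative drift check: flag when a live feature's mean moves
# too many reference standard deviations away from training data.
import statistics

def mean_shift(reference, live):
    """Shift of the live mean, measured in reference standard deviations."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    return abs(statistics.mean(live) - ref_mean) / ref_std

def drifted(reference, live, threshold=2.0):
    """Flag drift when the shift exceeds `threshold` (an assumed cutoff)."""
    return mean_shift(reference, live) > threshold

reference = [10, 11, 9, 10, 12, 10, 9, 11]   # feature values at training time
live      = [15, 16, 14, 17, 15, 16, 15, 14]  # feature values in production
```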

Privacy-Preserving AI

Techniques used to develop AI systems that protect individual privacy while maintaining functionality.

Responsible AI

The practice of designing, developing, and deploying AI systems in an ethical, transparent, and accountable manner.

Risk-Based Approach

A method for categorizing and regulating AI systems based on their potential risk level.

Synthetic Data

Artificially generated data that mimics real-world data, often used for training AI models.
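
A toy version of the idea: fit per-field distributions to a small "real" dataset, then sample new records from them. The independence and normality assumptions here are deliberately crude; production systems use generative models (GANs, diffusion models, copulas) plus privacy checks.

```python
# Toy sketch of synthetic data generation from fitted statistics.
import random
import statistics

def fit(records):
    """Fit an independent normal per numeric field (a strong assumption)."""
    fields = records[0].keys()
    return {f: (statistics.mean([r[f] for r in records]),
                statistics.stdev([r[f] for r in records]))
            for f in fields}

def sample(params, n, seed=0):
    """Draw n synthetic records; fixed seed for reproducibility."""
    rng = random.Random(seed)
    return [{f: rng.gauss(mu, sigma) for f, (mu, sigma) in params.items()}
            for _ in range(n)]

real = [{"age": 30, "income": 50_000},
        {"age": 40, "income": 60_000},
        {"age": 35, "income": 55_000}]       # illustrative "real" records
synthetic = sample(fit(real), n=100)
```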

Transparency by Design

The principle of incorporating transparency into AI systems from the earliest stages of development.

AI Ethics Committee

A group of experts providing guidance on ethical issues related to AI development and deployment.

AI Impact Assessment

A systematic evaluation of the potential effects of an AI system on individuals and society.

Algorithmic Accountability

The obligation to explain and justify decisions made by AI systems.

Data Governance

The overall management of data availability, usability, integrity, and security within an organization.

Model Interpretability

The degree to which an AI model's decision-making process can be understood by humans.


Robustness Testing

The process of evaluating an AI system's performance under various conditions, including adversarial attacks.
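
One basic robustness probe is to perturb inputs with small random noise and measure how often the system's decision flips. The threshold-based "model" below is hypothetical; real robustness testing also covers adversarial attacks, distribution shift, and edge cases.

```python
# Illustrative robustness probe: how often does a small input
# perturbation change a toy classifier's decision?
import random

def model(x):
    """Toy classifier: positive class when the input exceeds 0.5."""
    return 1 if x > 0.5 else 0

def flip_rate(inputs, noise=0.05, trials=200, seed=0):
    """Fraction of noisy copies whose prediction differs from the clean one."""
    rng = random.Random(seed)
    flips = total = 0
    for x in inputs:
        clean = model(x)
        for _ in range(trials):
            if model(x + rng.uniform(-noise, noise)) != clean:
                flips += 1
            total += 1
    return flips / total

stable  = flip_rate([0.1, 0.9])   # far from the decision boundary
fragile = flip_rate([0.51])       # near the boundary: flips often
```

Inputs far from the decision boundary never flip under this noise level, while inputs near it flip frequently, which is exactly the kind of brittleness a robustness test should surface.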

Trustworthy AI

AI systems that are lawful, ethical, and technically robust.

VerifyWise is an open-source AI governance platform designed to help businesses use the power of AI safely and responsibly. Our platform ensures compliance and robust AI management without compromising on security.

© 2024 VerifyWise