Frequently asked questions about EU AI Act
What is the EU AI Act?
The EU Artificial Intelligence (AI) Act governs the implementation and use of AI models, applications and products within the European Union. It sets out rules to ensure that AI systems are implemented fairly, securely and in a way that upholds fundamental rights. At the same time, the Act seeks to support innovative AI technologies being developed and brought to market in the EU.
What are the main goals of the EU AI Act?
The EU AI Act aims to ensure that AI applications developed and/or used within the EU comply with defined standards of quality and safety without compromising innovation in the AI space. To this end, the Act classifies certain high-risk uses of AI and places heightened requirements on these applications, demands that AI systems remain accountable to human oversight, and prohibits harmful uses of AI models outright. Furthermore, the Act introduces rules for general-purpose AI models that might pose systemic risk, documentation requirements for AI systems, and other areas where responsible implementation of AI technologies is essential.
Who needs to comply with the EU AI Act?
The EU AI Act applies to a broad range of people and entities that develop, market, distribute, deploy or use AI technologies within the European Union. Article 2 of the Act defines its scope in detail.
When are the requirements of the EU AI Act going to apply?
The EU AI Act takes effect gradually. The Act entered into force on 1 August 2024, but its requirements begin to apply in stages starting in early 2025: Chapters I and II of the Act apply from 2 February 2025. The official Implementation Timeline provides further detail on when the remaining parts of the Act begin to apply.
What kinds of AI applications are prohibited under the EU AI Act?
The EU AI Act prohibits several kinds of AI implementations and uses. These include the untargeted collection of facial recognition data from online sources, AI applications that exploit the socioeconomic or personal vulnerabilities of their users, and AI systems that manipulate or deceive users in ways that impair their ability to make informed decisions. Article 5 of the Act goes into further detail on prohibited uses of AI within the EU, how the Act applies to law enforcement, and other relevant considerations.
How does the EU AI Act classify ‘high-risk’ AI?
According to the EU AI Act, there are several areas of implementation in which AI is considered ‘high-risk’. These areas include critical infrastructure, education, employment, law enforcement and biometrics. However, there are certain exceptions; for instance, if an AI system in a high-risk field is designed only to perform a narrow procedural task, it is not classified as high-risk.
High-risk AI implementations are subject to unique regulations and higher standards under the Act. Annex III to the Act goes into further detail about the areas and exceptions around the high-risk AI classification.
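As an illustration only (and not legal advice), the following minimal Python sketch shows how an organization might encode a few of the Annex III areas and the narrow-procedural-task exception in an internal compliance checklist; the area list is abbreviated and all names in the snippet are our own assumptions.

```python
# Illustrative sketch: a simplified internal checklist for flagging AI
# systems that may fall under the EU AI Act's high-risk classification.
# The area list is abbreviated and the exception logic is simplified;
# Annex III of the Act is the authoritative source.

# A few of the areas listed in Annex III (abbreviated).
HIGH_RISK_AREAS = {
    "critical_infrastructure",
    "education",
    "employment",
    "law_enforcement",
    "biometrics",
}

def may_be_high_risk(area: str, narrow_procedural_task: bool) -> bool:
    """Flag a system for legal review if it operates in an Annex III area
    and does not clearly fall under the narrow-procedural-task exception."""
    return area in HIGH_RISK_AREAS and not narrow_procedural_task

# A CV-screening tool used in hiring should be flagged for review, while a
# tool that merely converts resumes to a standard file format (a narrow
# procedural task) should not.
print(may_be_high_risk("employment", narrow_procedural_task=False))  # True
print(may_be_high_risk("employment", narrow_procedural_task=True))   # False
```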
Does the EU AI Act legislate on the possible risks of general-purpose AI?
Yes. As per Article 51 of the EU AI Act, general-purpose AI models with sufficiently high-impact capabilities can indeed be classified as posing ‘systemic risk’. The Act emphasizes that this classification depends on the evolving state of AI technology and will be reassessed as needed. At present, a general-purpose AI model is presumed to pose ‘systemic risk’ when the cumulative amount of compute used for its training exceeds 10²⁵ floating-point operations (FLOPs).
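To give a sense of scale, here is a back-of-the-envelope Python sketch that applies the commonly used 6 · N · D approximation for training compute (roughly 6 floating-point operations per parameter per training token) to check hypothetical models against the Article 51 threshold; the model names and figures are illustrative assumptions, not real disclosures.

```python
# Back-of-the-envelope training-compute estimate using the common
# 6 * N * D approximation (about 6 FLOPs per parameter per training token).
# All model figures below are hypothetical, for illustration only.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate cumulative training compute in floating-point operations."""
    return 6 * n_parameters * n_training_tokens

# (parameters, training tokens) for two hypothetical models
models = {
    "hypothetical-70B-model": (70e9, 15e12),   # 70B params, 15T tokens
    "hypothetical-1T-model": (1e12, 20e12),    # 1T params, 20T tokens
}

for name, (params, tokens) in models.items():
    flops = estimated_training_flops(params, tokens)
    over_threshold = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> systemic-risk presumption: {over_threshold}")
```

Under this estimate, the hypothetical 70B-parameter model lands at roughly 6.3 × 10²⁴ FLOPs, below the threshold, while the 1T-parameter model exceeds it at roughly 1.2 × 10²⁶ FLOPs.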
Are AI governance platforms important for compliance with the EU AI Act?
AI governance, the practice of maintaining the safety and transparency of AI applications, is indispensable for any organization that uses or provides AI solutions within the jurisdiction of the EU AI Act. As AI applications mature, it will be increasingly vital for organizations to keep up with changes in AI legislation, within the EU and elsewhere. Doing so brings many challenges, ranging from deploying auditable AI systems to proactively counteracting biases that AI models may develop.
AI governance platforms answer a growing need for a streamlined and reliable way for organizations to ensure that their AI applications are well-documented, transparent and safe.
How does VerifyWise contribute to the emerging AI governance space in light of the EU AI Act and other AI regulation frameworks?
At VerifyWise, our mission is to democratize AI governance. In this new and growing field, most existing solutions are proprietary, expensive products, giving large companies an advantage when it comes to ensuring compliance with AI regulation. VerifyWise is an open-source AI governance platform, which means it is free for organizations of all sizes to use. Furthermore, anyone can contribute to the VerifyWise code base, empowering us to improve and innovate AI governance together.