
EU AI Act

What is the EU AI Act?

The EU Artificial Intelligence (AI) Act is the first law anywhere in the world to regulate AI systems comprehensively. It sets out rules for how AI can be developed, sold, and used, with the goal of protecting fundamental rights and safety without blocking legitimate innovation.

The regulation was published in the Official Journal of the European Union on July 12, 2024, and entered into force on August 1, 2024. Its provisions are being phased in over several years, with most of them, including the requirements for high-risk AI systems, becoming applicable on August 2, 2026.

Risk-based classification

The EU AI Act classifies AI systems into four risk tiers, each with different regulatory requirements.

Unacceptable risk (prohibited)

Certain AI practices are banned outright because they threaten fundamental rights. These include:

  • Social scoring: AI that evaluates or ranks people based on their social behavior or personal characteristics, leading to unjustified harmful treatment
  • Manipulative AI: Systems designed to exploit the vulnerabilities of specific groups (by age, disability, or economic situation) in order to distort their behavior
  • Untargeted facial recognition scraping: Building facial recognition databases by collecting images from the internet or CCTV footage without consent
  • Emotion recognition in workplaces and schools: Inferring the emotions of workers or students using AI, with limited exceptions
  • Biometric categorization: Sorting people by sensitive attributes such as race, political opinions, or sexual orientation

These prohibitions took effect on February 2, 2025.

High risk

AI systems that can significantly affect people's lives are classified as high-risk. They must meet strict requirements before going on the market. Annex III of the Act lists the areas in scope:

  • Critical infrastructure: Managing water, gas, heating, or electricity supply
  • Education: Deciding who gets admitted to an institution or how students are assessed
  • Employment: Recruitment, candidate screening, performance evaluation, and workforce management
  • Essential services: Determining access to credit, insurance, public benefits, or emergency services
  • Law enforcement: Individual risk assessments, lie detection, evidence evaluation
  • Migration and border control: Asylum applications, border surveillance, visa processing
  • Justice and democracy: Assisting judges in researching and interpreting facts or law

There is an exception: if an AI system in one of these areas only performs a narrow procedural task, it may fall outside the high-risk classification.

Limited risk

Some AI systems carry specific transparency obligations. Users must be told when they are talking to a chatbot, viewing AI-generated content (including deepfakes), or being subjected to emotion recognition or biometric categorization. These disclosure rules apply regardless of which risk tier the system falls into.

Minimal risk

AI systems that do not fit any of the above categories can be developed and used freely under existing law. Most AI systems currently on the EU market fall here.

Requirements for high-risk AI systems

Providers placing a high-risk AI system on the market must meet a detailed set of obligations:

  • Risk management: A risk management process must run throughout the system's lifecycle, not just at launch
  • Data governance: Training, validation, and testing datasets need to be relevant, representative, and as free from errors as practicable
  • Technical documentation: Enough documentation for regulators to assess whether the system complies
  • Record-keeping: Automatic logging of the system's operations so that decisions can be traced back
  • Transparency: Clear instructions for users covering what the system can and cannot do, and what it was designed for
  • Human oversight: Operators must be able to monitor, override, or reverse the system's decisions
  • Accuracy, robustness, and cybersecurity: The system must perform reliably, resist errors and adversarial attacks, and be protected against unauthorized access
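
To make the record-keeping and human-oversight obligations above concrete, here is a minimal Python sketch of an automatic decision log. All class, method, and field names are our own illustration, not terminology from the Act, and a real system would need far richer logging:

```python
import datetime
import json

class AuditLog:
    """Minimal illustration of automatic record-keeping for a high-risk AI system."""

    def __init__(self):
        self.records = []

    def log_decision(self, input_summary, output, model_version, overridden_by=None):
        # Every decision is timestamped so it can be traced back later
        self.records.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "input": input_summary,
            "output": output,
            "model_version": model_version,
            # Recording overrides supports the human-oversight requirement
            "human_override": overridden_by,
        })

    def export(self):
        # Regulators need traceable records; here we dump the trail as JSON
        return json.dumps(self.records, indent=2)

log = AuditLog()
log.log_decision("loan application #1042", "approved", "credit-v3.1")
log.log_decision("loan application #1043", "rejected", "credit-v3.1",
                 overridden_by="analyst_7")
```

The key design point is that logging happens automatically on every decision, rather than being left to the operator, which is what the Act's record-keeping requirement is driving at.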

Conformity assessment

High-risk AI systems must pass a conformity assessment before they can be sold in the EU.

For most high-risk systems (those in Annex III areas such as employment, credit scoring, and law enforcement), providers can follow an internal control procedure, which amounts to a self-assessment against the Act's requirements.

When AI is embedded in products already covered by EU safety legislation (medical devices, vehicles, machinery), a notified body must be involved. Notified bodies are independent third-party assessors designated by EU Member State authorities.

General-purpose AI (GPAI) models

The Act also regulates general-purpose AI models: foundation models that can serve many different downstream applications.

All GPAI providers must:

  • Make technical documentation available
  • Supply information and documentation to anyone building on top of their model
  • Follow EU copyright law
  • Publish a sufficiently detailed summary of the data used to train the model

GPAI models that pose systemic risk (currently defined as models trained with more than 10^25 FLOPs of compute) face additional requirements, including adversarial testing, incident tracking, and cybersecurity protections.
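
The compute threshold works as a simple presumption, which can be sketched in a few lines of Python. The 10^25 figure comes from the Act; the function name is our own, and in practice classification also involves Commission designation, not just this arithmetic:

```python
# Cumulative training compute threshold from Article 51 (10^25 FLOPs)
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    """A GPAI model trained with more than 10^25 FLOPs is presumed to pose systemic risk."""
    return training_flops > SYSTEMIC_RISK_FLOPS

presumed_systemic_risk(5e24)  # below the threshold: no presumption
presumed_systemic_risk(3e25)  # above the threshold: systemic-risk obligations apply
```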

Open-source exemption

GPAI models released under qualifying open-source licenses get a partial pass on documentation requirements. The exemption does not extend to models with systemic risk, though, and copyright compliance and training data summaries are still mandatory for all GPAI providers. Worth noting: the Act's definition of "qualifying" open-source license is narrower than what most developers would expect.

Implementation timeline

The Act is being enforced in stages:

  • February 2, 2025: Prohibitions on unacceptable-risk practices take effect (Chapters I and II)
  • August 2, 2025: GPAI provider obligations begin; rules for notified bodies apply
  • August 2, 2026: All high-risk AI system requirements become fully applicable
  • August 2, 2027: Extended deadline for high-risk AI systems embedded in products already regulated under existing EU safety law

Penalties

The Act uses a tiered fine structure:

  • Prohibited AI practices: Up to 35 million euros or 7% of global annual turnover, whichever is higher
  • Non-compliance with high-risk requirements: Up to 15 million euros or 3% of global annual turnover
  • Supplying incorrect information to authorities: Up to 7.5 million euros or 1% of global annual turnover

SMEs and startups get some relief: their fines are capped at the lower of the two amounts, the fixed sum or the turnover percentage.

Who needs to comply?

The Act's territorial reach is broad. It applies to:

  • Providers that develop or place AI systems on the EU market, even if headquartered outside Europe
  • Deployers using AI systems within the EU
  • Importers and distributors that bring AI systems into the EU market
  • Product manufacturers that integrate AI into products sold in the EU

In practice, a company based outside the EU that sells AI-powered products or services to EU customers must comply with whatever requirements apply to its systems.

Governance and enforcement

Several bodies share responsibility for overseeing the Act:

  • EU AI Office: Sits within the European Commission. Handles GPAI model oversight and coordinates enforcement across Member States.
  • European AI Board: Made up of Member State representatives. Issues guidance and promotes consistent application of the rules.
  • National competent authorities: Each Member State appoints its own enforcement body.

FAQ

Does the EU AI Act apply to companies outside the EU?

Yes. Any company that places AI systems on the EU market or uses AI systems within the EU must comply, no matter where it is based. If your product or service reaches EU residents, the Act applies to you.

Does the EU AI Act legislate on the possible risks of general-purpose AI?

Yes. Under Article 51, general-purpose AI models with sufficiently high impact capabilities can be classified as posing systemic risk. The current threshold is cumulative training compute above 10^25 floating point operations. Models that cross that line face additional obligations, including adversarial testing and incident reporting.

How does the EU AI Act treat open-source AI?

Open-source GPAI models released under qualifying licenses are partially exempt from documentation requirements, but not from copyright compliance or training data summary obligations. Models classified as having systemic risk get no exemption at all. It is also worth noting that the Act defines "qualifying" open-source licenses more narrowly than the term is commonly understood in the developer community.

Are AI governance platforms important for compliance?

They can be. The EU AI Act creates significant documentation, risk assessment, and monitoring obligations. Governance platforms help by centralizing model inventories, mapping regulatory requirements to specific systems, maintaining audit trails, and streamlining conformity assessments. As the rules take effect, having a structured approach to tracking compliance will matter more than ad hoc spreadsheets.

What is the relationship between the EU AI Act and GDPR?

The two regulations work side by side. GDPR covers personal data processing; the AI Act covers the design and deployment of AI systems. Any high-risk AI system that processes personal data must satisfy both. In practice, there is substantial overlap between GDPR data protection impact assessments and the AI Act's own risk assessment requirements.

How should companies prepare for the EU AI Act?

Start by inventorying your AI systems and classifying each one by risk tier. From there, identify where you fall short of the applicable requirements and build a plan to close those gaps. The inventory and classification step is the most important, because everything else depends on knowing which rules apply to which systems.
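
A first-pass inventory and classification can be sketched as a simple lookup. The area-to-tier mapping below is a rough Python illustration of the Annex III categories described earlier, not legal advice: real classification turns on detailed definitions (and transparency duties can stack on top of any tier), and every name here is our own:

```python
# Simplified stand-ins for Annex III areas and prohibited practices
HIGH_RISK_AREAS = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}
PROHIBITED_PRACTICES = {"social_scoring", "untargeted_face_scraping"}

def classify(system: dict) -> str:
    """Assign a first-pass risk tier to one inventoried AI system."""
    if system.get("practice") in PROHIBITED_PRACTICES:
        return "unacceptable"
    if system.get("area") in HIGH_RISK_AREAS:
        return "high"
    if system.get("interacts_with_people"):
        return "limited"  # transparency (disclosure) obligations apply
    return "minimal"

inventory = [
    {"name": "cv-screener", "area": "employment", "practice": None},
    {"name": "support-bot", "area": "customer_service",
     "practice": None, "interacts_with_people": True},
]
tiers = {s["name"]: classify(s) for s in inventory}
```

Even a crude pass like this gives you the map of which obligations attach to which systems, which is the foundation for the gap analysis that follows.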
