EU Artificial Intelligence Act - Official Text

Summary

The EU AI Act is the world's first comprehensive AI regulation, setting the global standard for how AI systems should be governed. This resource provides access to the complete official text of Regulation (EU) 2024/1689, the landmark legislation that will reshape how AI is developed, deployed, and monitored across Europe and beyond. With its risk-based classification system, the Act distinguishes between prohibited, high-risk, limited-risk, and minimal-risk AI applications, creating a legal framework that balances innovation with fundamental rights protection. This is the definitive source for understanding exactly what compliance looks like under the world's most ambitious AI governance regime.

The regulatory timeline that matters

The EU AI Act follows a phased implementation schedule that directly impacts when different organizations need to comply:

  • February 2025: Prohibited AI practices are banned (social scoring systems, emotion recognition in workplaces and schools, biometric categorization to infer sensitive attributes)
  • August 2025: General-purpose AI model requirements kick in for foundation model providers
  • August 2026: High-risk AI system obligations become mandatory (covers everything from AI in medical devices to recruitment tools)
  • August 2027: Full implementation across all remaining provisions

Understanding these dates is crucial because non-compliance isn't just a regulatory risk: it's a business continuity threat, with fines of up to €35 million or 7% of global annual turnover, whichever is higher.
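The schedule and penalty ceiling above can be sketched as a small lookup. This is an illustrative sketch, not legal tooling: the exact application dates assumed here (2 February 2025 and 2 August of 2025, 2026, and 2027) and all function names are this example's own choices.

```python
from datetime import date

# Key phase-in dates from the Act's implementation schedule (illustrative).
MILESTONES = {
    date(2025, 2, 2): "Prohibited AI practices banned",
    date(2025, 8, 2): "General-purpose AI model obligations apply",
    date(2026, 8, 2): "High-risk AI system obligations apply",
    date(2027, 8, 2): "Remaining provisions fully applicable",
}

def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Top-tier penalty cap: EUR 35 million or 7% of global
    annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

def milestones_after(today: date) -> list[tuple[date, str]]:
    """Return compliance milestones on or after `today`, earliest first."""
    return sorted((d, label) for d, label in MILESTONES.items() if d >= today)
```

For a company with €1 billion turnover, `max_fine_eur` returns €70 million, since 7% of turnover exceeds the €35 million floor; for smaller companies the flat €35 million cap dominates.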

What puts your AI system in the "high-risk" category

The Act's Annex III lists eight specific areas in which AI systems are classified as high-risk (subject to limited exceptions), each carrying detailed compliance requirements:

Biometric identification and categorization - Remote biometric identification systems (real-time and retrospective), biometric categorization based on sensitive attributes, and emotion recognition systems; real-time use by law enforcement in public spaces is separately prohibited, with limited exceptions for serious crimes

Critical infrastructure management - AI systems managing safety components in road traffic, water, gas, heating, and electricity supply

Education and vocational training - Systems for educational institution admissions, exam scoring, or student performance evaluation

Employment and worker management - Recruitment platforms, promotion decisions, work assignment algorithms, and employee monitoring systems

Access to essential services - Credit scoring, insurance pricing, emergency response systems, and benefit eligibility determinations

Law enforcement - Predictive policing tools, evidence evaluation systems, and crime analytics platforms

Migration and border management - Asylum application processing, visa decisions, and border control systems

Administration of justice and democratic processes - AI systems assisting judicial authorities in researching and interpreting facts and the law, and any AI system intended to influence voting behavior or election outcomes

Each category triggers specific technical documentation, risk management, data governance, and human oversight requirements that organizations must implement before deployment.
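The eight areas above can be summarized as a simple lookup table. The category keys and example use cases below are paraphrased for illustration, not the Act's official labels, and a keyword match is no substitute for legal analysis:

```python
# Illustrative mapping of example use cases to the eight Annex III
# high-risk areas (paraphrased category names, not official labels).
ANNEX_III_AREAS = {
    "biometrics": {"remote biometric identification", "emotion recognition"},
    "critical_infrastructure": {"road traffic safety", "electricity supply"},
    "education": {"admissions", "exam scoring"},
    "employment": {"recruitment screening", "promotion decisions"},
    "essential_services": {"credit scoring", "insurance pricing"},
    "law_enforcement": {"predictive policing", "evidence evaluation"},
    "migration_border": {"asylum processing", "visa decisions"},
    "justice_democracy": {"judicial research support", "election influence"},
}

def high_risk_areas(use_case: str) -> list[str]:
    """Return the Annex III areas whose example use cases match.
    An empty list means no match in this toy table, not 'not high-risk'."""
    return [area for area, examples in ANNEX_III_AREAS.items()
            if use_case in examples]
```

A use case like "recruitment screening" maps to the employment area, which then triggers the documentation, risk management, and oversight duties described below.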

Navigating the compliance maze

The Act establishes a complex web of responsibilities across the AI value chain. Providers (those who develop or substantially modify AI systems) bear the heaviest burden, including conformity assessments, CE marking requirements, and post-market monitoring obligations. Deployers (organizations using AI systems) must ensure human oversight protocols, and certain deployers, such as public bodies and providers of essential services, must also conduct fundamental rights impact assessments for high-risk systems.
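The split of duties across the value chain can be captured as a small role-to-obligations table. This is a paraphrased summary for orientation, not an exhaustive legal checklist, and the dictionary keys are this example's own naming:

```python
# Illustrative summary of core duties by role in the AI value chain
# (paraphrased from the Act; not an exhaustive legal checklist).
OBLIGATIONS = {
    "provider": [
        "conformity assessment",
        "CE marking",
        "technical documentation",
        "post-market monitoring",
    ],
    "deployer": [
        "human oversight",
        "use according to provider instructions",
        "fundamental rights impact assessment (certain deployers only)",
    ],
}

def duties(role: str) -> list[str]:
    """Look up the summarized duties for a value-chain role."""
    return OBLIGATIONS.get(role, [])
```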

The regulation also introduces notified bodies—third-party organizations authorized to conduct conformity assessments—and establishes AI governance structures in each member state. The European AI Office will oversee general-purpose AI models, while national authorities handle most other enforcement activities.

Who this resource is for

AI system providers and developers who need to understand technical requirements, documentation obligations, and conformity assessment procedures before bringing products to market in the EU.

Enterprise AI deployers including HR departments using recruitment algorithms, financial services implementing credit scoring systems, and healthcare organizations deploying diagnostic AI tools.

Legal and compliance professionals responsible for interpreting regulatory requirements, conducting impact assessments, and developing organizational AI governance frameworks.

Policy and government officials in non-EU jurisdictions considering similar AI regulation or needing to understand how the Act affects cross-border AI services.

Technology vendors and consultants who need detailed knowledge of EU requirements to advise clients on AI compliance strategies and implementation approaches.

Beyond Europe's borders

While this is EU regulation, its extraterritorial effects mean the Act applies to any AI system whose outputs are used within the EU, regardless of where the system is developed or operated. This "Brussels Effect" makes the EU AI Act relevant for global technology companies, multinational corporations, and even smaller organizations that serve European customers or whose AI systems' outputs reach EU users.

The Act also establishes the foundation for international AI governance discussions, with other jurisdictions closely watching its implementation as a model for their own regulatory approaches.

Tags

AI regulation, EU compliance, artificial intelligence, legal framework, risk-based approach, AI governance

At a glance

Published: 2024
Jurisdiction: European Union
Category: Regulations and laws
Access: Public access
