
EU AI Act compliance made simple

The EU AI Act is in force. We turn legal text into a clear plan with owners, deadlines, and proof. Start with a fast gap assessment, then track everything in one place.

Risk tiers explained

The EU AI Act uses a risk-based approach with four main categories

Unacceptable risk

Banned outright

Social scoring, emotion recognition at work or school, biometric categorization based on sensitive attributes, real-time remote biometric identification in public spaces, subliminal manipulation techniques


High risk

Strict controls required

CV screening, credit scoring, law enforcement, medical devices, critical infrastructure, education assessment, recruitment tools


Limited risk

Transparency required

Chatbots, deepfakes, emotion recognition systems, biometric categorization (non-prohibited), AI-generated content


Minimal risk

Little to no obligations

Spam filters, AI-enabled video games, inventory management, basic recommendation systems


Not sure if you're in scope?

Take our free EU AI Act readiness assessment to determine your risk classification and compliance obligations in minutes.

5 mins

Quick assessment

Instant

Get results immediately

Take free assessment

Know your role & obligations

Different actors in the AI value chain have different responsibilities under the EU AI Act

Provider

Organizations developing or substantially modifying AI systems

  • Implement risk management system throughout lifecycle
  • Ensure training data quality and governance
  • Create and maintain technical documentation
  • Design appropriate logging capabilities
  • Ensure transparency and provide information to deployers
  • Implement human oversight measures
  • Ensure accuracy, robustness, and cybersecurity
  • Establish quality management system
  • Conduct conformity assessment and affix CE marking
  • Register system in EU database
  • Report serious incidents to authorities

Deployer

Organizations using AI systems under their authority

  • Assign human oversight personnel
  • Maintain logs for at least 6 months (a retention sketch follows this list)
  • Conduct fundamental rights impact assessment
  • Monitor system operation and performance
  • Report serious incidents to provider and authorities
  • Use AI system according to instructions
  • Ensure input data is relevant for intended purpose
  • Inform provider of any risks identified
  • Suspend use if system presents risk
  • Cooperate with authorities during investigations
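
A minimal sketch of the six-month log floor, assuming Python and a purge scheduler that records creation and purge timestamps (the function and variable names are illustrative, not from the Act or any specific tool):

```python
from datetime import datetime, timedelta, timezone

# Deployers keep automatically generated logs for at least six months,
# unless other EU or national law requires longer retention.
MIN_RETENTION = timedelta(days=183)  # ~six months

def retention_ok(created: datetime, scheduled_purge: datetime) -> bool:
    """True if the purge schedule honours the six-month floor."""
    return scheduled_purge - created >= MIN_RETENTION

now = datetime.now(timezone.utc)
print(retention_ok(now, now + timedelta(days=200)))  # True
print(retention_ok(now, now + timedelta(days=90)))   # False
```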

Distributor/Importer

Organizations making AI systems available on the EU market

  • Verify provider has conducted conformity assessment
  • Ensure CE marking and documentation are present
  • Verify registration in EU database
  • Store and maintain required documentation
  • Ensure storage and transport conditions maintain compliance
  • Provide authorities with necessary information
  • Cease distribution if AI system non-compliant
  • Inform provider and authorities of non-compliance
  • Cooperate with authorities for corrective actions

Obligations comparison table

Quick reference guide showing which obligations apply to each role

Obligation | Provider | Deployer | Distributor
Risk management system (lifecycle risk assessment and mitigation) | Implement | - | -
Technical documentation (system specs, training data, performance metrics) | Create | - | Store
Human oversight (prevent or minimize risks) | Design | Implement | -
Logging & records (minimum 6 months retention for deployers) | Enable | Maintain | -
Conformity assessment (self-assessment or notified body) | Conduct | - | Verify
CE marking (required before market placement) | Affix | - | Verify
EU database registration (high-risk AI systems) | Register | - | Verify
Fundamental rights impact assessment (required in specific sectors) | - | Conduct | -
Incident reporting (serious incidents to authorities) | Report | Report | Inform if aware
Post-market monitoring (continuous surveillance of system performance) | Monitor | Monitor use | -

Note: Many organizations may have multiple roles. For example, if you both develop and deploy an AI system, you must comply with both Provider and Deployer obligations.

6 steps to compliance by August 2026

A practical roadmap to achieve EU AI Act compliance

Step 1 (1-2 months)

AI system inventory

Catalog all AI systems in your organization (a minimal registry sketch follows this checklist)

  • Identify all AI systems and tools in use
  • Document AI vendors and third-party services
  • Map AI systems to business processes
  • Identify AI system owners and stakeholders
  • Create central AI registry
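
As a sketch of what a registry entry might capture, here is a minimal Python dataclass; the fields, vendors, and owner names are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a central AI registry; field names are illustrative."""
    name: str
    vendor: str                      # third-party service or "in-house"
    business_process: str            # where the system is used
    owner: str                       # accountable stakeholder
    risk_tier: str = "unclassified"  # assigned during Step 2
    stakeholders: list[str] = field(default_factory=list)

registry = [
    AISystemRecord("CV screener", "AcmeHR", "recruitment", "head-of-talent"),
    AISystemRecord("Support chatbot", "in-house", "customer support", "cx-lead"),
]
print(len(registry), "systems catalogued")
```
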
Step 2 (2-3 months)

Risk classification

Assign risk tiers to each AI system (a first-pass triage sketch follows this checklist)

  • Assess each system against Annex III categories
  • Determine if system falls under prohibited use cases
  • Classify as high-risk, limited-risk, or minimal-risk
  • Document classification rationale
  • Identify your role (provider, deployer, distributor)
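
A first-pass triage might look like the following; the category sets are deliberately simplified stand-ins for the prohibited-practice and Annex III lists, and a real classification needs legal review:

```python
PROHIBITED = {"social scoring", "workplace emotion recognition"}
ANNEX_III = {"recruitment", "credit scoring", "education assessment",
             "critical infrastructure", "law enforcement"}
TRANSPARENCY = {"chatbot", "deepfake generation"}

def classify(use_case: str) -> str:
    """Rough tier triage from a use-case label; not a legal determination."""
    if use_case in PROHIBITED:
        return "prohibited"
    if use_case in ANNEX_III:
        return "high-risk"
    if use_case in TRANSPARENCY:
        return "limited-risk"
    return "minimal-risk"

print(classify("recruitment"))  # high-risk
print(classify("spam filter"))  # minimal-risk
```
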
Step 3 (3-4 months)

Gap assessment

Identify compliance gaps and requirements

  • Map current state against EU AI Act requirements
  • Identify missing documentation and processes
  • Assess technical compliance gaps
  • Evaluate governance and oversight mechanisms
  • Prioritize remediation activities
Step 4 (4-8 months)

Documentation & governance

Build required documentation and controls (a documentation checklist sketch follows this list)

  • Create technical documentation for high-risk systems
  • Implement risk management systems
  • Establish data governance procedures
  • Document human oversight mechanisms
  • Create quality management system
  • Prepare fundamental rights impact assessments
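
One way to track documentation completeness is a simple manifest check; the section names below are an Annex IV-style assumption, not the regulation's exact wording:

```python
REQUIRED_SECTIONS = [
    "general_description", "development_process", "training_data_governance",
    "risk_management", "human_oversight", "accuracy_robustness_security",
    "post_market_monitoring_plan",
]

def missing_sections(tech_file: dict) -> list[str]:
    """List documentation sections still absent from a technical file."""
    return [s for s in REQUIRED_SECTIONS if not tech_file.get(s)]

draft = {"general_description": "v0.3", "risk_management": "v0.1"}
print(missing_sections(draft))  # everything not yet drafted
```
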
Step 5 (8-10 months)

Testing & validation

Conduct conformity assessments (a simple fairness screen follows this list)

  • Perform internal testing and validation
  • Conduct bias and fairness assessments
  • Test accuracy, robustness, and cybersecurity
  • Engage notified body if required
  • Obtain CE marking for applicable systems
  • Register high-risk systems in EU database
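
As one common fairness screen (the Act does not mandate a specific metric), here is a selection-rate comparison across groups, a minimal sketch with made-up data:

```python
from collections import Counter

def selection_rates(decisions):
    """Positive-outcome rate per group from (group, selected) pairs."""
    totals, hits = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        hits[group] += selected
    return {g: hits[g] / totals[g] for g in totals}

def impact_ratio(rates):
    """Min/max selection-rate ratio; values far below 1.0 warrant review."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates([("a", True), ("a", True), ("a", False),
                         ("b", True), ("b", False), ("b", False)])
print(rates)                # group-level selection rates
print(impact_ratio(rates))  # 0.5 here -> investigate
```
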
Step 6 (Ongoing)

Monitoring & reporting

Maintain compliance and monitor systems (a drift-alert sketch follows this list)

  • Implement continuous monitoring systems
  • Maintain logs and audit trails
  • Monitor for performance drift and incidents
  • Report serious incidents within required timeframes
  • Conduct periodic reviews and updates
  • Stay updated on regulatory guidance
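
A drift alert can start as simple as comparing recent scores against a baseline; this is one of many possible signals, sketched here with made-up numbers:

```python
from statistics import mean

def drift_alert(baseline, recent, threshold=0.1):
    """Flag when the recent score average drifts past the threshold."""
    return abs(mean(recent) - mean(baseline)) > threshold

# Example: weekly accuracy samples against the validation baseline.
print(drift_alert([0.91, 0.92, 0.90], [0.78, 0.80, 0.79]))  # True -> review
print(drift_alert([0.91, 0.92, 0.90], [0.90, 0.91, 0.92]))  # False
```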

Note: Notified bodies are already booking into Q2 2026. Start your compliance journey now to meet the August 2026 deadline.

Start your compliance journey

Key dates you should know

Critical compliance deadlines approaching

February 2, 2025 (in effect)

Prohibited practices

Banned AI practices become illegal

  • Social scoring
  • Biometric categorization
  • Emotion inference at work/school
August 2, 2025 (in effect)

GPAI transparency

General-purpose AI transparency rules

  • Codes of practice
  • Model documentation
  • Systemic risk assessments
August 2, 2026 (upcoming)

High-risk phase 1

Annex III high-risk obligations begin to apply

  • Risk management systems
  • Data governance
  • Technical documentation
August 2, 2027 (upcoming)

Full compliance

All high-risk requirements active

  • Complete oversight
  • Post-market monitoring
  • Conformity assessments

High-risk AI systems (Annex III)

Eight categories of AI systems classified as high-risk under the EU AI Act

Biometric identification

Examples

Facial recognition, fingerprint systems, iris scanning

Key requirement

Particularly stringent for law enforcement use

Critical infrastructure

Examples

Traffic management, water/gas/electricity supply management

Key requirement

Must demonstrate resilience and fail-safe mechanisms

Education & vocational training

Examples

Student assessment, exam scoring, admission decisions

Key requirement

Requires bias testing and transparency to students

Employment & HR

Examples

CV screening, interview tools, promotion decisions, monitoring

Key requirement

Must protect worker rights and provide explanations

Essential services

Examples

Credit scoring, insurance risk assessment, benefit eligibility

Key requirement

Requires human review for adverse decisions

Law enforcement

Examples

Risk assessment, polygraph analysis, crime prediction

Key requirement

Additional safeguards for fundamental rights

Migration & border control

Examples

Visa applications, asylum decisions, deportation risk assessment

Key requirement

Strong human oversight and appeal mechanisms

Justice & democracy

Examples

Court case research, judicial decision support

Key requirement

Must maintain judicial independence

Penalties & enforcement

The EU AI Act has a three-tier penalty structure with significant fines; a worked fine calculation follows the tiers


Tier 1 - Prohibited AI

€35M or 7% of global revenue

(whichever is higher)

Violations include:

  • Social scoring systems
  • Manipulative AI
  • Real-time biometric ID in public spaces
  • Untargeted facial scraping

Tier 2 - High-risk violations

€15M or 3% of global revenue

(whichever is higher)

Violations include:

  • Non-compliant high-risk AI systems
  • Breaches of obligations by providers, deployers, importers, or distributors
  • Failing to conduct required impact assessments

Tier 3 - Information violations

€7.5M or 1% of global revenue

(whichever is higher)

Violations include:

  • Providing incorrect information
  • Failing to provide information to authorities
  • Incomplete documentation
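
Each tier reduces to "fixed cap or share of worldwide annual turnover, whichever is higher", i.e. a max() over two numbers; a minimal sketch:

```python
TIERS = {  # (fixed cap in EUR, share of worldwide annual turnover)
    "prohibited":  (35_000_000, 0.07),
    "high_risk":   (15_000_000, 0.03),
    "information": (7_500_000, 0.01),
}

def max_fine(tier: str, global_revenue_eur: float) -> float:
    """Upper bound of the fine for a given tier and turnover."""
    fixed, share = TIERS[tier]
    return max(fixed, share * global_revenue_eur)

# EUR 2B turnover: 7% (EUR 140M) exceeds the EUR 35M floor.
print(max_fine("prohibited", 2_000_000_000))  # 140000000.0
print(max_fine("information", 100_000_000))   # 7500000.0 (floor applies)
```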

General-purpose AI (GPAI) requirements

Obligations for GPAI providers came into effect on August 2, 2025


General GPAI models

All general-purpose AI models

  • Provide technical documentation
  • Provide information and documentation to downstream providers
  • Implement copyright policy and publish training data summary
  • Document known or estimated energy consumption

Systemic risk GPAI models

>10²⁵ FLOPs of training compute, or designated by the Commission (see the threshold check below)

  • Conduct model evaluation and systemic risk assessment
  • Perform adversarial testing
  • Track, document and report serious incidents
  • Ensure adequate cybersecurity protections
  • Implement risk mitigation measures
  • Cooperate with the AI Office and national authorities
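
The compute threshold itself is a single comparison; a minimal sketch (estimating cumulative training FLOPs is the hard part and is out of scope here):

```python
# Systemic risk is presumed above 10^25 FLOPs of cumulative training
# compute; the Commission can also designate models directly.
SYSTEMIC_RISK_FLOPS = 1e25

def presumed_systemic_risk(training_flops: float) -> bool:
    return training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(3e25))  # True
print(presumed_systemic_risk(5e24))  # False
```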

Using GPAI models? Even if you are not a GPAI provider, deploying a high-risk AI system built on a GPAI model still carries downstream obligations, including transparency, human oversight, and logging.

Policy templates

Complete AI governance policy repository

Access 37 ready-to-use AI governance policy templates aligned with EU AI Act, ISO 42001, and NIST AI RMF requirements

Core governance

  • AI Governance Policy
  • AI Risk Management Policy
  • Responsible AI Principles
  • AI Ethical Use Charter
  • Model Approval & Release
  • AI Quality Assurance
  • + 6 more policies

Data & security

  • AI Data Use Policy
  • Data Minimization for AI
  • Training Data Sourcing
  • Sensitive Data Handling
  • Prompt Security & Hardening
  • Incident Response for AI
  • + 2 more policies

Legal & compliance

  • AI Vendor Risk Policy
  • Regulatory Compliance
  • CE Marking Readiness
  • High-Risk System Registration
  • Documentation & Traceability
  • AI Accountability & Roles
  • + 7 more policies

Frequently asked questions

Common questions about EU AI Act compliance

Does the EU AI Act apply to companies outside the EU?

Yes, the EU AI Act has extraterritorial reach. If your AI systems or their outputs are used in the EU market, you are likely in scope, regardless of where your business is located. This includes US-based SaaS companies, consulting firms, and any organization whose AI systems affect EU citizens.

What if we only use third-party AI tools?

You still have obligations as a deployer, including transparency and oversight requirements. VerifyWise helps you track those duties and document compliance.

What's the difference between prohibited and high-risk AI?

Prohibited practices are banned outright and cannot be used. High-risk systems can operate if you meet strict requirements and maintain proper documentation and evidence.

What are the penalties for non-compliance?

For prohibited practices, fines can reach up to €35 million or 7% of global revenue, whichever is higher. The other tiers carry lower but still significant penalties: up to €15 million or 3% of revenue for high-risk violations, and up to €7.5 million or 1% for information violations.

When do the requirements take effect?

The AI Act has a phased rollout: prohibited practices have been banned since February 2025, GPAI transparency rules took effect in August 2025, high-risk system requirements begin in August 2026, and full compliance is required by August 2027.

How are general-purpose AI models regulated?

GPAI models trained with more than 10²⁵ FLOPs have additional obligations including systemic risk assessments, adversarial testing, and incident reporting. If you're using these models, you still need to comply with downstream requirements.

What documentation do high-risk systems require?

High-risk systems require technical documentation, risk management systems, training data documentation, logs of system operations, and evidence of human oversight. VerifyWise automates most of this documentation.

Do we need a notified body, or can we self-assess?

Most high-risk AI systems can use self-assessment, but certain categories (like biometrics, critical infrastructure) may require third-party evaluation by notified bodies. We help you determine which path applies.

Ready to get compliant?

Start your EU AI Act compliance journey today with our comprehensive assessment and tracking tools.
