The EU AI Act is live. We turn legal text into a clear plan with owners, deadlines, and proof. Start with a fast gap assessment, then track everything in one place.
The EU AI Act uses a risk-based approach with four main categories:

| Risk tier | Status | Examples |
|---|---|---|
| Unacceptable risk (prohibited) | Banned outright | Social scoring, emotion recognition at work/school, biometric categorization, real-time biometric ID in public spaces, subliminal techniques |
| High risk (regulated) | Strict controls required | CV screening, credit scoring, law enforcement, medical devices, critical infrastructure, education assessment, recruitment tools |
| Limited risk (disclosure) | Transparency required | Chatbots, deepfakes, emotion recognition systems, biometric categorization (non-prohibited), AI-generated content |
| Minimal risk | Little to no obligations | Spam filters, AI-enabled video games, inventory management, basic recommendation systems |
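To make the tiers actionable, many teams record them in a machine-readable inventory. Below is a minimal Python sketch of that idea; the `RiskTier` enum, `AISystem` record, and example entries are illustrative, not part of the Act or any particular tool.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    """The four EU AI Act risk tiers, as summarized above."""
    PROHIBITED = "prohibited"  # unacceptable risk: banned outright
    HIGH = "high"              # strict controls required
    LIMITED = "limited"        # transparency/disclosure required
    MINIMAL = "minimal"        # little to no obligations

@dataclass
class AISystem:
    name: str
    use_case: str
    tier: RiskTier

# Hypothetical inventory entries for illustration.
inventory = [
    AISystem("resume-screener", "CV screening for recruitment", RiskTier.HIGH),
    AISystem("support-bot", "customer-facing chatbot", RiskTier.LIMITED),
    AISystem("spam-filter", "inbound email filtering", RiskTier.MINIMAL),
]

# Flag anything that carries real obligations.
for system in inventory:
    if system.tier in (RiskTier.PROHIBITED, RiskTier.HIGH):
        print(f"{system.name}: {system.tier.value} risk - review obligations")
```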
Take our free EU AI Act readiness assessment to determine your risk classification and compliance obligations in minutes.
Different actors in the AI value chain have different responsibilities under the EU AI Act:

- **Providers:** organizations developing or substantially modifying AI systems
- **Deployers:** organizations using AI systems under their authority
- **Distributors:** organizations making AI systems available in the EU market
Quick reference guide showing which obligations apply to each role:

| Obligation | Provider | Deployer | Distributor |
|---|---|---|---|
| Risk management system (lifecycle risk assessment and mitigation) | ✓ | | |
| Technical documentation (system specs, training data, performance metrics) | Create & store | | |
| Human oversight (prevent or minimize risks) | Design | Implement | |
| Logging & records (minimum 6 months retention for deployers) | Enable | Maintain | |
| Conformity assessment (self-assessment or notified body) | Conduct | | Verify |
| CE marking (required before market placement) | Affix | | Verify |
| EU database registration (high-risk AI systems) | Register | | Verify |
| Fundamental rights impact assessment (required for deployers in specific sectors) | | ✓ | |
| Incident reporting (serious incidents to authorities) | ✓ | If aware | |
| Post-market monitoring (continuous surveillance of system performance) | ✓ | Monitor use | |
Note: Many organizations may have multiple roles. For example, if you both develop and deploy an AI system, you must comply with both Provider and Deployer obligations.
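A simple way to see why dual roles matter: treat each role's duties as a set and take the union. The sketch below is illustrative, with obligation names condensed from the matrix above.

```python
# Condensed from the obligations matrix above; names are illustrative.
OBLIGATIONS = {
    "provider": {
        "risk management system", "technical documentation", "conformity assessment",
        "CE marking", "EU database registration", "post-market monitoring",
    },
    "deployer": {
        "human oversight", "log retention (6+ months)",
        "fundamental rights impact assessment",
    },
    "distributor": {"verify CE marking", "verify registration"},
}

def obligations_for(roles: set[str]) -> set[str]:
    """An organization with multiple roles inherits the union of all duties."""
    return set().union(*(OBLIGATIONS[role] for role in roles))

# A company that both develops and deploys an AI system:
print(sorted(obligations_for({"provider", "deployer"})))
```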
A practical roadmap to achieve EU AI Act compliance:

1. Catalog all AI systems in your organization
2. Assign risk tiers to each AI system
3. Identify compliance gaps and requirements (sketched in code below)
4. Build required documentation and controls
5. Conduct conformity assessments
6. Maintain compliance and monitor systems
Note: Notified bodies are already booking into Q2 2026. Start your compliance journey now to meet the August 2026 deadline.
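Step 3 of the roadmap, gap analysis, can be as simple as diffing required controls against what you already have in place. A minimal sketch, assuming an illustrative (not legally exhaustive) control list per tier:

```python
# Illustrative required-control lists per risk tier; not an exhaustive legal checklist.
REQUIRED_CONTROLS = {
    "high": {"risk management system", "technical documentation",
             "human oversight", "logging", "conformity assessment"},
    "limited": {"transparency disclosure"},
    "minimal": set(),
}

def gap_analysis(tier: str, controls_in_place: set[str]) -> set[str]:
    """Return the required controls that are still missing for a system."""
    return REQUIRED_CONTROLS[tier] - controls_in_place

# A high-risk system with only partial documentation and logging in place:
print(sorted(gap_analysis("high", {"technical documentation", "logging"})))
```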
Critical compliance deadlines are approaching:
- February 2, 2025: Banned AI practices become illegal
- August 2, 2025: General-purpose AI transparency rules take effect
- August 2, 2026: High-risk classification rules begin
- August 2, 2027: All high-risk requirements active
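To keep these dates in front of the team, a few lines of Python can report how much runway remains (dates as listed above):

```python
from datetime import date

# Key EU AI Act milestones from the timeline above.
DEADLINES = {
    "prohibited practices banned": date(2025, 2, 2),
    "GPAI transparency rules": date(2025, 8, 2),
    "high-risk classification rules": date(2026, 8, 2),
    "all high-risk requirements": date(2027, 8, 2),
}

today = date.today()
for milestone, due in DEADLINES.items():
    days_left = (due - today).days
    status = f"{days_left} days remaining" if days_left > 0 else "already in force"
    print(f"{milestone}: {status}")
```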
Eight categories of AI systems are classified as high-risk under the EU AI Act:

**1. Biometric identification and categorization**
- Examples: Facial recognition, fingerprint systems, iris scanning
- Key requirement: Particularly stringent for law enforcement use

**2. Critical infrastructure**
- Examples: Traffic management, water/gas/electricity supply management
- Key requirement: Must demonstrate resilience and fail-safe mechanisms

**3. Education and vocational training**
- Examples: Student assessment, exam scoring, admission decisions
- Key requirement: Requires bias testing and transparency to students

**4. Employment and worker management**
- Examples: CV screening, interview tools, promotion decisions, monitoring
- Key requirement: Must protect worker rights and provide explanations

**5. Access to essential services**
- Examples: Credit scoring, insurance risk assessment, benefit eligibility
- Key requirement: Requires human review for adverse decisions

**6. Law enforcement**
- Examples: Risk assessment, polygraph analysis, crime prediction
- Key requirement: Additional safeguards for fundamental rights

**7. Migration, asylum, and border control**
- Examples: Visa applications, asylum decisions, deportation risk assessment
- Key requirement: Strong human oversight and appeal mechanisms

**8. Administration of justice and democratic processes**
- Examples: Court case research, judicial decision support
- Key requirement: Must maintain judicial independence
The EU AI Act has a three-tier penalty structure with significant fines:

**€35M or 7% of global annual revenue** (whichever is higher)
Violations include: engaging in prohibited AI practices, such as social scoring or banned biometric uses.

**€15M or 3% of global annual revenue** (whichever is higher)
Violations include: non-compliance with high-risk system requirements and most other obligations under the Act.

**€7.5M or 1.5% of global annual revenue** (whichever is higher)
Violations include: supplying incorrect, incomplete, or misleading information to notified bodies or national authorities.
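Because each tier applies "whichever is higher," effective exposure scales with revenue. A quick illustration in Python, using a hypothetical company with €2B in global annual revenue:

```python
def max_fine(fixed_cap_eur: float, revenue_pct: float, global_revenue_eur: float) -> float:
    """EU AI Act fines apply the higher of a fixed cap or a share of global revenue."""
    return max(fixed_cap_eur, revenue_pct * global_revenue_eur)

# Hypothetical company with EUR 2B global annual revenue:
revenue = 2_000_000_000
print(f"Prohibited practices: EUR {max_fine(35_000_000, 0.07, revenue):,.0f}")  # 140,000,000
print(f"Other violations:     EUR {max_fine(15_000_000, 0.03, revenue):,.0f}")  # 60,000,000
print(f"Misleading info:      EUR {max_fine(7_500_000, 0.015, revenue):,.0f}")  # 30,000,000
```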
Obligations for GPAI providers came into effect on August 2, 2025, at two levels:

- **All general-purpose AI models:** baseline transparency and documentation obligations
- **Systemic-risk models:** models trained with more than 10²⁵ FLOPs, or designated by the Commission, face additional obligations
Using GPAI models? Even if you're not a provider, deployers of high-risk AI systems built on GPAI must still comply with downstream obligations including transparency, human oversight, and logging.
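To gauge whether a model you train might cross the 10²⁵ FLOP presumption, a common community rule of thumb (not a method defined by the Act) estimates training compute as roughly 6 × parameters × training tokens:

```python
# Rule-of-thumb estimate: training FLOPs ~ 6 * parameters * training tokens.
# This approximation is a community convention, not a method defined by the AI Act.
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs: presumption of systemic risk for GPAI

def estimated_training_flops(parameters: float, tokens: float) -> float:
    return 6 * parameters * tokens

# Hypothetical: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.1e} FLOPs; systemic-risk presumption: {flops > SYSTEMIC_RISK_THRESHOLD}")
```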
Access 37 ready-to-use AI governance policy templates aligned with EU AI Act, ISO 42001, and NIST AI RMF requirements
Common questions about EU AI Act compliance
**Does the EU AI Act apply to companies outside the EU?**
Yes, the EU AI Act has extraterritorial reach. If your AI systems or their outputs are used in the EU market, you are likely in scope, regardless of where your business is located. This includes US-based SaaS companies, consulting firms, and any organization whose AI systems affect EU citizens.

**What if we only use third-party AI tools rather than building our own?**
You still have obligations as a deployer, including transparency and oversight requirements. VerifyWise helps track those duties and ensures compliance.

**What is the difference between prohibited and high-risk AI?**
Prohibited practices are banned outright and cannot be used. High-risk systems can operate if you meet strict requirements and maintain proper documentation and evidence.

**What are the penalties for non-compliance?**
For prohibited practices, fines can reach up to €35 million or 7% of global revenue. Other violation tiers carry lower but still significant penalties: €15 million or 3% of revenue, and €7.5 million or 1.5% of revenue.

**What are the key compliance deadlines?**
The AI Act has a phased rollout: prohibited practices are already banned (Feb 2025), GPAI transparency rules took effect in Aug 2025, high-risk system requirements begin Aug 2026, and full compliance is required by Aug 2027.

**What about general-purpose AI (GPAI) models?**
GPAI models trained with more than 10²⁵ FLOPs have specific obligations, including systemic risk assessments, adversarial testing, and incident reporting. If you're using these models, you still need to comply with downstream requirements.

**What documentation do high-risk systems require?**
High-risk systems require technical documentation, risk management systems, training data documentation, logs of system operations, and evidence of human oversight. VerifyWise automates most of this documentation.

**Do we need a third-party conformity assessment?**
Most high-risk AI systems can use self-assessment, but certain categories (like biometrics and critical infrastructure) may require third-party evaluation by notified bodies. We help you determine which path applies.
Start your EU AI Act compliance journey today with our comprehensive assessment and tracking tools.