EU AI Act compliance made simple
The EU AI Act is live. We turn legal text into a clear plan with owners, deadlines and proof. Start with a fast gap assessment, then track everything in one place.
Risk tiers explained
The EU AI Act uses a risk-based approach with four main categories
Unacceptable risk
Banned outright
Social scoring, emotion recognition at work/school, biometric categorization, real-time biometric ID in public spaces, subliminal techniques
High risk
Strict controls required
CV screening, credit scoring, law enforcement, medical devices, critical infrastructure, education assessment, recruitment tools
Limited risk
Transparency required
Chatbots, deepfakes, emotion recognition systems, biometric categorization (non-prohibited), AI-generated content
Minimal risk
Little to no obligations
Spam filters, inventory management, AI-enabled video games, basic recommendation systems
How VerifyWise supports EU AI Act compliance
VerifyWise provides an EU AI Act Article 9 preset that operates in impact assessment mode, structured around the five core sections the regulation requires
Additional compliance capabilities
AI system inventory with risk classification
Register every AI system in your organization with structured metadata. Each entry captures purpose, data sources, deployment context and stakeholders. The platform applies Annex III criteria to determine whether systems qualify as high-risk, limited-risk or minimal-risk, generating classification rationale you can reference during audits.
Addresses: Article 6 classification, Article 9 risk management, Article 49 registration
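As an illustration of how such a registry entry and tier assignment could fit together (this is a sketch, not VerifyWise's actual data model; the area labels and field names are assumptions):

```python
from dataclasses import dataclass, field

# Abbreviated labels for the eight Annex III high-risk areas (not the legal text)
ANNEX_III_AREAS = {
    "biometric_identification", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border_control", "justice_democracy",
}

@dataclass
class AISystem:
    name: str
    purpose: str
    data_sources: list[str]
    deployment_context: str                       # area the system operates in
    stakeholders: list[str] = field(default_factory=list)

def classify(system: AISystem) -> str:
    """Assign a rough risk tier from the deployment context alone."""
    if system.deployment_context in ANNEX_III_AREAS:
        return "high-risk"
    if system.deployment_context in {"chatbot", "generated_content", "deepfake"}:
        return "limited-risk"
    return "minimal-risk"

cv_screener = AISystem(
    name="CV screening model",
    purpose="Rank job applicants",
    data_sources=["applicant CVs"],
    deployment_context="employment",
)
print(classify(cv_screener))  # high-risk
```

A real classification also has to weigh intended purpose and the Article 6 derogations, which is why the platform records a rationale alongside each tier.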
Technical documentation generation
Build the documentation package required under Article 11. The platform structures information about system architecture, training data provenance, performance metrics and known limitations into formatted documents that match regulatory expectations. Templates cover both provider and deployer perspectives.
Addresses: Article 11 technical documentation, Annex IV requirements
Human oversight workflow configuration
Define who reviews AI outputs, under what conditions and with what authority to override. The platform lets you configure oversight triggers, assign reviewers by role or expertise and capture review decisions with timestamps. Oversight patterns become auditable records demonstrating Article 14 compliance.
Addresses: Article 14 human oversight, Article 26 deployer obligations
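An oversight configuration of this kind can be pictured as a small rule set mapping triggers to reviewer roles. The trigger names, threshold and roles below are illustrative assumptions, not a real VerifyWise schema:

```python
# Hypothetical oversight rules: which outputs require human review, and by whom.
OVERSIGHT_RULES = [
    {"trigger": "low_confidence", "threshold": 0.70, "reviewer": "domain_expert"},
    {"trigger": "adverse_decision", "reviewer": "compliance_officer"},
]

def review_required(output: dict) -> list[str]:
    """Return the reviewer roles whose triggers fire for this output."""
    reviewers = []
    for rule in OVERSIGHT_RULES:
        if rule["trigger"] == "low_confidence" and output.get("confidence", 1.0) < rule["threshold"]:
            reviewers.append(rule["reviewer"])
        if rule["trigger"] == "adverse_decision" and output.get("adverse", False):
            reviewers.append(rule["reviewer"])
    return reviewers

# A low-confidence adverse decision triggers both reviewers:
print(review_required({"confidence": 0.55, "adverse": True}))
```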
Operational logging and retention
Capture system events, user interactions and decision outputs with automatic timestamping. Logs are retained according to configurable policies that default to the six-month minimum deployers must maintain. Search and export functions support incident investigation and regulatory requests.
Addresses: Article 12 record-keeping, Article 26(5) log retention
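A retention policy like the default above amounts to a simple purge guard. The sketch below assumes a six-month floor expressed as 183 days; a production policy should follow the regulation's own definition of the period:

```python
from datetime import datetime, timedelta, timezone

# Assumed six-month minimum, approximated as 183 days
MIN_RETENTION = timedelta(days=183)

def may_purge(logged_at: datetime, now: datetime, retention: timedelta = MIN_RETENTION) -> bool:
    """A log entry may be purged only after the minimum retention has elapsed."""
    return now - logged_at >= retention

now = datetime(2026, 9, 1, tzinfo=timezone.utc)
print(may_purge(datetime(2026, 1, 1, tzinfo=timezone.utc), now))  # True
print(may_purge(datetime(2026, 7, 1, tzinfo=timezone.utc), now))  # False
```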
Incident tracking and reporting
Log AI-related incidents with severity classification and assign investigation owners. The platform tracks remediation progress and generates incident reports suitable for regulatory notification. Serious incidents can be escalated to authorities within the required timeframe, with supporting documentation attached.
Addresses: Article 73 serious incident reporting, Article 99 penalties
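Escalation within the required timeframe reduces to a deadline check per severity class. The windows below are illustrative assumptions (Article 73 sets different deadlines by incident type; verify the exact periods for your case):

```python
from datetime import datetime, timedelta

# Illustrative reporting windows by severity; consult Article 73 for the
# authoritative deadlines applicable to each incident type.
REPORTING_WINDOWS = {
    "serious": timedelta(days=15),
    "widespread": timedelta(days=2),
}

def report_deadline(detected_at: datetime, severity: str) -> datetime:
    return detected_at + REPORTING_WINDOWS[severity]

def overdue(detected_at: datetime, severity: str, now: datetime) -> bool:
    """True once the reporting window for this severity has closed."""
    return now > report_deadline(detected_at, severity)

detected = datetime(2026, 3, 1)
print(overdue(detected, "widespread", datetime(2026, 3, 4)))  # True
print(overdue(detected, "serious", datetime(2026, 3, 10)))    # False
```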
Fundamental rights impact assessment
Deployers of high-risk AI in certain sectors must assess impacts on fundamental rights before deployment. The platform provides structured assessment templates covering discrimination risk, privacy implications, access to services and due process considerations. Completed assessments generate dated records for compliance evidence.
Addresses: Article 27 fundamental rights impact assessment
All compliance activities are tracked with timestamps, assigned owners and approval workflows. This audit trail demonstrates systematic governance rather than ad-hoc documentation created after the fact.
Complete EU AI Act requirements coverage
VerifyWise provides dedicated tooling for every regulatory requirement across 15 compliance categories
EU AI Act requirements
Requirements with dedicated tooling
Coverage across all categories
Risk Management & Assessment
Data Governance
Technical Documentation
Record-Keeping & Logging
Transparency & User Information
Human Oversight
Accuracy, Robustness & Cybersecurity
Quality Management System
Conformity Assessment
Registration & CE Marking
Post-Market Monitoring
Incident Reporting
Deployer Obligations
Fundamental Rights Impact Assessment
AI Literacy & Training
Capabilities that set VerifyWise apart
CE Marking workflow
Guided 7-step conformity assessment process with document generation
LLM Gateway
Real-time monitoring and policy enforcement for GPAI model usage
Training Registrar
Track AI literacy requirements and staff competency records
Incident Management
Structured workflows for serious incident tracking and authority notification
Not sure if you're in scope?
Take our free EU AI Act readiness assessment to determine your risk classification and compliance obligations in minutes.
Quick assessment
Get results immediately
Know your role & obligations
Different actors in the AI value chain have different responsibilities under the EU AI Act
Provider
Organizations developing or substantially modifying AI systems
- Implement risk management system throughout lifecycle
- Ensure training data quality and governance
- Create and maintain technical documentation
- Design appropriate logging capabilities
- Ensure transparency and provide information to deployers
- Implement human oversight measures
- Ensure accuracy, robustness, and cybersecurity
- Establish quality management system
- Conduct conformity assessment and affix CE marking
- Register system in EU database
- Report serious incidents to authorities
Deployer
Organizations using AI systems under their authority
- Assign human oversight personnel
- Maintain logs for at least 6 months
- Conduct fundamental rights impact assessment
- Monitor system operation and performance
- Report serious incidents to provider and authorities
- Use AI system according to instructions
- Ensure input data is relevant for intended purpose
- Inform provider of any risks identified
- Suspend use if system presents risk
- Cooperate with authorities during investigations
Distributor/Importer
Organizations making AI systems available on the EU market
- Verify provider has conducted conformity assessment
- Ensure CE marking and documentation are present
- Verify registration in EU database
- Store and maintain required documentation
- Ensure storage and transport conditions maintain compliance
- Provide authorities with necessary information
- Cease distribution if AI system non-compliant
- Inform provider and authorities of non-compliance
- Cooperate with authorities for corrective actions
Obligations comparison table
Quick reference guide showing which obligations apply to each role
| Obligation | Provider | Deployer | Distributor |
|---|---|---|---|
| Risk management system (lifecycle risk assessment and mitigation) | ✓ | | |
| Technical documentation (system specs, training data, performance metrics) | Create | | Store |
| Human oversight (prevent or minimize risks) | Design | Implement | |
| Logging & records (minimum 6 months retention for deployers) | Enable | Maintain | |
| Conformity assessment (self-assessment or notified body) | Conduct | | Verify |
| CE marking (required before market placement) | Affix | | Verify |
| EU database registration (high-risk AI systems) | Register | | Verify |
| Fundamental rights impact assessment (required for deployers in specific sectors) | | ✓ | |
| Incident reporting (serious incidents to authorities) | ✓ | ✓ | If aware |
| Post-market monitoring (continuous surveillance of system performance) | ✓ | Monitor use | |
Note: Many organizations may have multiple roles. For example, if you both develop and deploy an AI system, you must comply with both Provider and Deployer obligations.
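The multi-role case amounts to taking the union of each role's obligation set. The shorthand labels below are assumptions for illustration, not Act text:

```python
# Illustrative role-to-obligation mapping (shorthand labels, not legal terms)
OBLIGATIONS = {
    "provider": {"risk_management", "technical_documentation", "ce_marking", "eu_registration"},
    "deployer": {"human_oversight", "log_retention", "fria", "incident_reporting"},
    "distributor": {"verify_ce_marking", "verify_registration", "store_documentation"},
}

def obligations_for(roles: list[str]) -> set[str]:
    """An organization holding several roles inherits the union of their obligations."""
    combined: set[str] = set()
    for role in roles:
        combined |= OBLIGATIONS[role]
    return combined

# An organization that both develops and deploys a system:
both = obligations_for(["provider", "deployer"])
print(sorted(both))
```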
6 steps to compliance by August 2026
A practical roadmap to achieve EU AI Act compliance
AI system inventory
Catalog all AI systems in your organization
- Identify all AI systems and tools in use
- Document AI vendors and third-party services
- Map AI systems to business processes
- Identify AI system owners and stakeholders
- Create central AI registry
Risk classification
Assign risk tiers to each AI system
- Assess each system against Annex III categories
- Determine if system falls under prohibited use cases
- Classify as high-risk, limited-risk, or minimal-risk
- Document classification rationale
- Identify your role (provider, deployer, distributor)
Gap assessment
Identify compliance gaps and requirements
- Map current state against EU AI Act requirements
- Identify missing documentation and processes
- Assess technical compliance gaps
- Evaluate governance and oversight mechanisms
- Prioritize remediation activities
Documentation & governance
Build required documentation and controls
- Create technical documentation for high-risk systems
- Implement risk management systems
- Establish data governance procedures
- Document human oversight mechanisms
- Create quality management system
- Prepare fundamental rights impact assessments
Testing & validation
Conduct conformity assessments
- Perform internal testing and validation
- Conduct bias and fairness assessments
- Test accuracy, robustness, and cybersecurity
- Engage notified body if required
- Obtain CE marking for applicable systems
- Register high-risk systems in EU database
Monitoring & reporting
Maintain compliance and monitor systems
- Implement continuous monitoring systems
- Maintain logs and audit trails
- Monitor for performance drift and incidents
- Report serious incidents within required timeframes
- Conduct periodic reviews and updates
- Stay updated on regulatory guidance
Note: Notified bodies are already booking into Q2 2026. Start your compliance journey now to meet the August 2026 deadline.
Start your compliance journey
Key dates you should know
Critical compliance deadlines approaching
Prohibited practices (February 2, 2025)
Banned AI practices become illegal
- Social scoring
- Biometric categorization
- Emotion inference at work/school
GPAI transparency (August 2, 2025)
General-purpose AI transparency rules
- Codes of practice
- Model documentation
- Systemic risk assessments
High-risk phase 1 (August 2, 2026)
Classification rules begin
- Risk management systems
- Data governance
- Technical documentation
Full compliance (August 2, 2027)
All high-risk requirements active
- Complete oversight
- Post-market monitoring
- Conformity assessments
High-risk AI systems (Annex III)
Eight categories of AI systems classified as high-risk under the EU AI Act
Biometric identification
Examples
Facial recognition, fingerprint systems, iris scanning
Key requirement
Particularly stringent for law enforcement use
Critical infrastructure
Examples
Traffic management, water/gas/electricity supply management
Key requirement
Must demonstrate resilience and fail-safe mechanisms
Education & vocational training
Examples
Student assessment, exam scoring, admission decisions
Key requirement
Requires bias testing and transparency to students
Employment & HR
Examples
CV screening, interview tools, promotion decisions, monitoring
Key requirement
Must protect worker rights and provide explanations
Essential services
Examples
Credit scoring, insurance risk assessment, benefit eligibility
Key requirement
Requires human review for adverse decisions
Law enforcement
Examples
Risk assessment, polygraph analysis, crime prediction
Key requirement
Additional safeguards for fundamental rights
Migration & border control
Examples
Visa applications, asylum decisions, deportation risk assessment
Key requirement
Strong human oversight and appeal mechanisms
Justice & democracy
Examples
Court case research, judicial decision support
Key requirement
Must maintain judicial independence
Penalties & enforcement
The EU AI Act has a three-tier penalty structure with significant fines
Tier 1 - Prohibited AI
€35M or 7% of global revenue
(whichever is higher)
Violations include:
- Social scoring systems
- Manipulative AI
- Real-time biometric ID in public spaces
- Untargeted facial scraping
Tier 2 - High-risk violations
€15M or 3% of global revenue
(whichever is higher)
Violations include:
- Non-compliant high-risk AI systems
- Violations of operator obligations (providers, deployers, importers, distributors)
- Failing to conduct required impact assessments
Tier 3 - Information violations
€7.5M or 1% of global revenue
(whichever is higher)
Violations include:
- Providing incorrect information
- Failing to provide information to authorities
- Incomplete documentation
General-purpose AI (GPAI) requirements
Obligations for GPAI providers came into effect on August 2, 2025
What qualifies as general-purpose AI?
General-purpose AI refers to models trained on broad data that can perform a wide range of tasks without being designed for one specific purpose. These foundation models power many downstream applications, from chatbots to code assistants to image generators. The EU AI Act creates specific obligations for organizations that develop these models and those that build applications using them.
Large Language Models
GPT-4, Claude, Gemini, Llama, Mistral
Image Generation
Midjourney, DALL-E, Stable Diffusion
Multimodal Models
GPT-4o, Gemini Pro Vision, Claude 3.5
Code Generation
GitHub Copilot, Amazon CodeWhisperer
Are you a GPAI provider or downstream integrator?
GPAI provider
You developed or trained the foundation model itself
- Full GPAI transparency obligations apply
- Must provide documentation to downstream users
- Responsible for copyright compliance in training
- Systemic risk requirements if threshold exceeded
Examples: OpenAI, Anthropic, Google DeepMind, Meta AI
Downstream integrator
You build applications using GPAI models via API or integration
- Must obtain documentation from GPAI provider
- Responsible for your specific application's compliance
- High-risk use cases trigger high-risk obligations
- Cannot transfer responsibility to foundation model provider
Examples: Companies using GPT-4 API, Claude API, or fine-tuned models
GPAI obligation tiers
General GPAI models
All general-purpose AI models
- Provide technical documentation
- Provide information and documentation to downstream providers
- Implement copyright policy and publish training data summary
- Ensure energy efficiency where possible
Systemic risk GPAI models
>10²⁵ FLOPs or designated by Commission
- Conduct model evaluation and systemic risk assessment
- Perform adversarial testing
- Track, document and report serious incidents
- Ensure adequate cybersecurity protections
- Implement risk mitigation measures
- Report to AI Office annually
Understanding the systemic risk threshold
Models trained with more than 10²⁵ floating point operations (FLOPs) are automatically classified as posing systemic risk. The European Commission can also designate models based on their capabilities, reach or potential for serious harm regardless of training compute. Current models likely meeting this threshold include GPT-4 and successors, Claude 3 Opus and later versions, Gemini Ultra and Meta's largest Llama variants.
Systemic risk classification triggers additional obligations: comprehensive model evaluations, adversarial red-teaming, incident tracking and reporting, enhanced cybersecurity and annual reporting to the EU AI Office.
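The threshold check itself is simple arithmetic. The sketch below uses the common 6·N·D approximation for training compute (N parameters, D training tokens); the approximation, not the Act, is the assumption here:

```python
# Rough training-compute estimate via the common 6*N*D approximation
# (N parameters, D training tokens). The Act's threshold is 10^25 FLOPs.
SYSTEMIC_RISK_THRESHOLD = 1e25

def training_flops(parameters: float, tokens: float) -> float:
    return 6.0 * parameters * tokens

def presumed_systemic_risk(parameters: float, tokens: float) -> bool:
    return training_flops(parameters, tokens) > SYSTEMIC_RISK_THRESHOLD

# A hypothetical 1-trillion-parameter model trained on 2 trillion tokens
# lands at 1.2e25 FLOPs, just over the threshold:
print(presumed_systemic_risk(1e12, 2e12))  # True
```

Note that the Commission can designate a model as systemic-risk regardless of compute, so passing this check alone is not conclusive.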
Open-source GPAI provisions
Reduced obligations apply when
- Model weights are publicly available
- Training methodology is documented openly
- Released under a qualifying open-source license
- Parameters and architecture are published
Full obligations still apply if
- Model poses systemic risk (>10²⁵ FLOPs)
- You modify and deploy commercially
- Used in high-risk applications under Annex III
- Model is integrated into regulated products
If you build on GPAI models
Most organizations using AI are downstream integrators rather than foundation model providers. If you access GPT-4, Claude or similar models through APIs to build your own applications, these obligations apply to you.
The EU AI Office
The EU AI Office within the European Commission provides centralized oversight for GPAI models. It issues guidance, develops codes of practice, evaluates systemic risk models and coordinates with national authorities. GPAI providers with systemic risk models must report directly to the AI Office. The Office also serves as a resource for downstream integrators seeking clarity on their obligations.
Complete AI governance policy repository
Access 37 ready-to-use AI governance policy templates aligned with EU AI Act, ISO 42001, and NIST AI RMF requirements
Core governance
- • AI Governance Policy
- • AI Risk Management Policy
- • Responsible AI Principles
- • AI Ethical Use Charter
- • Model Approval & Release
- • AI Quality Assurance
- + 6 more policies
Data & security
- • AI Data Use Policy
- • Data Minimization for AI
- • Training Data Sourcing
- • Sensitive Data Handling
- • Prompt Security & Hardening
- • Incident Response for AI
- + 2 more policies
Legal & compliance
- • AI Vendor Risk Policy
- • Regulatory Compliance
- • CE Marking Readiness
- • High-Risk System Registration
- • Documentation & Traceability
- • AI Accountability & Roles
- + 7 more policies
Frequently asked questions
Common questions about EU AI Act compliance
Ready to get compliant?
Start your EU AI Act compliance journey today with our comprehensive assessment and tracking tools.