EU AI Act compliance
Navigate the requirements of the European AI regulation.
Overview
The EU AI Act is the European Union's comprehensive regulation governing artificial intelligence systems. It establishes a risk-based framework that classifies AI systems by their potential impact and imposes corresponding requirements for transparency, accountability, and human oversight.
As the world's first comprehensive AI law, the EU AI Act applies to any organization that develops, deploys, or uses AI systems within the European Union, as well as to organizations outside the EU whose AI systems are placed on the EU market or whose outputs are used in the EU. Understanding and complying with this regulation is essential for organizations operating in or selling to European markets.
Why comply with the EU AI Act?
- Legal requirement: Non-compliance can result in fines of up to 35 million euros or 7% of global annual turnover, whichever is higher
- Market access: Compliance is required to offer AI systems in the EU market
- Competitive advantage: Demonstrating compliance builds trust with European customers and partners
- Risk management: The Act's requirements align with sound AI governance practices that protect your organization
- Future readiness: Similar regulations are emerging globally; EU AI Act compliance prepares you for other jurisdictions
Risk classification
The EU AI Act classifies AI systems into four risk categories, each with different compliance requirements:
Unacceptable risk
AI systems that pose clear threats to safety or fundamental rights are prohibited entirely.
High risk
AI systems in critical areas like healthcare, employment, or law enforcement require strict compliance measures.
Limited risk
AI systems such as chatbots or emotion recognition systems must meet specific transparency obligations, for example disclosing to users that they are interacting with an AI system.
Minimal risk
Most AI systems fall here and face no specific requirements beyond existing laws.
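The sketch below shows one way these categories could be represented in code, for example when tracking classification decisions alongside other system metadata. It is an illustrative TypeScript model only; the type and field names are assumptions and do not reflect VerifyWise's data model.

```typescript
// Illustrative sketch only: one way to represent the four EU AI Act risk
// categories in TypeScript. Type and field names are assumptions, not
// VerifyWise's data model.
type RiskCategory = "unacceptable" | "high" | "limited" | "minimal";

interface ClassifiedSystem {
  name: string;
  category: RiskCategory;
  rationale: string; // why this category applies, kept for audit purposes
}

// Hypothetical examples of classified systems
const systems: ClassifiedSystem[] = [
  {
    name: "CV screening assistant",
    category: "high", // employment is a high-risk area under the Act
    rationale: "Filters job applications, affecting access to employment",
  },
  {
    name: "Customer support chatbot",
    category: "limited", // transparency obligations only
    rationale: "Users must be told they are interacting with an AI system",
  },
];

console.log(systems.map((s) => `${s.name}: ${s.category}`).join("\n"));
```

Recording a short rationale next to each classification makes it easier to justify the decision later during a conformity assessment or audit.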
High-risk AI requirements
For high-risk AI systems, the EU AI Act mandates:
- Risk management system throughout the AI lifecycle
- Data governance and management practices
- Technical documentation and record-keeping
- Transparency and information provision to users
- Human oversight measures
- Accuracy, robustness, and cybersecurity requirements
- Quality management system
- Conformity assessment before market placement
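To make the scope of these obligations concrete, the following sketch models them as a simple checklist. The TypeScript types and identifiers are illustrative assumptions, not part of VerifyWise or the regulation's official text.

```typescript
// Illustrative sketch only: the high-risk obligations listed above modeled as
// a simple checklist. Identifiers are assumptions, not VerifyWise's schema.
interface HighRiskObligation {
  id: string;
  title: string;
  satisfied: boolean;
}

const highRiskChecklist: HighRiskObligation[] = [
  { id: "risk-management", title: "Risk management system", satisfied: false },
  { id: "data-governance", title: "Data governance and management", satisfied: false },
  { id: "technical-documentation", title: "Technical documentation and record-keeping", satisfied: false },
  { id: "transparency", title: "Transparency and information provision to users", satisfied: false },
  { id: "human-oversight", title: "Human oversight measures", satisfied: false },
  { id: "accuracy-robustness", title: "Accuracy, robustness, and cybersecurity", satisfied: false },
  { id: "quality-management", title: "Quality management system", satisfied: false },
  { id: "conformity-assessment", title: "Conformity assessment before market placement", satisfied: false },
];

const open = highRiskChecklist.filter((o) => !o.satisfied);
console.log(`${open.length} of ${highRiskChecklist.length} obligations still open`);
```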
How VerifyWise supports EU AI Act compliance
VerifyWise provides structured tools to help you meet EU AI Act requirements:
- Risk classification: Classify your AI systems according to the Act's risk categories
- Control framework: Pre-built controls mapped to EU AI Act requirements
- Assessment tracking: Track progress through compliance assessments
- Documentation: Maintain technical documentation required by the regulation
- Evidence collection: Gather and organize evidence for conformity assessments
- Audit readiness: Generate reports demonstrating compliance status
Getting started with EU AI Act compliance
To begin your EU AI Act compliance journey in VerifyWise:
- Create a use case for your AI system in VerifyWise
- Select the EU AI Act framework when configuring the use case
- Classify your AI system according to its risk level
- Complete the compliance assessment to identify gaps
- Work through the controls to address each requirement
- Document implementation details and collect evidence
EU AI Act controls structure
VerifyWise organizes EU AI Act requirements into manageable controls and subcontrols:
- Controls: High-level requirements from the regulation
- Subcontrols: Specific actions needed to satisfy each control
- Status tracking: Track each control as Waiting, In progress, or Done
- Risk review: Assess residual risk as Acceptable, Residual, or Unacceptable
- Ownership: Assign owners, reviewers, and approvers to controls
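The sketch below expresses this structure as TypeScript types, which can help when reasoning about how controls, subcontrols, statuses, and assignments relate to each other. The type and field names are illustrative assumptions and do not mirror VerifyWise's internal schema.

```typescript
// Illustrative sketch only: the control/subcontrol structure described above,
// expressed as TypeScript types. Names are assumptions and do not mirror
// VerifyWise's internal schema.
type ControlStatus = "Waiting" | "In progress" | "Done";
type RiskReview = "Acceptable" | "Residual" | "Unacceptable";

interface Subcontrol {
  title: string;           // specific action needed to satisfy the parent control
  complete: boolean;
  notes?: string;          // implementation approach
  evidenceIds?: string[];  // links into the Evidence Hub
}

interface Control {
  title: string;           // high-level requirement from the regulation
  status: ControlStatus;
  riskReview?: RiskReview; // assessed after implementation
  owner?: string;
  reviewer?: string;
  approver?: string;
  dueDate?: Date;
  subcontrols: Subcontrol[];
}
```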
The assessment screen
When you select the EU AI Act framework for a use case, VerifyWise creates an assessment containing all applicable controls. The assessment screen provides a comprehensive view of your compliance progress.
The assessment screen displays:
- Progress overview: Visual indicators showing overall completion percentage and control status breakdown
- Control categories: Controls organized by EU AI Act articles and requirements
- Status filters: Filter controls by status to focus on what needs attention
- Search: Find specific controls by keyword
Control categories
EU AI Act controls in VerifyWise are organized into categories aligned with the regulation's structure:
- Risk management system requirements
- Data and data governance requirements
- Technical documentation requirements
- Record-keeping requirements
- Transparency and provision of information
- Human oversight requirements
- Accuracy, robustness and cybersecurity
- Quality management system
Working with controls
Each control represents a specific requirement from the EU AI Act. Click on a control to open its detail view where you can:
- Review the requirement: Read the control description and understand what is required
- View subcontrols: See the specific actions needed to satisfy the control
- Update status: Change the control status as you make progress
- Assign responsibility: Set the owner, reviewer, and approver
- Set due date: Establish a target completion date
- Document implementation: Describe how you are addressing the requirement
- Assess risk: Evaluate the residual risk after implementation
Understanding subcontrols
Subcontrols break down each control into specific, actionable items. They help you understand exactly what needs to be done and track granular progress.
For each subcontrol, you can:
- Mark it as complete when addressed
- Add notes about your implementation approach
- Link evidence demonstrating compliance
The parent control tracks how many subcontrols are complete, giving you visibility into detailed progress.
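As a rough illustration of how that roll-up works, the sketch below computes a parent control's progress from its subcontrols, reusing the hypothetical Control type from the earlier sketch. The function name is an assumption, not a VerifyWise API.

```typescript
// Illustrative sketch only: rolling subcontrol completion up to the parent
// control, reusing the hypothetical Control type from the earlier sketch.
function subcontrolProgress(control: Control): string {
  const done = control.subcontrols.filter((s) => s.complete).length;
  return `${done}/${control.subcontrols.length} subcontrols complete`;
}

const example: Control = {
  title: "Human oversight measures",
  status: "In progress",
  subcontrols: [
    { title: "Define an override procedure", complete: true },
    { title: "Train operators on oversight duties", complete: false },
  ],
};

console.log(subcontrolProgress(example)); // "1/2 subcontrols complete"
```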
Completing a control
To fully complete a control, follow this workflow:
- Change status from "Waiting" to "In progress" when you begin work
- Work through each subcontrol, marking them complete as you go
- Document your implementation details in the control
- Link or upload evidence supporting your implementation
- Assess the residual risk level
- Change status to "Done" when all subcontrols are addressed
- Have the reviewer and approver sign off if required
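Conceptually, the last steps amount to a completion check: a control should only move to "Done" once its subcontrols are complete and the residual risk has been assessed. The sketch below expresses that rule using the hypothetical types from the earlier sketches; it is an assumption about the workflow, not VerifyWise's actual validation logic.

```typescript
// Illustrative sketch only: a completion check built on the hypothetical
// Control type from the earlier sketch. It assumes a control can move to
// "Done" only when every subcontrol is complete and residual risk is assessed.
function canMarkDone(control: Control): boolean {
  const allSubcontrolsComplete = control.subcontrols.every((s) => s.complete);
  const riskAssessed = control.riskReview !== undefined;
  return control.status === "In progress" && allSubcontrolsComplete && riskAssessed;
}
```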
Control assignments
Each control can have three types of assignments:
Owner
The person responsible for implementing the control and ensuring it is completed.
Reviewer
The person who reviews the implementation to ensure it meets requirements.
Approver
The person with authority to give final approval on the control.
Control risk assessment
After implementing a control, assess the residual risk to document whether your implementation adequately addresses the requirement:
- Acceptable risk: The control fully addresses the requirement with minimal residual risk
- Residual risk: Some risk remains but is documented, understood, and accepted
- Unacceptable risk: The implementation does not adequately address the requirement; further action is needed
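The sketch below shows how these outcomes might be acted on in code, reusing the hypothetical RiskReview type from the earlier sketch: only an unacceptable outcome sends the control back for further work. This mirrors the rule described above and is not VerifyWise's actual logic.

```typescript
// Illustrative sketch only: acting on a risk review outcome, reusing the
// hypothetical RiskReview type from the earlier sketch.
function needsFurtherAction(outcome: RiskReview): boolean {
  // Only an unacceptable outcome sends the control back for rework;
  // a residual outcome is documented, understood, and accepted.
  return outcome === "Unacceptable";
}

console.log(needsFurtherAction("Residual"));     // false
console.log(needsFurtherAction("Unacceptable")); // true
```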
Tracking your progress
VerifyWise provides multiple ways to monitor your EU AI Act compliance progress:
- Completion percentage: Overall progress across all controls
- Status breakdown: Number of controls in each status (Waiting, In progress, Done)
- Subcontrol progress: Detailed view of completed vs. pending subcontrols
- Overdue controls: Controls that have passed their due date
- Risk summary: Distribution of controls by risk assessment outcome
Use these metrics to identify bottlenecks, allocate resources, and report progress to stakeholders.
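If you export control data for your own reporting, the sketch below shows how these metrics could be derived from a list of controls, again using the hypothetical types from the earlier sketches. It is a hypothetical calculation, not VerifyWise's reporting code.

```typescript
// Illustrative sketch only: deriving the metrics listed above from a list of
// controls, using the hypothetical types from the earlier sketches.
function progressSummary(controls: Control[], today = new Date()) {
  const byStatus: Record<ControlStatus, number> = { Waiting: 0, "In progress": 0, Done: 0 };
  let overdue = 0;

  for (const c of controls) {
    byStatus[c.status] += 1;
    if (c.dueDate && c.status !== "Done" && c.dueDate.getTime() < today.getTime()) {
      overdue += 1;
    }
  }

  const completion =
    controls.length === 0 ? 0 : Math.round((byStatus.Done / controls.length) * 100);

  return { completion, byStatus, overdue };
}
```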
Linking evidence to controls
For each control, you can link evidence from your Evidence Hub to demonstrate compliance:
- Open the control detail view
- Navigate to the evidence section
- Select existing evidence from your Evidence Hub or upload new documents
- Add notes explaining how the evidence supports the control
Linked evidence creates an audit trail that demonstrates your compliance activities to auditors and regulators.
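As a final illustration, the sketch below models an evidence link as data attached to a subcontrol, with a note explaining its relevance. The EvidenceLink type and linkEvidence helper are hypothetical assumptions built on the earlier Subcontrol sketch, not part of VerifyWise's API.

```typescript
// Illustrative sketch only: attaching an Evidence Hub item to a subcontrol,
// reusing the hypothetical Subcontrol type from the earlier sketch.
interface EvidenceLink {
  evidenceId: string; // reference to a document in the Evidence Hub
  note: string;       // how this evidence supports the control
}

function linkEvidence(sub: Subcontrol, link: EvidenceLink): Subcontrol {
  return {
    ...sub,
    evidenceIds: [...(sub.evidenceIds ?? []), link.evidenceId],
    notes: sub.notes ? `${sub.notes}\n${link.note}` : link.note,
  };
}
```

Keeping a short note with each linked item helps an auditor see not just that evidence exists, but why it satisfies the requirement.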