Most EU AI Act compliance work falls apart in one place: the gap between knowing the obligations and turning them into repeatable internal rules.
Teams try to "be compliant" but never document how. The same debates resurface in every project. Evidence ends up scattered across Jira tickets, SharePoint folders, and someone's inbox. Then procurement asks for a policy and someone writes one at 2am.
This post gives you a minimal deployer policy pack for high-risk AI systems, anchored directly to EU AI Act articles. We then add the extra policies that show up repeatedly in four common deployment scenarios: employment, credit, critical infrastructure, and education.
Provider vs. deployer: know which hat you wear
The EU AI Act draws a clear line between two roles. Getting this wrong means building the wrong compliance program.
| Role | What it means in practice | EU AI Act reference |
|---|---|---|
| Provider | Builds an AI system (or has it built) and places it on the market or puts it into service under its own name or trademark | Article 3(3) |
| Deployer | Uses an AI system under its authority as part of a business or public function | Article 3(4) |
If you buy a hiring model from a vendor and run it in HR, you are the deployer. If you then white-label that model and sell it under your brand, you may also become a provider - depending on how you place it on the market.
Most enterprises are deployers. That's what this policy pack addresses.
When does "high-risk" apply to deployers?
Annex III lists the high-risk use cases most enterprises care about. If your AI system falls into one of these categories, deployer obligations under Article 26 kick in.
| Deployment type | Annex III reference | What triggers it |
|---|---|---|
| Critical infrastructure | Annex III(2) | Safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity |
| Education | Annex III(3) | Admissions decisions, grading, steering learning outcomes, test proctoring |
| Employment | Annex III(4) | Recruiting, filtering applications, evaluating candidates, promotions, terminations, performance monitoring |
| Credit and insurance | Annex III(5)(b) and (c) | Creditworthiness assessment, credit scoring, life and health insurance risk assessment and pricing |
If you're not sure whether your system qualifies, start with a classification assessment. The EU AI Act's risk categories are not optional interpretations - they're defined in the regulation.
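If your engineering team keeps an AI inventory, that first-pass triage can be expressed as a small lookup. Here's a minimal sketch in Python, using our own category keys and a hypothetical `UseCase` record - a triage aid, not a substitute for a proper classification assessment:

```python
from dataclasses import dataclass

# Illustrative first-pass mapping from deployment type to the Annex III categories
# covered in this post. This is a simplification for triage, not legal advice.
ANNEX_III_CATEGORIES = {
    "critical_infrastructure": "Annex III(2)",
    "education": "Annex III(3)",
    "employment": "Annex III(4)",
    "credit_and_insurance": "Annex III(5)(b)-(c)",
}

@dataclass
class UseCase:
    name: str
    deployment_type: str  # e.g. "employment", "credit_and_insurance"

def classify(use_case: UseCase) -> str | None:
    """Return the Annex III reference if the use case falls into a listed category."""
    return ANNEX_III_CATEGORIES.get(use_case.deployment_type)

print(classify(UseCase("CV screening model", "employment")))  # Annex III(4) -> Article 26 duties apply
```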
The minimal deployer policy pack
The EU AI Act rarely mandates "you must have a policy named X." What it does mandate is that deployers perform specific actions consistently, with clear ownership and documented evidence.
Article 26 is the backbone for deployers of high-risk systems. Here's a lean policy set that covers the core obligations with the smallest possible surface area:
| Policy | What it controls | EU AI Act anchor |
|---|---|---|
| High-risk AI deployment procedure | How teams approve a use case, confirm intended purpose, and ensure usage aligns with the provider's instructions for use | Article 26(1) |
| Human oversight and accountability procedure | Who has authority to oversee outputs, when humans must intervene, training and competence requirements | Article 26(2) |
| Input data governance procedure | Rules for relevance and representativeness of input data you control, plus monitoring for data drift | Article 26(4) |
| Monitoring and incident escalation procedure | Ongoing monitoring, internal escalation paths, suspension criteria, and how to notify providers and authorities when risk or serious incidents are detected | Article 26(5) |
| Log retention and access procedure | Which logs you retain, retention periods, access controls, and how to extract evidence for audits or incident investigations | Article 26(6) |
| Workforce transparency procedure | How you inform employees and worker representatives when high-risk AI is used in the workplace | Article 26(7) |
| AI literacy plan | Role-based training for staff who operate or interact with AI systems on your behalf | Article 4 |
Seven policies. That's the foundation.
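If you manage these policies in a GRC platform or a version-controlled repository, the pack is small enough to keep as a machine-readable registry. A sketch using a plain dictionary of our own design - only the article anchors come from the Act:

```python
# Illustrative machine-readable version of the seven-policy pack. The structure is
# our own convention; only the article anchors come from the EU AI Act.
DEPLOYER_POLICY_PACK = {
    "High-risk AI deployment procedure":            "Article 26(1)",
    "Human oversight and accountability procedure": "Article 26(2)",
    "Input data governance procedure":              "Article 26(4)",
    "Monitoring and incident escalation procedure": "Article 26(5)",
    "Log retention and access procedure":           "Article 26(6)",
    "Workforce transparency procedure":             "Article 26(7)",
    "AI literacy plan":                             "Article 4",
}
```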
Conditional add-ons: when you need more
Depending on your deployment, three additional policies may become mandatory:
| Policy | When required | EU AI Act anchor |
|---|---|---|
| Fundamental rights impact assessment (FRIA) procedure | Required for public bodies and certain private entities providing public services before deploying most high-risk systems. Also required for deployers of credit scoring and certain insurance systems | Article 27 |
| Transparency and user disclosure procedure | Required if your AI interacts directly with people, uses emotion recognition or biometric categorisation, generates deepfake-style synthetic media, or publishes AI-generated text on matters of public interest | Article 50 |
| Explanation handling procedure | Required if you use Annex III high-risk systems to make decisions with legal or similarly significant effects on individuals (except Annex III point 2 systems) | Article 86 |
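As a rough decision aid, the triggers in this table can be reduced to a few boolean checks. The sketch below deliberately simplifies the Article 27, 50, and 86 conditions; the flag names and the `Deployment` record are hypothetical:

```python
from dataclasses import dataclass

# Illustrative trigger check for the three conditional add-ons. The boolean flags
# deliberately simplify the Article 27, 50 and 86 conditions summarised above.
@dataclass
class Deployment:
    public_body_or_public_service: bool                 # Article 27 trigger
    credit_or_insurance_scoring: bool                   # Annex III(5)(b)/(c), also triggers Article 27
    interacts_with_people_or_generates_content: bool    # Article 50 trigger
    significant_decisions_about_individuals: bool       # Article 86 trigger (Annex III, except point 2)

def required_addons(d: Deployment) -> list[str]:
    addons = []
    if d.public_body_or_public_service or d.credit_or_insurance_scoring:
        addons.append("FRIA procedure (Article 27)")
    if d.interacts_with_people_or_generates_content:
        addons.append("Transparency and user disclosure procedure (Article 50)")
    if d.significant_decisions_about_individuals:
        addons.append("Explanation handling procedure (Article 86)")
    return addons
```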
Deployment-specific policy add-ons
The minimal pack covers baseline compliance. Real-world deployments in regulated domains need additional controls. Below are the add-ons that surface repeatedly in enterprise rollouts.
Employment deployments
Employment-related AI systems fall under Annex III(4). These deployments touch hiring, performance reviews, promotions, and terminations - areas with significant legal exposure and workforce sensitivity.
| Additional policy | Purpose | Built on |
|---|---|---|
| Employment decision governance procedure | Defines which HR decisions can incorporate AI outputs, human review thresholds, appeal paths, and bias monitoring cadence | Article 26 oversight and monitoring duties |
| Worker impact and consultation procedure | Operationalizes workplace transparency requirements specifically for HR contexts | Article 26(7) |
| FRIA procedure | Mandatory when the deployer is a public body or private entity providing public services | Article 27(1) |
Credit deployments
Credit scoring and creditworthiness assessment are explicitly listed in Annex III(5)(b). Article 27 specifically requires deployers of these systems to conduct fundamental rights impact assessments.
| Additional policy | Purpose | Built on |
|---|---|---|
| FRIA procedure | Mandatory before first use, with updates required when key system elements change | Article 27 |
| Adverse decision explanation procedure | Operationalizes how you provide meaningful explanations when AI-informed decisions significantly affect individuals | Article 86 |
| Monitoring and suspension runbook | Makes "stop using it" a concrete option with defined triggers, owners, and fallback procedures | Article 26(5) |
Critical infrastructure deployments
Critical infrastructure appears in Annex III(2). These deployments prioritize operational safety and system resilience above all else.
| Additional policy | Purpose | Built on |
|---|---|---|
| Operational safety and override procedure | Defines safe states, manual fallback procedures, who can override AI outputs, and response time requirements | Article 26 oversight and monitoring duties |
| Incident response integration procedure | Connects AI-specific incidents into existing operational incident management and escalation frameworks | Article 26(5) |
| Log retention and forensic readiness procedure | Ensures logs support post-incident analysis in environments with strict uptime and availability constraints | Article 26(6) |
Education deployments
Education and vocational training fall under Annex III(3). These systems often involve minors or students in power-imbalanced relationships, which demands stricter documentation and transparency.
| Additional policy | Purpose | Built on |
|---|---|---|
| Student assessment governance procedure | Rules for AI-assisted grading, proctoring outputs, human review thresholds, and dispute handling | Article 26 oversight and monitoring duties |
| Transparency and disclosure procedure | Ensures individuals understand when they're interacting with AI, including disclosures for synthetic content generation | Article 50 |
| FRIA procedure | Mandatory when the deployer is a public body or private entity providing public services | Article 27(1) |
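Pulled together, the four scenario tables reduce to a simple lookup you can wire into an AI intake workflow. A sketch using this post's own policy names - the scenario keys are our shorthand, not the regulation's wording:

```python
# Illustrative lookup from deployment scenario to the add-on policies in the tables above.
# Scenario keys and policy names follow this post, not the regulation's wording.
SCENARIO_ADDONS = {
    "employment": [
        "Employment decision governance procedure",
        "Worker impact and consultation procedure",
        "FRIA procedure",
    ],
    "credit": [
        "FRIA procedure",
        "Adverse decision explanation procedure",
        "Monitoring and suspension runbook",
    ],
    "critical_infrastructure": [
        "Operational safety and override procedure",
        "Incident response integration procedure",
        "Log retention and forensic readiness procedure",
    ],
    "education": [
        "Student assessment governance procedure",
        "Transparency and disclosure procedure",
        "FRIA procedure",
    ],
}
```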
From policy to practice
Policies without implementation are just PDFs collecting dust. Each policy needs:
- Clear ownership: Who maintains it, who approves changes
- Evidence requirements: What documentation proves compliance
- Review cadence: How often you revisit and update
- Integration points: How it connects to your existing workflows
The EU AI Act gives you the legal anchors. Your internal systems - GRC platforms, document management, or purpose-built AI governance tooling - supply the workflow, versioning, approvals, and audit trail.
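Concretely, each policy becomes a small structured record wherever you store it. A minimal sketch of such a record, with hypothetical field names that mirror the four bullets above:

```python
from dataclasses import dataclass, field

# Illustrative record for a single policy in a GRC tool or repository. The field names
# mirror the four bullets above; they are our convention, not an EU AI Act requirement.
@dataclass
class PolicyRecord:
    name: str
    anchor: str                                   # EU AI Act article the policy maps to
    owner: str                                    # maintains the policy, approves changes
    evidence: list[str] = field(default_factory=list)            # artefacts proving compliance
    review_cadence_months: int = 12               # how often it is revisited
    integration_points: list[str] = field(default_factory=list)  # connected workflows and systems

log_policy = PolicyRecord(
    name="Log retention and access procedure",
    anchor="Article 26(6)",
    owner="IT security lead",
    evidence=["retention configuration export", "quarterly access review"],
    integration_points=["SIEM", "incident management"],
)
```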
Start building your deployer compliance program
If you're deploying high-risk AI systems in the EU, the clock is ticking. The minimal policy pack above gives you a concrete starting point. The deployment-specific add-ons help you tailor compliance to your actual use cases.
Need help turning these policies into operational workflows? Start with VerifyWise to manage your AI governance program, or contact our team to discuss your specific deployment scenarios.