Compliance
Dec 23, 2025
8 min read

The EU AI Act deployer policy pack you actually need

Stop scrambling for policies at 2am. Here's the minimal deployer policy pack for high-risk AI systems under the EU AI Act, plus deployment-specific add-ons for employment, credit, critical infrastructure, and education.

Most EU AI Act compliance work falls apart in one place: the gap between knowing the obligations and turning them into repeatable internal rules.

Teams try to "be compliant" but never document how. The same debates resurface in every project. Evidence ends up scattered across Jira tickets, SharePoint folders, and someone's inbox. Then procurement asks for a policy and someone writes one at 2am.

This post gives you a minimal deployer policy pack for high-risk AI systems, anchored directly to EU AI Act articles. We then add the extra policies that show up repeatedly in four common deployment scenarios: employment, credit, critical infrastructure, and education.

Provider vs. deployer: know which hat you wear

The EU AI Act draws a clear line between two roles. Getting this wrong means building the wrong compliance program.

| Role | What it means in practice | EU AI Act reference |
| --- | --- | --- |
| Provider | Builds an AI system (or has it built) and places it on the market or puts it into service under its own name or trademark | Article 3(3) |
| Deployer | Uses an AI system under its authority as part of a business or public function | Article 3(4) |

If you buy a hiring model from a vendor and run it in HR, you are the deployer. If you then white-label that model and sell it under your brand, you also take on provider obligations: Article 25 treats anyone who puts their name or trademark on a high-risk AI system already on the market as a provider.

Most enterprises are deployers. That's what this policy pack addresses.

When does "high-risk" apply to deployers?

Annex III lists the high-risk use cases most enterprises care about. If your AI system falls into one of these categories, deployer obligations under Article 26 kick in.

| Deployment type | Annex III reference | What triggers it |
| --- | --- | --- |
| Critical infrastructure | Annex III(2) | Safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating, and electricity |
| Education | Annex III(3) | Admissions decisions, grading, steering learning outcomes, test proctoring |
| Employment | Annex III(4) | Recruiting, filtering applications, evaluating candidates, promotions, terminations, performance monitoring |
| Credit and insurance | Annex III(5)(b) and (c) | Creditworthiness assessment, credit scoring, life and health insurance risk assessment and pricing |

If you're not sure whether your system qualifies, start with a classification assessment. The EU AI Act's risk categories are not optional interpretations - they're defined in the regulation.
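To make that assessment concrete, here is a minimal first-pass sketch in Python. The category labels and keyword triggers are illustrative assumptions drawn from the table above, not an official taxonomy, and no keyword match substitutes for legal review:

```python
# Hypothetical first-pass check: map a use-case description to the Annex III
# categories above. Keyword matching forces teams to record an explicit
# classification decision; it does not replace legal review.

ANNEX_III_TRIGGERS = {
    "Annex III(2) critical infrastructure": [
        "safety component", "road traffic", "water supply", "electricity grid",
    ],
    "Annex III(3) education": [
        "admissions", "grading", "proctoring",
    ],
    "Annex III(4) employment": [
        "recruiting", "cv screening", "promotion", "termination", "performance monitoring",
    ],
    "Annex III(5)(b)/(c) credit and insurance": [
        "credit scoring", "creditworthiness", "insurance pricing",
    ],
}

def classify_use_case(description: str) -> list[str]:
    """Return every Annex III category whose trigger keywords appear in the description."""
    text = description.lower()
    return [
        category
        for category, keywords in ANNEX_III_TRIGGERS.items()
        if any(keyword in text for keyword in keywords)
    ]

print(classify_use_case("Vendor model for CV screening and promotion decisions"))
# ['Annex III(4) employment']
```

If the function returns nothing, document why the system falls outside Annex III, then re-run the check whenever the scope of the deployment changes.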

The minimal deployer policy pack

The EU AI Act rarely mandates "you must have a policy named X." What it does mandate is that deployers perform specific actions consistently, with clear ownership and documented evidence.

Article 26 is the backbone for deployers of high-risk systems. Here's a lean policy set that covers the core obligations with the smallest possible surface area:

| Policy | What it controls | EU AI Act anchor |
| --- | --- | --- |
| High-risk AI deployment procedure | How teams approve a use case, confirm intended purpose, and ensure usage aligns with the provider's instructions for use | Article 26(1) |
| Human oversight and accountability procedure | Who has authority to oversee outputs, when humans must intervene, training and competence requirements | Article 26(2) |
| Input data governance procedure | Rules for relevance and representativeness of input data you control, plus monitoring for data drift | Article 26(4) |
| Monitoring and incident escalation procedure | Ongoing monitoring, internal escalation paths, suspension criteria, and how to notify providers and authorities when risk or serious incidents are detected | Article 26(5) |
| Log retention and access procedure | Which logs you retain, retention periods, access controls, and how to extract evidence for audits or incident investigations | Article 26(6) |
| Workforce transparency procedure | How you inform employees and worker representatives when high-risk AI is used in the workplace | Article 26(7) |
| AI literacy plan | Role-based training for staff who operate or interact with AI systems on your behalf | Article 4 |

Seven policies. That's the foundation.
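If you manage these policies in a repo or a GRC tool, a small policy-as-code registry keeps names, article anchors, owners, and evidence expectations in one reviewable place. A minimal sketch; the field names and owner roles are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    name: str
    anchor: str    # EU AI Act article the policy operationalizes
    owner: str     # role accountable for the policy (illustrative)
    evidence: str  # artifact that proves the policy is followed

MINIMAL_PACK = [
    Policy("High-risk AI deployment procedure", "Article 26(1)", "AI governance lead", "Approved use-case records"),
    Policy("Human oversight and accountability procedure", "Article 26(2)", "Business owner", "Oversight assignments, training logs"),
    Policy("Input data governance procedure", "Article 26(4)", "Data owner", "Data relevance and drift reviews"),
    Policy("Monitoring and incident escalation procedure", "Article 26(5)", "Operations lead", "Monitoring reports, incident tickets"),
    Policy("Log retention and access procedure", "Article 26(6)", "IT/security lead", "Retention config, access logs"),
    Policy("Workforce transparency procedure", "Article 26(7)", "HR lead", "Worker notification records"),
    Policy("AI literacy plan", "Article 4", "L&D lead", "Role-based training completions"),
]

for policy in MINIMAL_PACK:
    print(f"{policy.anchor}: {policy.name} (owner: {policy.owner})")
```

Versioning this file next to your other controls gives you the approval history and audit trail the rest of this post keeps asking for.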

Conditional add-ons: when you need more

Depending on your deployment, three additional policies may become mandatory:

| Policy | When required | EU AI Act anchor |
| --- | --- | --- |
| Fundamental rights impact assessment (FRIA) procedure | Required for public bodies and certain private entities providing public services before deploying most high-risk systems; also required for deployers of credit scoring and certain insurance systems | Article 27 |
| Transparency and user disclosure procedure | Required if your AI interacts directly with people, uses emotion recognition or biometric categorisation, generates deepfake-style synthetic media, or publishes AI-generated text on matters of public interest | Article 50 |
| Explanation handling procedure | Required if you use Annex III high-risk systems to make decisions with legal or similarly significant effects on individuals (except Annex III point 2 systems) | Article 86 |
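Because these policies are conditional, it helps to encode the trigger questions once rather than re-debate them per project. A hedged sketch; the DeployerProfile fields are illustrative assumptions, and real applicability calls belong with counsel:

```python
from dataclasses import dataclass

@dataclass
class DeployerProfile:
    is_public_body: bool
    provides_public_services: bool
    deploys_credit_or_insurance_scoring: bool
    ai_interacts_with_people: bool
    makes_legal_or_significant_decisions: bool
    is_annex_iii_point_2: bool  # critical infrastructure carve-out in Article 86

def conditional_policies(profile: DeployerProfile) -> list[str]:
    """Return the conditional policies the table above suggests for this profile."""
    required = []
    if (profile.is_public_body or profile.provides_public_services
            or profile.deploys_credit_or_insurance_scoring):
        required.append("FRIA procedure (Article 27)")
    if profile.ai_interacts_with_people:
        required.append("Transparency and user disclosure procedure (Article 50)")
    if profile.makes_legal_or_significant_decisions and not profile.is_annex_iii_point_2:
        required.append("Explanation handling procedure (Article 86)")
    return required

bank = DeployerProfile(
    is_public_body=False, provides_public_services=False,
    deploys_credit_or_insurance_scoring=True, ai_interacts_with_people=True,
    makes_legal_or_significant_decisions=True, is_annex_iii_point_2=False,
)
print(conditional_policies(bank))  # all three conditional policies apply
```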

Deployment-specific policy add-ons

The minimal pack covers baseline compliance. Real-world deployments in regulated domains need additional controls. Below are the add-ons that surface repeatedly in enterprise rollouts.

Employment deployments

Employment-related AI systems fall under Annex III(4). These deployments touch hiring, performance reviews, promotions, and terminations - areas with significant legal exposure and workforce sensitivity.

| Additional policy | Purpose | Built on |
| --- | --- | --- |
| Employment decision governance procedure | Defines which HR decisions can incorporate AI outputs, human review thresholds, appeal paths, and bias monitoring cadence | Article 26 oversight and monitoring duties |
| Worker impact and consultation procedure | Operationalizes workplace transparency requirements specifically for HR contexts | Article 26(7) |
| FRIA procedure | Mandatory when the deployer is a public body or private entity providing public services | Article 27(1) |

Credit deployments

Credit scoring and creditworthiness assessment are explicitly listed in Annex III(5)(b). Article 27 specifically requires deployers of these systems to conduct fundamental rights impact assessments.

| Additional policy | Purpose | Built on |
| --- | --- | --- |
| FRIA procedure | Mandatory before first use, with updates required when key system elements change | Article 27 |
| Adverse decision explanation procedure | Operationalizes how you provide meaningful explanations when AI-informed decisions significantly affect individuals | Article 86 |
| Monitoring and suspension runbook | Makes "stop using it" a concrete option with defined triggers, owners, and fallback procedures | Article 26(5) |
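To make "stop using it" operational rather than aspirational, the runbook can define machine-checkable triggers with a named escalation path and fallback. A sketch only: the trigger names and threshold values below are invented placeholders, and real triggers should come from the provider's instructions for use and your own risk assessment:

```python
# Hypothetical suspension triggers for a credit-scoring deployment.
# Threshold values are placeholders, not recommendations.
SUSPENSION_TRIGGERS = {
    "approval_rate_drift": 0.10,  # absolute shift vs. the validated baseline
    "human_override_rate": 0.25,  # share of AI outputs reviewers reverse
    "open_serious_incidents": 1,  # unresolved serious-incident tickets
}

def tripped_triggers(metrics: dict[str, float]) -> list[str]:
    """Return any tripped triggers; a non-empty result means escalate per Article 26(5)."""
    return [
        name for name, limit in SUSPENSION_TRIGGERS.items()
        if metrics.get(name, 0.0) >= limit
    ]

tripped = tripped_triggers({"approval_rate_drift": 0.12, "human_override_rate": 0.08})
if tripped:
    print(f"Suspend scoring, fall back to manual underwriting, notify the model owner: {tripped}")
```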

Critical infrastructure deployments

Critical infrastructure appears in Annex III(2). These deployments prioritize operational safety and system resilience above all else.

| Additional policy | Purpose | Built on |
| --- | --- | --- |
| Operational safety and override procedure | Defines safe states, manual fallback procedures, who can override AI outputs, and response time requirements | Article 26 oversight and monitoring duties |
| Incident response integration procedure | Connects AI-specific incidents into existing operational incident management and escalation frameworks | Article 26(5) |
| Log retention and forensic readiness procedure | Ensures logs support post-incident analysis in environments with strict uptime and availability constraints | Article 26(6) |

Education deployments

Education and vocational training fall under Annex III(3). These systems often involve minors or students in power-imbalanced relationships, which demands stricter documentation and transparency.

| Additional policy | Purpose | Built on |
| --- | --- | --- |
| Student assessment governance procedure | Rules for AI-assisted grading, proctoring outputs, human review thresholds, and dispute handling | Article 26 oversight and monitoring duties |
| Transparency and disclosure procedure | Ensures individuals understand when they're interacting with AI, including disclosures for synthetic content generation | Article 50 |
| FRIA procedure | Mandatory when the deployer is a public body or private entity providing public services | Article 27(1) |

From policy to practice

Policies without implementation are just PDFs collecting dust. Each policy needs four things (see the sketch after this list):

  • Clear ownership: Who maintains it, who approves changes
  • Evidence requirements: What documentation proves compliance
  • Review cadence: How often you revisit and update
  • Integration points: How it connects to your existing workflows
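One way to keep the review cadence honest is to compute overdue policies from the registry itself. A minimal sketch assuming each entry records a last-review date and a review interval; both field choices are hypothetical:

```python
from datetime import date, timedelta

# Hypothetical registry rows: (policy name, last reviewed, review interval in days)
REVIEWS = [
    ("High-risk AI deployment procedure", date(2025, 1, 15), 180),
    ("Monitoring and incident escalation procedure", date(2025, 6, 1), 90),
    ("AI literacy plan", date(2024, 11, 20), 365),
]

def overdue_reviews(today: date) -> list[str]:
    """Return policies whose review window has lapsed as of `today`."""
    return [
        name for name, last_reviewed, interval_days in REVIEWS
        if today - last_reviewed > timedelta(days=interval_days)
    ]

for name in overdue_reviews(date.today()):
    print(f"Review overdue: {name}")
```

Wire the output into whatever already nags your team (ticketing, chat alerts) so reviews happen on schedule instead of at audit time.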

The EU AI Act gives you the legal anchors. Your internal systems - GRC platforms, document management, or purpose-built AI governance tooling - supply the workflow, versioning, approvals, and audit trail.

Start building your deployer compliance program

If you're deploying high-risk AI systems in the EU, the clock is ticking: most Annex III deployer obligations apply from August 2, 2026. The minimal policy pack above gives you a concrete starting point. The deployment-specific add-ons help you tailor compliance to your actual use cases.

Need help turning these policies into operational workflows? Start with VerifyWise to manage your AI governance program, or contact our team to discuss your specific deployment scenarios.

