AI Risk Management Policy

Defines how AI risks are identified, assessed, scored, mitigated, monitored, and escalated across the organization.

1. Purpose

This policy establishes a systematic framework for managing risks associated with AI systems throughout their lifecycle. It ensures that AI initiatives align with the organization's risk appetite and comply with applicable regulatory requirements and standards, including the EU AI Act, ISO/IEC 42001, and the NIST AI Risk Management Framework.

2. Scope

This policy applies to:

  • All AI and machine learning systems in development, testing, or production.
  • All third-party AI services and vendor solutions.
  • All generative AI applications and large language models.
  • All AI-powered automation and decision-support systems.
  • All personnel involved in AI design, development, deployment, or oversight.

3. Risk taxonomy

AI risks are categorized across six dimensions. Each identified risk must be tagged with its primary dimension:

| Dimension | Description | Examples |
|---|---|---|
| Technical | Risks from model behavior and performance | Hallucinations, performance degradation, adversarial attacks, data poisoning, model drift |
| Operational | Risks from deployment and operations | Integration failures, inadequate monitoring, deployment errors, capacity planning |
| Ethical | Risks from societal and individual impact | Bias and discrimination, fairness violations, lack of transparency, unintended social harm |
| Compliance | Risks from regulatory and legal obligations | Regulatory non-compliance, privacy violations, inadequate documentation, missed reporting deadlines |
| Security | Risks from adversarial and unauthorized activity | Prompt injection, model theft, data exfiltration, supply chain compromise, unauthorized access |
| Reputational | Risks from stakeholder perception | Public trust erosion, negative media coverage, customer backlash, partner concerns |

4. Risk classification

All AI systems must be classified according to the EU AI Act risk tiers before development or procurement:

| Risk tier | Criteria | Governance requirements |
|---|---|---|
| Unacceptable | Prohibited under EU AI Act Art. 5 (social scoring, subliminal manipulation, exploitation of vulnerabilities, real-time biometric ID in public spaces without authorization) | Prohibited. Must not be developed or deployed. |
| High | EU AI Act Annex III categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice. Also: systems affecting health, safety, or fundamental rights. | Full risk assessment, conformity assessment, FRIA, post-market monitoring, incident response plan, CE marking. |
| Limited | Systems with transparency obligations: chatbots, deepfakes, emotion recognition, biometric categorization. | Transparency disclosures, user notification. |
| Minimal | Low-impact applications with negligible risk. | Registration in AI inventory, basic documentation. |
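
As an illustration, the tier-to-requirement mapping above could be encoded in internal tooling roughly as follows. This is a minimal sketch, not part of the policy itself; the names (`RiskTier`, `GOVERNANCE_REQUIREMENTS`, `requirements_for`) are hypothetical.

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers as classified in section 4 (illustrative encoding)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Governance requirements summarized from the table above.
GOVERNANCE_REQUIREMENTS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["Prohibited: must not be developed or deployed"],
    RiskTier.HIGH: [
        "Full risk assessment", "Conformity assessment", "FRIA",
        "Post-market monitoring", "Incident response plan", "CE marking",
    ],
    RiskTier.LIMITED: ["Transparency disclosures", "User notification"],
    RiskTier.MINIMAL: ["Registration in AI inventory", "Basic documentation"],
}

def requirements_for(tier: RiskTier) -> list[str]:
    """Return the governance requirements that apply to a classified system."""
    return GOVERNANCE_REQUIREMENTS[tier]
```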

5. Risk assessment process

Risk assessments follow a five-phase process aligned with the NIST AI RMF:

Phase 1: System identification (MAP)

  • Document the AI system's purpose, intended users, and operating context.
  • Classify the system per section 4.
  • Identify stakeholders who could be affected by the system.
  • Register the system in the AI inventory.

Phase 2: Risk identification (MAP)

  • Identify risks across all six dimensions using the taxonomy in section 3.
  • Use risk identification questionnaires, threat modeling, and expert review.
  • Consider risks at each lifecycle stage (design, data, training, deployment, operation, retirement).
  • Document each risk in the risk register with description, dimension, and affected stakeholders.

Phase 3: Risk assessment and scoring (MEASURE)

Each risk is scored using a 5x5 likelihood-impact matrix:

| Score | Likelihood | Impact |
|---|---|---|
| 1 | Rare (less than 5% probability) | Negligible (no measurable effect) |
| 2 | Unlikely (5-20%) | Minor (limited, recoverable effect) |
| 3 | Possible (20-50%) | Moderate (noticeable effect, manageable) |
| 4 | Likely (50-80%) | Major (significant harm, difficult to recover) |
| 5 | Almost certain (more than 80%) | Severe (catastrophic harm, regulatory action, fundamental rights violation) |

Risk score = Likelihood x Impact (range 1-25)

| Score range | Risk level | Required action |
|---|---|---|
| 1-4 | Low | Accept with documentation. Monitor as part of routine reviews. |
| 5-9 | Medium | Mitigate. Implement controls and track remediation. Model owner approval. |
| 10-15 | High | Mitigate urgently. AI Governance Lead review. Cannot deploy without approved mitigation plan. |
| 16-25 | Critical | Escalate to AI Governance Committee. System may not proceed without Committee approval and verified mitigations. |
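
To make the scoring arithmetic concrete, here is a minimal sketch of the score and band computation. The function names are illustrative, not a prescribed implementation.

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Risk score = likelihood x impact, each on the 1-5 scales above."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be integers from 1 to 5")
    return likelihood * impact

def risk_level(score: int) -> str:
    """Map a 1-25 score onto the action bands defined above."""
    if score <= 4:
        return "Low"
    if score <= 9:
        return "Medium"
    if score <= 15:
        return "High"
    return "Critical"
```

For example, a risk that is likely (4) with moderate impact (3) scores 12 and lands in the High band, so the system cannot be deployed without an approved mitigation plan.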

Phase 4: Mitigation planning (MANAGE)

For each risk scored Medium or above, select a treatment strategy:

| Strategy | When to use | Example |
|---|---|---|
| Avoid | Risk is unacceptable and no mitigation can reduce it sufficiently | Do not deploy the system in this context |
| Mitigate | Risk can be reduced to acceptable levels through controls | Add bias testing, implement guardrails, add human review |
| Transfer | Risk can be shared with a third party | Insurance, contractual liability allocation with vendor |
| Accept | Residual risk is within appetite after mitigation | Document acceptance with rationale and review date |

Each mitigation must have an owner, a deadline, and a defined residual risk score after implementation.
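
A sketch of how a mitigation record could enforce those three required attributes; this is a hypothetical structure, not one mandated by this policy.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Mitigation:
    """One mitigation from a treatment plan; all fields are mandatory."""
    risk_id: str
    owner: str            # accountable individual
    deadline: date        # committed completion date
    residual_score: int   # expected 1-25 risk score after implementation

    def is_overdue(self, today: date) -> bool:
        """Overdue mitigations escalate to the AI Governance Lead (section 9)."""
        return today > self.deadline
```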

Phase 5: Continuous monitoring (MANAGE)

  • Monitor risk indicators in production (performance metrics, drift alerts, incident reports).
  • Review high-risk systems quarterly, medium-risk semi-annually, low-risk annually (see the cadence sketch after this list).
  • Re-assess when material changes occur (model update, data change, context change, regulatory change).
  • Track risk trends across the portfolio to identify systemic issues.
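
The review cadence translates directly into a scheduling rule. A minimal sketch, assuming calendar-day intervals; the interval constants are illustrative approximations of quarterly, semi-annual, and annual reviews.

```python
from datetime import date, timedelta

# Review intervals from this policy: quarterly / semi-annual / annual.
REVIEW_INTERVAL_DAYS = {"High": 91, "Medium": 182, "Low": 365}

def next_review(last_review: date, risk_level: str) -> date:
    """Next scheduled review for a system at the given risk level.

    Critical risks are escalated rather than put on a cadence (section 9),
    so they are deliberately absent from the interval table.
    """
    return last_review + timedelta(days=REVIEW_INTERVAL_DAYS[risk_level])
```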

6. Assessment triggers

Risk assessments are required at the following points:

  • Initial: During project intake before development begins.
  • Pre-deployment: After development and testing, before production release.
  • Change-triggered: When significant changes to model, data, or context occur.
  • Incident-triggered: Following any AI-related incident or near-miss.
  • Periodic: Quarterly for high-risk, semi-annually for medium, annually for low.

7. Risk register

All identified risks are recorded in the AI risk register with the following fields:

  • Risk ID and description.
  • Risk dimension (from taxonomy).
  • Associated AI system and model owner.
  • Likelihood score, impact score, and risk score.
  • Treatment strategy and specific mitigations.
  • Mitigation owner and deadline.
  • Residual risk score after mitigation.
  • Status (open, in treatment, accepted, closed).
  • Last review date and next review date.
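
A structural sketch of one register entry using the fields above. The class and enum names are hypothetical; any tracking tool that captures equivalent fields satisfies this section.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Dimension(Enum):
    """Primary risk dimension from the taxonomy in section 3."""
    TECHNICAL = "technical"
    OPERATIONAL = "operational"
    ETHICAL = "ethical"
    COMPLIANCE = "compliance"
    SECURITY = "security"
    REPUTATIONAL = "reputational"

class Status(Enum):
    OPEN = "open"
    IN_TREATMENT = "in treatment"
    ACCEPTED = "accepted"
    CLOSED = "closed"

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    dimension: Dimension
    ai_system: str
    model_owner: str
    likelihood: int               # 1-5
    impact: int                   # 1-5
    treatment_strategy: str       # avoid / mitigate / transfer / accept
    mitigations: list[str]
    mitigation_owner: str
    mitigation_deadline: date
    residual_score: int           # 1-25, after mitigation
    status: Status
    last_review: date
    next_review: date

    @property
    def risk_score(self) -> int:
        """Derived field: likelihood x impact (section 5, phase 3)."""
        return self.likelihood * self.impact
```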

8. Third-party AI risk

Third-party AI systems carry additional risks. Assessments must be completed before vendor activation and refreshed annually or whenever the vendor makes material changes, and must cover at least the following:

  • Vendor security posture and certifications (SOC 2, ISO 27001).
  • Training data governance practices and transparency.
  • Model update frequency and change notification process.
  • Data residency and sub-processor inventory.
  • Incident reporting SLAs and escalation paths.
  • Right to audit and contractual liability allocation.
  • Vendor lock-in risk and data portability.
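
The refresh rule above can be checked mechanically. A minimal sketch, assuming a 365-day annual window; the function name is hypothetical.

```python
from datetime import date, timedelta

def vendor_reassessment_due(last_assessment: date, material_change: bool,
                            today: date) -> bool:
    """True when a third-party assessment must be refreshed: annually,
    or immediately after the vendor makes a material change."""
    return material_change or today >= last_assessment + timedelta(days=365)
```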

9. Escalation

  • Critical risks (16-25): Escalate immediately to AI Governance Committee. System deployment is blocked pending Committee decision.
  • High risks (10-15): Escalate to AI Governance Lead within 48 hours. Deployment requires approved mitigation plan.
  • Overdue mitigations: If a mitigation passes its deadline without completion, escalate to AI Governance Lead.
  • Systemic risks: If multiple systems share the same risk pattern, escalate to Committee for portfolio-level response.
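
These score thresholds amount to a simple routing rule. A sketch follows; the return strings paraphrase the policy, and the function itself is illustrative.

```python
def escalation_path(score: int) -> str | None:
    """Route a scored risk per the thresholds above.

    Returns None when no score-based escalation applies; Low and Medium
    risks follow the standard treatment workflow in section 5 instead.
    """
    if score >= 16:
        return "AI Governance Committee: immediate; deployment blocked"
    if score >= 10:
        return "AI Governance Lead: within 48 hours; approved mitigation plan required"
    return None
```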

10. Roles and responsibilities

| Role | Risk management responsibilities |
|---|---|
| Model Owner | Conducts risk assessments, maintains risk register entries, implements mitigations, monitors residual risk. |
| AI Governance Lead | Reviews assessments, tracks portfolio risk posture, escalates to Committee, coordinates risk reporting. |
| AI Governance Committee | Sets risk appetite, approves critical risk acceptance, resolves escalations, reviews quarterly risk report. |
| Security | Assesses security-dimension risks, conducts threat modeling, reviews vendor security posture. |
| Legal / Compliance | Assesses compliance-dimension risks, advises on regulatory obligations, reviews third-party contracts. |

11. Regulatory alignment

  • EU AI Act: Article 9 (risk management system for high-risk AI), Article 5 (prohibited practices), Article 6 and Annex III (high-risk classification).
  • ISO/IEC 42001: Clause 6.1 (actions to address risks and opportunities), Annex B (AI risk sources).
  • NIST AI RMF: MAP (context and risk identification), MEASURE (risk analysis), MANAGE (risk response and monitoring).
  • ISO 31000: Risk management principles and process.

12. Review

This policy is reviewed annually. The risk register is reviewed quarterly by the AI Governance Lead and presented to the AI Governance Committee. Material changes to risk methodology require Committee approval.

Document control

| Field | Value |
|---|---|
| Policy owner | [AI Governance Lead] |
| Approved by | [AI Governance Committee] |
| Effective date | [Date] |
| Next review date | [Date + 12 months] |
| Version | 1.0 |
| Classification | Internal |
