1. Purpose
This policy establishes a systematic framework for managing risks associated with AI systems throughout their lifecycle. It ensures that AI initiatives align with the organization's risk appetite and comply with applicable regulatory and standards requirements, including the EU AI Act, ISO/IEC 42001, and the NIST AI Risk Management Framework.
2. Scope
This policy applies to:
- All AI and machine learning systems in development, testing, or production.
- All third-party AI services and vendor solutions.
- All generative AI applications and large language models.
- All AI-powered automation and decision-support systems.
- All personnel involved in AI design, development, deployment, or oversight.
3. Risk taxonomy
AI risks are categorized across six dimensions. Each risk identified must be tagged with its primary dimension:
| Dimension | Description | Examples |
|---|---|---|
| Technical | Risks from model behavior and performance | Hallucinations, performance degradation, adversarial attacks, data poisoning, model drift |
| Operational | Risks from deployment and operations | Integration failures, inadequate monitoring, deployment errors, capacity planning |
| Ethical | Risks from societal and individual impact | Bias and discrimination, fairness violations, lack of transparency, unintended social harm |
| Compliance | Risks from regulatory and legal obligations | Regulatory non-compliance, privacy violations, inadequate documentation, missed reporting deadlines |
| Security | Risks from adversarial and unauthorized activity | Prompt injection, model theft, data exfiltration, supply chain compromise, unauthorized access |
| Reputational | Risks from stakeholder perception | Public trust erosion, negative media coverage, customer backlash, partner concerns |
4. Risk classification
All AI systems must be classified according to the EU AI Act risk tiers before development or procurement:
| Risk tier | Criteria | Governance requirements |
|---|---|---|
| Unacceptable | Prohibited under EU AI Act Art. 5 (social scoring, subliminal manipulation, exploitation of vulnerabilities, real-time biometric ID in public spaces without authorization) | Prohibited. Must not be developed or deployed. |
| High | EU AI Act Annex III categories: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice. Also: systems affecting health, safety, or fundamental rights. | Full risk assessment, conformity assessment, fundamental rights impact assessment (FRIA), post-market monitoring, incident response plan, CE marking. |
| Limited | Systems with transparency obligations: chatbots, deepfakes, emotion recognition, biometric categorization. | Transparency disclosures, user notification. |
| Minimal | Low-impact applications with negligible risk. | Registration in AI inventory, basic documentation. |
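The tier-to-requirements mapping above can be sketched as a simple lookup (a minimal illustration; the dictionary and function names are ours, and the action strings summarize the table rather than the regulation itself):

```python
# Governance requirements per risk tier, summarized from the
# classification table in section 4 (names are illustrative).
TIER_REQUIREMENTS = {
    "unacceptable": ["prohibited - must not be developed or deployed"],
    "high": ["full risk assessment", "conformity assessment", "FRIA",
             "post-market monitoring", "incident response plan", "CE marking"],
    "limited": ["transparency disclosures", "user notification"],
    "minimal": ["registration in AI inventory", "basic documentation"],
}

def required_actions(tier: str) -> list[str]:
    """Return the governance actions required for a given risk tier."""
    return TIER_REQUIREMENTS[tier.lower()]
```

A lookup like this can back an intake form so that classifying a system immediately surfaces its governance obligations.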
5. Risk assessment process
Risk assessments follow a five-phase process aligned with the NIST AI RMF:
Phase 1: System identification (MAP)
- Document the AI system's purpose, intended users, and operating context.
- Classify the system per section 4.
- Identify stakeholders who could be affected by the system.
- Register the system in the AI inventory.
Phase 2: Risk identification (MAP)
- Identify risks across all six dimensions using the taxonomy in section 3.
- Use risk identification questionnaires, threat modeling, and expert review.
- Consider risks at each lifecycle stage (design, data, training, deployment, operation, retirement).
- Document each risk in the risk register with description, dimension, and affected stakeholders.
Phase 3: Risk assessment and scoring (MEASURE)
Each risk is scored using a 5×5 likelihood-impact matrix:
| Score | Likelihood | Impact |
|---|---|---|
| 1 | Rare (less than 5% probability) | Negligible (no measurable effect) |
| 2 | Unlikely (5-20%) | Minor (limited, recoverable effect) |
| 3 | Possible (20-50%) | Moderate (noticeable effect, manageable) |
| 4 | Likely (50-80%) | Major (significant harm, difficult to recover) |
| 5 | Almost certain (more than 80%) | Severe (catastrophic harm, regulatory action, fundamental rights violation) |
Risk score = Likelihood × Impact (range 1-25)
| Score range | Risk level | Required action |
|---|---|---|
| 1-4 | Low | Accept with documentation. Monitor as part of routine reviews. |
| 5-9 | Medium | Mitigate. Implement controls and track remediation. Model owner approval. |
| 10-15 | High | Mitigate urgently. AI Governance Lead review. Cannot deploy without approved mitigation plan. |
| 16-25 | Critical | Escalate to AI Governance Committee. System may not proceed without Committee approval and verified mitigations. |
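The scoring formula and the four bands above translate directly into a small helper (a minimal sketch; the function names are ours, not defined by this policy):

```python
def risk_score(likelihood: int, impact: int) -> int:
    """Multiply a 1-5 likelihood by a 1-5 impact, yielding a 1-25 score."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be integers from 1 to 5")
    return likelihood * impact

def risk_level(score: int) -> str:
    """Map a 1-25 risk score onto the policy's four bands (section 5, phase 3)."""
    if score <= 4:
        return "Low"
    if score <= 9:
        return "Medium"
    if score <= 15:
        return "High"
    return "Critical"
```

For example, a risk judged "Likely" (4) with "Moderate" impact (3) scores 12 and lands in the High band, so it cannot be deployed without an approved mitigation plan.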
Phase 4: Mitigation planning (MANAGE)
For each risk scored Medium or above, select a treatment strategy:
| Strategy | When to use | Example |
|---|---|---|
| Avoid | Risk is unacceptable and no mitigation can reduce it sufficiently | Do not deploy the system in this context |
| Mitigate | Risk can be reduced to acceptable levels through controls | Add bias testing, implement guardrails, add human review |
| Transfer | Risk can be shared with a third party | Insurance, contractual liability allocation with vendor |
| Accept | Residual risk is within appetite after mitigation | Document acceptance with rationale and review date |
Each mitigation must have an owner, a deadline, and a defined residual risk score after implementation.
Phase 5: Continuous monitoring (MANAGE)
- Monitor risk indicators in production (performance metrics, drift alerts, incident reports).
- Review high-risk systems quarterly, medium-risk semi-annually, low-risk annually.
- Re-assess when material changes occur (model update, data change, context change, regulatory change).
- Track risk trends across the portfolio to identify systemic issues.
6. Assessment triggers
Risk assessments are required at the following points:
- Initial: During project intake before development begins.
- Pre-deployment: After development and testing, before production release.
- Change-triggered: When significant changes to model, data, or context occur.
- Incident-triggered: Following any AI-related incident or near-miss.
- Periodic: Quarterly for high-risk, semi-annually for medium, annually for low.
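The periodic cadence above can be sketched as a next-review-date calculation (an illustration only; the day counts are our approximations of "quarterly", "semi-annually", and "annually", and Critical risks are excluded because they are escalated rather than scheduled):

```python
from datetime import date, timedelta

# Approximate review intervals per section 6 (days are assumptions,
# not policy-mandated values).
REVIEW_INTERVAL_DAYS = {"High": 91, "Medium": 182, "Low": 365}

def next_review(last_review: date, level: str) -> date:
    """Return the next periodic review date for a given risk level."""
    return last_review + timedelta(days=REVIEW_INTERVAL_DAYS[level])
```

Automating this calculation in the risk register keeps the "next review date" field (section 7) from drifting out of sync with the cadence the policy mandates.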
7. Risk register
All identified risks are recorded in the AI risk register with the following fields:
- Risk ID and description.
- Risk dimension (from taxonomy).
- Associated AI system and model owner.
- Likelihood score, impact score, and risk score.
- Treatment strategy and specific mitigations.
- Mitigation owner and deadline.
- Residual risk score after mitigation.
- Status (open, in treatment, accepted, closed).
- Last review date and next review date.
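The register fields above map naturally onto a record type. A minimal sketch (field names are illustrative, not a mandated schema):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RiskRegisterEntry:
    """One row of the AI risk register; mirrors the fields in section 7."""
    risk_id: str
    description: str
    dimension: str                           # one of the six taxonomy dimensions
    system: str                              # associated AI system
    model_owner: str
    likelihood: int                          # 1-5
    impact: int                              # 1-5
    treatment: str                           # avoid / mitigate / transfer / accept
    mitigations: list[str] = field(default_factory=list)
    mitigation_owner: Optional[str] = None
    mitigation_deadline: Optional[date] = None
    residual_score: Optional[int] = None     # score after mitigation
    status: str = "open"                     # open / in treatment / accepted / closed
    last_review: Optional[date] = None
    next_review: Optional[date] = None

    @property
    def risk_score(self) -> int:
        """Derived field: likelihood x impact, per section 5 phase 3."""
        return self.likelihood * self.impact
```

Deriving the risk score rather than storing it keeps the register internally consistent when likelihood or impact is re-assessed.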
8. Third-party AI risk
Third-party AI systems carry additional risks. Third-party risk assessments must be completed before vendor activation and refreshed annually or when the vendor makes material changes. Each assessment must cover:
- Vendor security posture and certifications (SOC 2, ISO 27001).
- Training data governance practices and transparency.
- Model update frequency and change notification process.
- Data residency and sub-processor inventory.
- Incident reporting SLAs and escalation paths.
- Right to audit and contractual liability allocation.
- Vendor lock-in risk and data portability.
9. Escalation
- Critical risks (16-25): Escalate immediately to AI Governance Committee. System deployment is blocked pending Committee decision.
- High risks (10-15): Escalate to AI Governance Lead within 48 hours. Deployment requires approved mitigation plan.
- Overdue mitigations: If a mitigation passes its deadline without completion, escalate to AI Governance Lead.
- Systemic risks: If multiple systems share the same risk pattern, escalate to Committee for portfolio-level response.
10. Roles and responsibilities
| Role | Risk management responsibilities |
|---|---|
| Model Owner | Conducts risk assessments, maintains risk register entries, implements mitigations, monitors residual risk. |
| AI Governance Lead | Reviews assessments, tracks portfolio risk posture, escalates to Committee, coordinates risk reporting. |
| AI Governance Committee | Sets risk appetite, approves critical risk acceptance, resolves escalations, reviews quarterly risk report. |
| Security | Assesses security-dimension risks, conducts threat modeling, reviews vendor security posture. |
| Legal / Compliance | Assesses compliance-dimension risks, advises on regulatory obligations, reviews third-party contracts. |
11. Regulatory alignment
- EU AI Act: Article 9 (risk management system for high-risk AI), Article 5 (prohibited practices), Article 6 and Annex III (high-risk classification).
- ISO/IEC 42001: Clause 6.1 (actions to address risks and opportunities), Annex B (AI risk sources).
- NIST AI RMF: MAP (context and risk identification), MEASURE (risk analysis), MANAGE (risk response and monitoring).
- ISO 31000: Risk management principles and process.
12. Review
This policy is reviewed annually. The risk register is reviewed quarterly by the AI Governance Lead and presented to the AI Governance Committee. Material changes to risk methodology require Committee approval.
Document control
| Field | Value |
|---|---|
| Policy owner | [AI Governance Lead] |
| Approved by | [AI Governance Committee] |
| Effective date | [Date] |
| Next review date | [Date + 12 months] |
| Version | 1.0 |
| Classification | Internal |