
Incident Response for AI Systems Policy

Extends the organization's incident response plan with AI-specific triggers, triage procedures, and regulatory notification requirements.

1. Purpose

This policy defines how [Organization Name] detects, triages, contains, and recovers from AI-specific incidents. Standard IT incident response plans do not cover AI failure modes such as model drift, hallucinations, bias incidents, prompt injection, or data poisoning. This policy fills that gap.

2. Scope

This policy applies to:

  • All AI systems in production (internal and customer-facing).
  • All AI-related security incidents, safety incidents, and performance failures.
  • All employees who detect, report, or respond to AI incidents.
  • Incidents involving both internally developed and third-party AI systems.

3. AI incident categories

  • Safety: AI output causes or could cause harm to individuals. Examples: dangerous medical advice, incorrect safety recommendations, harmful content reaching minors.
  • Bias: AI produces discriminatory outcomes across protected groups. Examples: disparate rejection rates in hiring, biased credit scoring, unfair content moderation.
  • Security: adversarial attack on, or unauthorized access to, AI systems. Examples: prompt injection, jailbreaks, model extraction, data exfiltration through AI, training data poisoning.
  • Privacy: AI exposes personal or confidential data. Examples: model memorization of PII, leakage of confidential data through prompts, unauthorized data sharing via AI.
  • Performance: AI quality degrades below acceptable thresholds. Examples: accuracy drops, increased hallucination rates, latency spikes, model drift beyond tolerance.
  • Compliance: AI operates outside regulatory boundaries. Examples: missing transparency disclosures, unauthorized automated decisions, documentation gaps.
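Where incident tooling needs to tag incidents consistently, the six categories above can be encoded directly. A minimal sketch in Python; the enum name and value strings are assumptions for illustration, not part of the policy:

```python
from enum import Enum

class AIIncidentCategory(Enum):
    """Incident categories from Section 3 of the policy (illustrative encoding)."""
    SAFETY = "safety"            # output causes or could cause harm to individuals
    BIAS = "bias"                # discriminatory outcomes across protected groups
    SECURITY = "security"        # adversarial attack or unauthorized access
    PRIVACY = "privacy"          # exposure of personal or confidential data
    PERFORMANCE = "performance"  # quality degraded below acceptable thresholds
    COMPLIANCE = "compliance"    # operation outside regulatory boundaries
```

Tagging every incident with exactly one category keeps post-incident metrics comparable across systems when findings are reviewed in Phase 6.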

4. Severity classification

  • Critical (P1): active harm to individuals, a data breach, or a required regulatory notification. Immediate response; containment within 1 hour.
  • High (P2): material risk of harm, significant bias detected, or a security breach contained but not resolved. Response within 4 hours; containment within 24 hours.
  • Medium (P3): performance degradation, non-critical bias, or an identified compliance gap. Response within 24 hours; resolution within 5 business days.
  • Low (P4): minor quality issue, informational finding, or cosmetic problem. Logged and addressed in the next scheduled review.
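The response-time targets above can be wired into alerting so that deadlines are computed automatically at triage. A minimal sketch, assuming P3's five business days are approximated as calendar days and that P4 carries no hard deadline; the mapping keys and function name are illustrative assumptions:

```python
from datetime import datetime, timedelta

# Severity-to-SLA mapping from Section 4. Structure is an illustrative
# assumption, not mandated by the policy.
SEVERITY_SLA = {
    "P1": {"respond": timedelta(0),        "contain": timedelta(hours=1)},
    "P2": {"respond": timedelta(hours=4),  "contain": timedelta(hours=24)},
    "P3": {"respond": timedelta(hours=24), "contain": timedelta(days=5)},  # 5 business days, approximated
    "P4": {"respond": None,                "contain": None},               # next scheduled review
}

def containment_deadline(detected_at, severity):
    """Containment/resolution deadline, or None when no hard SLA applies."""
    window = SEVERITY_SLA[severity]["contain"]
    return detected_at + window if window is not None else None
```

In practice the P3 window should be computed against a business calendar rather than raw calendar days.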

5. Response process

Phase 1: Detection and reporting

  • Incidents may be detected through monitoring alerts, user reports, audit findings, or third-party notifications.
  • Any employee who suspects an AI incident must report it immediately to the AI Governance Lead or Security team.
  • The report must include: system name, incident description, severity estimate, and any immediate actions taken.
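The required report fields can be captured in a single record so that intake tooling rejects incomplete reports before triage. A minimal sketch; the class and field names are assumptions for illustration, to be adapted to your ticketing system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    """Minimum reporting fields required by Phase 1 (illustrative schema)."""
    system_name: str
    description: str
    severity_estimate: str        # P1-P4; confirmed or revised at triage
    immediate_actions: list[str]  # may be empty if no action was taken yet
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def is_complete(self) -> bool:
        """Mandatory free-text fields must be non-empty before triage begins."""
        return bool(self.system_name and self.description and self.severity_estimate)
```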

Phase 2: Triage

  • The AI Governance Lead (or Security for security incidents) assigns severity and assembles the response team.
  • Response team composition depends on category: Model Owner (always), Security (for security/privacy), Legal (for compliance/regulatory), Data Owner (for data incidents).
  • For P1 and P2: the AI system may be suspended pending investigation. The decision to suspend is made by the AI Governance Lead or Security team lead.

Phase 3: Containment

  • Stop the immediate harm: suspend the system, block the attack vector, revert to a previous model version, or restrict access.
  • Preserve evidence: logs, model state, input/output samples, monitoring data.
  • Notify affected parties if required (see section 6).

Phase 4: Investigation

  • Identify root cause: was it a model problem, data problem, infrastructure problem, or adversarial action?
  • Determine scope: how many users/decisions/outputs were affected? Over what time period?
  • Assess regulatory implications: does this trigger reporting obligations?

Phase 5: Remediation

  • Fix the root cause (retrain, patch, update guardrails, fix data pipeline).
  • Validate the fix through testing before restoring production service.
  • Update monitoring to detect recurrence.

Phase 6: Post-incident review

  • Conduct a blameless post-incident review within 10 business days of resolution.
  • Document: timeline, root cause, impact, response actions, lessons learned, preventive measures.
  • Present findings to the AI Governance Committee for P1 and P2 incidents.
  • Update policies, procedures, or controls based on lessons learned.

6. Notification obligations

  • Personal data breach (GDPR Art. 33): notify the supervisory authority within 72 hours of becoming aware.
  • High risk to individuals' rights and freedoms (GDPR Art. 34): notify affected individuals without undue delay.
  • Serious AI incident (EU AI Act Art. 73): notify the market surveillance authority within 15 days of becoming aware.
  • Vendor AI incident: notify the AI Governance Lead and Legal internally within 24 hours of the vendor's notification.
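The fixed notification windows above can be computed mechanically from the moment of awareness. A minimal sketch; the trigger keys are assumptions, and "without undue delay" (GDPR Art. 34) is modeled as None because it has no fixed statutory clock:

```python
from datetime import datetime, timedelta

# Notification windows from Section 6. The clock starts at "becoming aware"
# (or, for vendor incidents, at the vendor's notification).
NOTIFICATION_WINDOWS = {
    "gdpr_art33_authority":     timedelta(hours=72),
    "gdpr_art34_individuals":   None,                # without undue delay
    "eu_ai_act_art73":          timedelta(days=15),
    "vendor_incident_internal": timedelta(hours=24),
}

def notification_deadline(trigger, clock_start):
    """Latest permissible notification time, or None where no fixed window exists."""
    window = NOTIFICATION_WINDOWS[trigger]
    return clock_start + window if window is not None else None
```

Even where the result is None, Legal still assesses the obligation; the calculator only flags the hard statutory deadlines.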

7. Roles and responsibilities

  • AI Governance Lead: triages incidents, assembles the response team, coordinates the investigation, manages communications.
  • Model Owner: provides system context, executes containment (suspension or rollback), leads the technical investigation.
  • Security: leads security incident response, preserves evidence, conducts forensic analysis.
  • Legal: assesses notification obligations, drafts regulator communications, advises on liability.
  • Communications: manages external communications if public disclosure is required.

8. Regulatory alignment

  • EU AI Act: Article 73 (reporting of serious incidents), Article 20 (corrective actions).
  • GDPR: Articles 33-34 (breach notification).
  • ISO/IEC 42001: Clause 10.2 (nonconformity and corrective action).
  • NIST AI RMF: MANAGE function (MG-4: incident response).
  • CoSAI AI IR Framework: Detection, triage, containment, recovery playbooks.

9. Review

This policy is reviewed annually, after every P1 incident, and when new incident categories emerge from regulatory guidance or industry experience.

Document control

  Policy owner: [AI Governance Lead / CISO]
  Approved by: [AI Governance Committee]
  Effective date: [Date]
  Next review date: [Date + 12 months]
  Version: 1.0
  Classification: Internal

Incident Response for AI Systems Policy | VerifyWise AI Governance Templates