Structured incident tracking from detection through resolution—with the audit trail regulators expect.

The challenge
When an AI system malfunctions, causes harm, or behaves unexpectedly, organizations need to respond quickly and document thoroughly. Unlike traditional IT incidents, AI incidents may require regulatory reporting, can affect fundamental rights, and demand specific harm categorization. Ad-hoc incident tracking leaves organizations exposed.
No standardized way to classify AI-specific incident types and severity
Harm categories required by regulations aren't captured in generic ticketing systems
Investigation progress and corrective actions aren't tracked systematically
Regulatory reporting deadlines can be missed without proper workflows
Incident history needed for audits is scattered or incomplete
Benefits
Key advantages for your AI governance program
Capture incidents with structured severity and harm classification
Move incidents through a clear lifecycle with accountability
Document corrective actions and lessons learned
Meet regulatory reporting requirements with complete records
Capabilities
Core functionality of Incident management
Log incidents with severity levels, harm categories, and affected systems: everything you need for regulatory reporting.
Record harm categories, affected persons, and immediate mitigations for complete incident records.
Track incidents through Open, Investigating, Mitigated, and Closed stages, with clear ownership and status visibility at every step.
Coordinate cross-functional incident response with legal, engineering, and compliance working together.
Monitor incident trends, resolution times, and recurring patterns across your AI portfolio.
Enterprise example
See how organizations use this capability in practice
An organization's AI-powered customer service system started providing incorrect information to users. The incident was reported via email, investigated through chat messages, and documented in a Word document. When the compliance team needed to prepare a regulatory report, they spent days piecing together what happened, who was involved, and what corrective actions were taken.
They implemented a structured incident management system where all AI incidents are logged with standardized severity levels, harm categories, and incident types. Each incident moves through a defined lifecycle with clear ownership, and serious incidents require approval before being marked as resolved.
When the next incident occurred, the organization captured all required information at the point of logging. The investigation was tracked in one place, corrective actions were documented, and the approval workflow ensured proper review before closure. Regulatory reporting that previously took days now takes hours, with complete audit trails.
Why VerifyWise
What makes our approach different
Seven incident types tailored to AI systems: Malfunction, Unexpected Behavior, Model Drift, Misuse, Data Corruption, Security Breach, and Performance Degradation.
Five harm categories matching EU AI Act requirements: Health, Safety, Fundamental Rights, Property, and Environment. Capture exactly what regulators need.
Incidents progress through Open → Investigating → Mitigated → Closed. Once closed, they cannot be reopened—preserving audit trail integrity.
Serious incidents can require approval before regulatory reporting. Track who approved, when, and with what notes for complete accountability.
Regulatory context
AI regulations require organizations to track, report, and learn from incidents. Structured incident management ensures you capture the right information and can demonstrate proper response procedures to regulators.
Providers of high-risk AI systems must report serious incidents to market surveillance authorities, including incidents resulting in death, serious harm to health, or serious damage to property or the environment.
Deployers must monitor AI systems and inform providers of serious incidents or malfunctions that could lead to risks to health, safety, or fundamental rights.
Organizations must establish processes for reporting and responding to AI-related incidents as part of their AI management system.
Technical details
Implementation details and technical capabilities
Automatic INC-ID generation for unique incident identification and tracking
4-stage lifecycle: Open → Investigating → Mitigated → Closed (closed incidents cannot be reopened)
EU AI Act Article 73 compliance for serious incident reporting with harm category tracking
7 incident types: Malfunction, Unexpected Behavior, Model Drift, Misuse, Data Corruption, Security Breach, Performance Degradation
5 harm categories aligned with EU AI Act: Health, Safety, Fundamental Rights, Property, Environment
Approval workflow with Pending, Approved, Rejected, and Not Required statuses
CE marking integration for linking incidents to conformity assessments
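The lifecycle and ID generation above can be sketched as a simple state machine. This is a minimal illustration under stated assumptions, not VerifyWise's actual implementation; the class and field names are hypothetical, and the allowed transitions are reduced to the straight forward path for clarity:

```python
from enum import Enum


class Status(Enum):
    OPEN = "Open"
    INVESTIGATING = "Investigating"
    MITIGATED = "Mitigated"
    CLOSED = "Closed"


# Allowed forward transitions. Closed has no outgoing edges,
# so a closed incident can never be reopened.
TRANSITIONS = {
    Status.OPEN: {Status.INVESTIGATING},
    Status.INVESTIGATING: {Status.MITIGATED},
    Status.MITIGATED: {Status.CLOSED},
    Status.CLOSED: set(),
}


class Incident:
    _counter = 0  # backs automatic INC-ID generation

    def __init__(self, title: str):
        Incident._counter += 1
        self.id = f"INC-{Incident._counter:04d}"  # e.g. INC-0001
        self.title = title
        self.status = Status.OPEN

    def advance(self, new_status: Status) -> None:
        # Reject any transition not in the lifecycle graph,
        # including attempts to reopen a closed incident.
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(
                f"{self.id}: cannot move from "
                f"{self.status.value} to {new_status.value}"
            )
        self.status = new_status
```

A real system would also persist who changed the status and when, so the lifecycle doubles as an audit trail.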
FAQ
Frequently asked questions about Incident management
Seven incident types cover the spectrum of AI failures: Malfunction (system errors), Unexpected Behavior (outputs outside expected bounds), Model Drift (degradation over time), Misuse (improper use), Data Corruption (training or input data issues), Security Breach (unauthorized access), and Performance Degradation (declining accuracy or speed).
Three severity levels—Minor, Serious, and Very Serious—align with regulatory reporting thresholds. Serious and Very Serious incidents typically require regulatory notification under EU AI Act Article 73.
Five harm categories match EU AI Act requirements: Health, Safety, Fundamental Rights, Property, and Environment. Selecting the appropriate categories ensures your incident records contain the information regulators expect to see.
Incident history integrity is critical for audits and regulatory reviews. Once closed, incidents cannot be reopened to prevent tampering with historical records. If a related issue emerges, create a new incident and link it to the original.
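The create-and-link pattern described above can be sketched as follows. The function and field names are hypothetical, not VerifyWise's API:

```python
def create_follow_up(original: dict, title: str) -> dict:
    """Open a new incident linked to a closed one, rather than reopening it."""
    if original["status"] != "Closed":
        raise ValueError("Follow-ups link to closed incidents; update the open one instead.")
    return {
        "title": title,
        "status": "Open",
        "related_to": original["id"],  # keeps the original record untouched
    }
```

Linking rather than reopening means the closed record stays exactly as it was reviewed and approved, while the new incident carries the ongoing investigation.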
More from Govern
Other features in the Govern pillar
See how VerifyWise can help you govern AI with confidence.