This groundbreaking research addresses one of the most pressing challenges in AI safety: the fragmented, inconsistent way we document and learn from AI failures. The paper introduces a comprehensive, standardized framework specifically designed for critical digital infrastructure—the backbone systems that keep our digital world running. Unlike generic incident reporting schemas, this taxonomy is laser-focused on the unique risks and failure modes that occur when AI systems interact with power grids, telecommunications networks, financial systems, and other mission-critical infrastructure.
Current AI incident databases are a mess of incompatible formats, inconsistent categorizations, and missing context. When an AI system fails in a power grid in Germany and a similar failure occurs in a financial trading system in Japan, there's no standardized way to compare, analyze, or learn from these incidents collectively. This research provides the missing infrastructure for incident data—a common language that enables cross-incident comparison, trend analysis, and collective learning.
The proposed framework structures incident data across several dimensions (a minimal record sketch follows the list):
Infrastructure Context: Captures the specific type of critical system affected (energy, telecommunications, finance, transportation, etc.) and its interdependencies with other systems.
AI System Characteristics: Documents the AI architecture, training data sources, deployment configuration, and integration points with legacy infrastructure.
Incident Taxonomy: Classifies failures by root cause (data drift, adversarial attacks, system integration issues), impact severity, and cascading effects across interconnected systems.
Temporal Dynamics: Tracks incident progression, response times, and recovery phases to understand how AI failures evolve in critical infrastructure environments.
Stakeholder Impact: Maps consequences across different affected parties—from end users to regulatory bodies to interconnected systems.
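To make these dimensions concrete, here is a minimal sketch of what a record conforming to the framework might look like in Python. All class names, field names, and enum values below are illustrative assumptions derived from the dimension descriptions above, not the paper's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Sector(Enum):
    """Infrastructure Context: type of critical system affected."""
    ENERGY = "energy"
    TELECOM = "telecommunications"
    FINANCE = "finance"
    TRANSPORT = "transportation"


class RootCause(Enum):
    """Incident Taxonomy: root-cause categories named above."""
    DATA_DRIFT = "data_drift"
    ADVERSARIAL_ATTACK = "adversarial_attack"
    INTEGRATION_ISSUE = "system_integration_issue"


@dataclass
class AISystemProfile:
    """AI System Characteristics dimension."""
    architecture: str                     # e.g. "gradient-boosted trees"
    training_data_sources: list[str]
    deployment_config: str
    legacy_integration_points: list[str]


@dataclass
class IncidentRecord:
    """One incident, expressed across the five dimensions."""
    # Infrastructure Context
    sector: Sector
    interdependencies: list[Sector]
    # AI System Characteristics
    ai_system: AISystemProfile
    # Incident Taxonomy
    root_cause: RootCause
    severity: int                         # e.g. 1 (minor) to 5 (catastrophic)
    cascading_sectors: list[Sector]
    # Temporal Dynamics
    detected_at: datetime
    contained_at: datetime | None
    recovered_at: datetime | None
    # Stakeholder Impact
    affected_parties: list[str] = field(default_factory=list)
```

A structure like this makes the temporal dimension directly computable (time-to-containment is simply contained_at minus detected_at), while severity and cascading_sectors give analysts a first cut at cross-incident comparison.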
Critical digital infrastructure is increasingly AI-dependent, yet we're flying blind when it comes to understanding systemic risks. Traditional IT incident management wasn't designed for AI systems that can fail in subtle, probabilistic ways. This research arrives at a crucial moment, as deployment of AI in these systems is outpacing our ability to learn from its failures. It is most relevant to the following audiences:
Infrastructure Operators: Power companies, telecom providers, financial institutions, and transportation authorities implementing or considering AI systems in critical operations.
AI Safety Teams: Researchers and practitioners building incident databases, conducting post-mortems, or developing AI safety metrics for high-stakes deployments.
Policy Makers and Regulators: Government officials crafting AI governance frameworks who need standardized data to inform evidence-based policy decisions.
Risk Management Professionals: Insurance underwriters, auditors, and compliance officers working to quantify and manage AI-related risks in critical systems.
Academic Researchers: Scientists studying AI safety, critical infrastructure resilience, or sociotechnical systems who need structured datasets for empirical research.
Adopting this schema requires more than technical implementation—it demands organizational change management. The framework is designed to integrate with existing incident response workflows while adding the AI-specific context that traditional IT systems miss. Early adopters will likely need to train staff on the new categorization system and modify existing reporting tools to capture the additional data fields.
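As an illustration of that kind of tooling change, the sketch below extends a hypothetical legacy ticket payload with the schema's AI-specific fields, reusing the IncidentRecord types from the earlier sketch (assumed to be in scope). The payload shape, field names, and sample values are all assumptions for illustration, not part of the paper.

```python
import json
from datetime import datetime

# Hypothetical payload produced by an existing IT ticketing tool.
legacy_ticket = {
    "ticket_id": "INC-2041",
    "summary": "Load-forecasting service produced anomalous dispatch values",
    "opened_at": "2025-03-14T02:17:00Z",
    "priority": "P1",
}


def enrich_with_ai_context(ticket: dict, record: "IncidentRecord") -> dict:
    """Attach the schema's AI-specific fields under a single key,
    leaving the fields the legacy workflow depends on untouched."""
    enriched = dict(ticket)
    enriched["ai_incident"] = {
        "sector": record.sector.value,
        "root_cause": record.root_cause.value,
        "severity": record.severity,
        "architecture": record.ai_system.architecture,
        "cascading_sectors": [s.value for s in record.cascading_sectors],
    }
    return enriched


# Sample record using the illustrative classes from the earlier sketch.
record = IncidentRecord(
    sector=Sector.ENERGY,
    interdependencies=[Sector.TELECOM],
    ai_system=AISystemProfile(
        architecture="gradient-boosted trees",
        training_data_sources=["SCADA telemetry"],
        deployment_config="on-prem, hourly batch",
        legacy_integration_points=["dispatch scheduler"],
    ),
    root_cause=RootCause.DATA_DRIFT,
    severity=4,
    cascading_sectors=[Sector.TELECOM],
    detected_at=datetime(2025, 3, 14, 2, 17),
    contained_at=datetime(2025, 3, 14, 3, 5),
    recovered_at=None,
    affected_parties=["grid operator", "regional regulator"],
)

# The enriched ticket still serializes like any other, so existing
# reporting pipelines keep working unchanged.
print(json.dumps(enrich_with_ai_context(legacy_ticket, record), indent=2))
```

Because the new fields live under a single added key, legacy consumers that ignore unknown keys need no changes at all, which is one way to keep the phased adoption the authors recommend low-friction.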
The research acknowledges that implementation will be gradual and provides guidance on phased adoption, starting with high-risk AI deployments and expanding to comprehensive coverage over time.
Source: arXiv
Published: 2025
Jurisdiction: Global
Category: Risk taxonomies
Access: Public access