arXiv
This research addresses one of the most pressing challenges in AI safety: the fragmented, inconsistent way we document and learn from AI failures. The paper introduces a comprehensive, standardized framework designed specifically for critical digital infrastructure, the backbone systems that keep the digital world running. Unlike generic incident reporting schemas, this taxonomy focuses on the distinct risks and failure modes that arise when AI systems interact with power grids, telecommunications networks, financial systems, and other mission-critical infrastructure.
Current AI incident databases are a patchwork of incompatible formats, inconsistent categorizations, and missing context. When an AI system fails in a power grid in Germany and a similar failure occurs in a financial trading system in Japan, there is no standardized way to compare, analyze, or learn from these incidents collectively. This research supplies the missing infrastructure for incident data: a common language for comparing and aggregating incidents across sectors and jurisdictions.
The proposed framework structures incident data along several standardized dimensions.
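To make the idea of a multi-dimensional incident record concrete, here is a minimal sketch of what such a structured record might look like. The field names (`sector`, `failure_mode`, `downstream_impact`, etc.) are illustrative assumptions for this example, not the paper's actual schema.

```python
from dataclasses import dataclass, field, asdict
from typing import List

# Hypothetical incident record; field names are assumptions, not the paper's schema.
@dataclass
class AIIncidentRecord:
    incident_id: str
    sector: str                 # e.g. "power grid", "telecom", "finance"
    jurisdiction: str
    ai_component: str           # subsystem where the AI failure originated
    failure_mode: str           # e.g. "distribution shift", "specification gaming"
    severity: str
    downstream_impact: List[str] = field(default_factory=list)

# Two incidents from different sectors become directly comparable:
grid = AIIncidentRecord("DE-2025-014", "power grid", "Germany",
                        "load forecaster", "distribution shift", "high",
                        ["frequency deviation"])
trading = AIIncidentRecord("JP-2025-091", "finance", "Japan",
                           "order-routing model", "distribution shift", "high",
                           ["erroneous trades"])

# A shared vocabulary lets analysts spot the same failure mode across sectors.
same_mode = grid.failure_mode == trading.failure_mode
```

Once records share one schema, cross-sector queries ("all high-severity distribution-shift incidents") become simple filters rather than manual reconciliation of incompatible formats.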
Critical digital infrastructure is increasingly AI-dependent, yet we are flying blind when it comes to understanding systemic risks. Traditional IT incident management was not designed for AI systems, which can fail in subtle, probabilistic ways. This research arrives at a crucial moment.
Adopting this schema requires more than technical implementation; it demands organizational change management. The framework is designed to integrate with existing incident response workflows while adding the AI-specific context that traditional IT systems miss. Early adopters will likely need to train staff on the new categorization system and extend existing reporting tools to capture the additional data fields.
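The integration path described above, keeping legacy workflows intact while layering on AI-specific context, can be sketched as a simple ticket-enrichment step. The AI-specific field names here are hypothetical examples, not fields defined by the paper.

```python
# Hypothetical sketch: enriching a legacy IT incident ticket with
# AI-specific fields. Field names are illustrative assumptions.
def enrich_ticket(ticket: dict) -> dict:
    """Add AI-specific context fields that legacy IT schemas omit,
    without altering or dropping any existing fields."""
    ai_fields = {
        "model_version": None,        # which model/version was deployed
        "failure_mode": None,         # probabilistic failure category
        "suspected_data_drift": None, # distribution shift suspected?
        "human_oversight": None,      # was a human in the loop?
    }
    # Legacy values take precedence; AI fields are added only where absent.
    return {**ai_fields, **ticket}

legacy = {"ticket_id": "IT-4821", "system": "SCADA gateway", "status": "open"}
enriched = enrich_ticket(legacy)
```

Enriching rather than replacing tickets mirrors the phased-adoption guidance: existing tools keep working on day one, and the new fields can be populated as staff are trained on the categorization system.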
The research acknowledges that implementation will be gradual and provides guidance on phased adoption, starting with high-risk AI deployments and expanding to comprehensive coverage over time.
Published
2025
Jurisdiction
Global
Category
Risk taxonomies
Access
Public access
US Executive Order on Safe, Secure, and Trustworthy AI
Regulations and laws • White House
Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Regulations and laws • U.S. Government
Highlights of the 2023 Executive Order on Artificial Intelligence
Regulations and laws • Congressional Research Service