arXiv
This research addresses one of the most pressing challenges in AI safety: the fragmented, inconsistent way we document and learn from AI failures. The paper introduces a comprehensive, standardized framework designed specifically for critical digital infrastructure, the backbone systems that keep our digital world running. Unlike generic incident reporting schemas, this taxonomy focuses squarely on the unique risks and failure modes that occur when AI systems interact with power grids, telecommunications networks, financial systems, and other mission-critical infrastructure.
Current AI incident databases are a patchwork of incompatible formats, inconsistent categorizations, and missing context. When an AI system fails in a power grid in Germany and a similar failure occurs in a financial trading system in Japan, there is no standardized way to compare, analyze, or learn from these incidents collectively. This research provides the missing infrastructure for incident data: a common language for comparing and aggregating incidents across sectors and jurisdictions.
The proposed framework structures incident data across several dimensions, such as the affected sector, the failure mode, and the severity of the impact.
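The paper's exact field set is not reproduced here, so the following is a minimal, hypothetical sketch of what a structured incident record along such dimensions might look like. All class and field names (`AIIncident`, `failure_mode`, the `Severity` scale, etc.) are illustrative assumptions, not the schema defined in the paper.

```python
from dataclasses import dataclass, field
from enum import Enum

class Severity(Enum):
    # Illustrative severity scale; the actual taxonomy may differ.
    LOW = "low"
    MODERATE = "moderate"
    CRITICAL = "critical"

@dataclass
class AIIncident:
    """Hypothetical standardized record for an AI incident in
    critical digital infrastructure (field names are assumptions)."""
    incident_id: str
    sector: str                 # e.g. "energy", "finance", "telecom"
    jurisdiction: str           # where the incident occurred
    severity: Severity
    failure_mode: str           # e.g. "distribution shift"
    affected_system: str        # the infrastructure component involved
    human_oversight: bool       # was a human in the loop?
    tags: list[str] = field(default_factory=list)

# With a shared schema, two incidents in different sectors and
# countries become directly comparable:
grid = AIIncident("DE-2025-001", "energy", "Germany", Severity.CRITICAL,
                  "distribution shift", "load-forecasting model", True)
trading = AIIncident("JP-2025-017", "finance", "Japan", Severity.CRITICAL,
                     "distribution shift", "order-routing model", False)
print(grid.failure_mode == trading.failure_mode)  # → True
```

The point of a shared record type is exactly the cross-sector comparison shown at the end: the German grid incident and the Japanese trading incident can be matched on `failure_mode` even though everything else about them differs.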
Critical digital infrastructure is increasingly AI-dependent, yet we are flying blind when it comes to understanding systematic risks. Traditional IT incident management was not designed for AI systems that can fail in subtle, probabilistic ways. This research arrives at a crucial moment.
Adopting this schema requires more than technical implementation—it demands organizational change management. The framework is designed to integrate with existing incident response workflows while adding the AI-specific context that traditional IT systems miss. Early adopters will likely need to train staff on the new categorization system and modify existing reporting tools to capture the additional data fields.
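In practice, integrating with an existing workflow might amount to attaching AI-specific context to records a ticketing system already produces. The sketch below is an assumption about how that could look, not the paper's prescribed mechanism; the extra field names are invented for illustration.

```python
def add_ai_context(legacy_ticket: dict, *, model_version: str,
                   confidence_at_failure: float,
                   training_data_cutoff: str) -> dict:
    """Return a copy of a legacy incident ticket augmented with
    hypothetical AI-specific fields that traditional IT schemas omit."""
    enriched = dict(legacy_ticket)  # leave the original ticket untouched
    enriched["ai_context"] = {
        "model_version": model_version,
        "confidence_at_failure": confidence_at_failure,  # model's own score
        "training_data_cutoff": training_data_cutoff,    # staleness indicator
    }
    return enriched

ticket = {"id": "INC-4821", "summary": "Unexpected load-shedding command"}
enriched = add_ai_context(ticket, model_version="v2.3",
                          confidence_at_failure=0.97,
                          training_data_cutoff="2024-11")
print("ai_context" in enriched)  # → True
```

Wrapping rather than mutating the legacy record mirrors the phased-adoption idea: existing reporting tools keep working unchanged while the AI-specific fields are layered on top.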
The research acknowledges that implementation will be gradual and provides guidance on phased adoption, starting with high-risk AI deployments and expanding to comprehensive coverage over time.
Published: 2025
Jurisdiction: Global
Category: Risk taxonomies
Access: Public access