The AI Incident Database stands as the world's most comprehensive repository of documented AI system failures, containing over 1,200 real-world cases where AI has caused harm. Launched by the Partnership on AI in 2020, this living database transforms scattered incident reports into a searchable, categorized resource that reveals patterns in AI failures across industries. From biased hiring algorithms to autonomous vehicle crashes, each entry provides detailed context about what went wrong, why it happened, and what lessons can be learned. It's essentially the "NTSB database" for AI incidents—turning individual failures into collective wisdom for safer AI deployment.
Unlike scattered news reports or academic papers about AI failures, the AI Incident Database applies rigorous incident classification systems borrowed from aviation and nuclear safety. Each incident receives structured tagging across multiple dimensions: harm type (physical, economic, social), affected populations, AI system characteristics, and contributing factors. The database doesn't just collect incidents—it analyzes patterns, enabling users to identify common failure modes like algorithmic bias in facial recognition or edge case failures in computer vision systems.
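To make that structure concrete, here is a minimal sketch of what a classified incident record might look like. The field names mirror the dimensions described above but are illustrative assumptions, not the database's actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a classified incident record. Field names mirror the
# dimensions described above but are illustrative, not the database's schema.
@dataclass
class IncidentRecord:
    incident_id: int
    title: str
    date: str                                            # ISO 8601 date of the incident
    harm_types: list[str] = field(default_factory=list)  # physical, economic, social, ...
    affected_populations: list[str] = field(default_factory=list)
    system_characteristics: list[str] = field(default_factory=list)
    contributing_factors: list[str] = field(default_factory=list)
    source_urls: list[str] = field(default_factory=list)  # links back to original reports

record = IncidentRecord(
    incident_id=101,
    title="Facial recognition misidentifies suspect",
    date="2019-06-14",
    harm_types=["social"],
    affected_populations=["misidentified individuals"],
    system_characteristics=["facial recognition", "computer vision"],
    contributing_factors=["algorithmic bias"],
    source_urls=["https://example.com/news-report"],
)
```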
The database also maintains a living taxonomy that evolves as new types of AI incidents emerge. Early entries focused heavily on discrimination and privacy violations, but recent additions increasingly document issues with generative AI, deepfakes, and large language model hallucinations.
The database uses a multi-layered taxonomy that categorizes incidents across several key dimensions: harm type, affected populations, AI system characteristics, and contributing factors.
Each incident also receives temporal tagging, allowing users to track how AI failure patterns have evolved as technology and deployment practices have changed.
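The snippet below sketches how temporal tagging supports that kind of trend analysis, tallying incidents per year and harm type. The record layout is a hypothetical simplification, not the database's export format.

```python
from collections import Counter

# Sketch: tally incidents per (year, harm type) to see how failure patterns
# shift over time. Record layout is hypothetical, not the database's schema.
records = [
    {"date": "2018-03-18", "harm_types": ["physical"]},
    {"date": "2021-07-02", "harm_types": ["social", "economic"]},
    {"date": "2023-11-09", "harm_types": ["social"]},
]

counts = Counter(
    (rec["date"][:4], harm)
    for rec in records
    for harm in rec["harm_types"]
)

for (year, harm), n in sorted(counts.items()):
    print(f"{year}  {harm:<10} {n}")
```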
Start with the database's pre-built queries for common use cases rather than browsing randomly through 1,200+ incidents. The "Similar Incidents" feature helps identify clusters of related failures, while the timeline view reveals whether certain types of incidents are becoming more or less common.
For risk assessment purposes, filter incidents by your specific AI application area and harm severity levels. The database's citation system links back to original sources, making it valuable for due diligence documentation.
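As a sketch of that workflow, the following filters a set of incident records by sector and a minimum severity, then prints the source links that would back a due-diligence file. The `sector` and `severity` field names are assumptions for illustration, not the database's actual export fields.

```python
# Sketch of a due-diligence filter: keep only incidents matching your
# application area at or above a severity threshold. Field names
# ("sector", "severity") are assumed for illustration.
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2}

def relevant_incidents(records, sector, min_severity="medium"):
    threshold = SEVERITY_RANK[min_severity]
    return [
        rec for rec in records
        if rec["sector"] == sector
        and SEVERITY_RANK.get(rec["severity"], 0) >= threshold
    ]

records = [
    {"id": 1, "sector": "hiring", "severity": "high", "sources": ["https://example.com/a"]},
    {"id": 2, "sector": "transport", "severity": "high", "sources": ["https://example.com/b"]},
    {"id": 3, "sector": "hiring", "severity": "low", "sources": ["https://example.com/c"]},
]

for rec in relevant_incidents(records, sector="hiring"):
    # Source links support due-diligence documentation.
    print(rec["id"], rec["sources"])
```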
The monthly incident summaries provide digestible overviews of newly added cases and emerging patterns, making it easier to stay current without monitoring the full database continuously.
Advanced users can export structured data for quantitative analysis, though they should be aware that incident reporting rates vary significantly across industries and geographic regions, potentially skewing apparent risk distributions.
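A hedged example of such an analysis with pandas, assuming a hypothetical CSV snapshot with a `sector` column; check the actual export format before relying on specific file or column names.

```python
import pandas as pd

# Sketch: load an exported snapshot for quantitative analysis. The file name
# and column names are placeholders; verify against the real export format.
df = pd.read_csv("incident_snapshot.csv")

# Raw counts conflate true failure rates with reporting rates: heavily
# regulated or well-covered sectors file more reports, so treat per-sector
# counts as lower bounds rather than directly comparable risk estimates.
by_sector = df.groupby("sector").size().sort_values(ascending=False)
print(by_sector)
```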
The database suffers from significant reporting bias—incidents in regulated industries like aviation get documented more thoroughly than failures in consumer applications. Western English-language incidents are overrepresented compared to global AI deployments.
Many incidents lack technical depth about root causes, focusing more on observable harms than underlying system architecture or training methodology failures. The database also struggles with incidents involving proprietary systems where companies limit information disclosure.
The classification system, while comprehensive, continues evolving as new AI capabilities create novel failure modes not anticipated in the original taxonomy. Users should expect some inconsistency in how similar incidents from different time periods are categorized.
Published: 2020
Jurisdiction: Global
Category: Incidents and accountability
Access: Public access