AI Incident Tracker

MIT

Summary

The MIT AI Incident Tracker is the most comprehensive database of AI system failures, accidents, and unintended consequences spanning nearly a decade of AI deployment. Unlike scattered media reports or vendor-specific incident logs, this tool systematically categorizes over 1,000 documented AI incidents using rigorous academic taxonomies. It transforms anecdotal "AI gone wrong" stories into structured data that reveals patterns, trends, and emerging risks across industries and AI applications.

What makes this different

Most AI incident reporting is reactive and fragmented—a bias scandal here, an autonomous vehicle accident there. The MIT tracker takes a systematic approach by applying two sophisticated classification systems: Causal Taxonomy (why incidents happen) and Domain Taxonomy (where they occur). This dual framework reveals that 40% of incidents stem from training data issues, while computer vision applications account for the majority of documented failures.
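
To make the dual classification concrete, the sketch below shows how an incident record might carry both labels. It is illustrative only: the field names and category values are assumptions, not the tracker's actual schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Incident:
    """Illustrative incident record carrying both taxonomy labels.

    Field names and values are hypothetical, not the tracker's schema.
    """
    title: str
    year: int
    causal_category: str  # why it happened, e.g. "training data"
    domain: str           # where it happened, e.g. "computer vision"

# Hypothetical example entries
incidents = [
    Incident("Screening model rejects qualified candidates", 2019,
             causal_category="training data", domain="hiring"),
    Incident("Object detector misses pedestrian at night", 2021,
             causal_category="deployment context mismatch",
             domain="transportation"),
]

# Counting incidents per causal category is the kind of pattern
# the dual framework is designed to surface.
print(Counter(i.causal_category for i in incidents))
```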

The tracker doesn't just catalog incidents; it analyzes their evolution. You can see how deepfake-related incidents spiked in 2020, how algorithmic bias reporting became more prevalent post-2018, and how autonomous system failures have grown more complex as the technology matured.

Key incident patterns revealed

Training Data Failures: Poor data quality, biased datasets, and inadequate training samples consistently emerge as root causes across domains—from hiring algorithms that discriminate to medical AI that fails on underrepresented populations.

Deployment Context Mismatches: Many incidents occur when AI systems trained in controlled environments meet messy real-world conditions. Autonomous vehicles struggling with construction zones exemplify this pattern.

Human-AI Interface Problems: A significant cluster of incidents involves miscommunication between AI systems and human operators, particularly in healthcare and criminal justice applications.

Adversarial Exploitation: Deliberate manipulation of AI systems shows increasingly sophisticated attack vectors, from fooling image classifiers to gaming recommendation algorithms.

Who this resource is for

Risk and Compliance Teams need concrete examples of AI failures to build realistic risk assessments and demonstrate due diligence to regulators and boards.

AI Product Managers and Engineers can learn from others' mistakes by studying incident patterns relevant to their domain before deployment.

Policy Makers and Regulators require evidence-based understanding of where AI systems actually fail in practice, not just theoretical risks.

Researchers and Academics studying AI safety, algorithmic accountability, or technology policy need structured historical data for longitudinal analysis.

Insurance and Legal Professionals working on AI liability cases or developing coverage policies need documented precedents and failure modes.

How to extract maximum value

Start with the domain filter most relevant to your work—healthcare, transportation, criminal justice, or hiring. Look for patterns in incident causes rather than just individual cases. Pay attention to the timeline view to spot emerging risks before they become widespread.
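
If you work with an export of the incident data (for example a CSV), a few lines of analysis reproduce the same workflow: filter by domain, count causes, and inspect the timeline. This is a minimal sketch; the file name and column names (domain, causal_category, year) are assumptions about an export format, not the tracker's actual fields.

```python
import pandas as pd

# Hypothetical export of the tracker; file name and columns are assumptions.
df = pd.read_csv("ai_incidents_export.csv")

# Start with the domain most relevant to your work, e.g. healthcare.
healthcare = df[df["domain"] == "healthcare"]

# Look for patterns in incident causes rather than individual cases.
print(healthcare["causal_category"].value_counts())

# Timeline view: incidents per year, to spot emerging risks early.
print(healthcare.groupby("year").size().sort_index())
```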

Use the causal taxonomy to map your own AI system's potential failure modes. If you're deploying computer vision, study the 200+ vision-related incidents to understand common pitfalls. Cross-reference incident causes with your development and deployment processes.
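
One lightweight way to cross-reference incident causes with your own development and deployment processes is a checklist keyed by causal category, as in the sketch below. The categories echo the patterns described above, but the review questions are illustrative assumptions, not an official mapping.

```python
# Illustrative mapping from causal categories to pre-deployment review
# questions for your own system; the wording is assumed, not official.
failure_mode_review = {
    "training data": [
        "Is the training set representative of the deployment population?",
        "Have known dataset biases been measured and documented?",
    ],
    "deployment context mismatch": [
        "Has the system been tested on messy, out-of-distribution inputs?",
    ],
    "human-AI interface": [
        "Do operators understand the system's confidence and limits?",
    ],
    "adversarial exploitation": [
        "Has the model been probed with adversarial or gamed inputs?",
    ],
}

for category, questions in failure_mode_review.items():
    print(f"\n{category}:")
    for question in questions:
        print(f"  - {question}")
```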

The tracker works best as a living reference during AI system design, not just post-incident analysis. Bookmark specific incident types relevant to your use case and check quarterly for new additions.

Limitations to keep in mind

The tracker suffers from reporting bias—incidents that make headlines or academic papers are overrepresented compared to internal corporate failures that stay private. Western incidents, particularly from the US and EU, dominate the dataset due to language and media accessibility.

Not every entry represents a technical failure; some incidents reflect changing social standards around AI ethics rather than system malfunctions. The tracker also can't capture near-misses or incidents that were caught before causing harm.

The taxonomies, while rigorous, impose MIT's academic framework on incidents that might be categorized differently by practitioners or other research institutions.

Tags

AI incidents, risk tracking, accountability, incident analysis, AI safety, risk repository

At a glance

Published: 2024
Jurisdiction: Global
Category: Incident and accountability
Access: Public access
