Source: arXiv

This groundbreaking research addresses a critical gap in AI governance: how do we systematically categorize the actual harms caused by AI systems? Rather than theoretical speculation, this taxonomy is built from over 10,000 documented real-world cases of AI, algorithmic, and automation harms collected from global media reports, academic research, and legal documents. The researchers took a human-centered approach, focusing on how people actually experience harm rather than on technical failure modes. The result is a practical classification system that covers everything from facial recognition bias to generative AI misinformation, providing a common language for discussing AI risks across disciplines and jurisdictions.
Most AI risk frameworks are built top-down by experts making educated guesses about potential harms. This taxonomy flips that approach by starting with documented evidence of actual harm. The researchers analyzed thousands of real cases to identify patterns in how AI systems actually hurt people, not just how they might.
Key differentiators:
The taxonomy also explicitly addresses limitations in existing systems that often miss intersectional harms, systemic impacts, and the lived experiences of affected communities.
The taxonomy organizes AI harms into a hierarchical structure that moves from broad impact areas down to specific harm types. This multilevel approach allows users to zoom in or out depending on their needs: policymakers might work at the high level, while incident response teams need granular categories.
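As an illustration only, the following minimal Python sketch shows what such a multilevel hierarchy could look like in practice. The two leaf examples (facial recognition bias, generative AI misinformation) are drawn from the summary above; every other label is a hypothetical placeholder rather than the paper's actual category names.

```python
from dataclasses import dataclass, field

@dataclass
class HarmCategory:
    """One node in a hierarchical harm taxonomy: broad impact areas at the top,
    specific harm types underneath."""
    name: str
    definition: str = ""
    children: list["HarmCategory"] = field(default_factory=list)

    def find(self, name: str) -> "HarmCategory | None":
        """Depth-first lookup so users can work at whatever level of detail they need."""
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit is not None:
                return hit
        return None

# Placeholder labels only -- the paper's own category names should be substituted here.
taxonomy = HarmCategory("AI, algorithmic, and automation harms", children=[
    HarmCategory("Discrimination and bias", children=[
        HarmCategory("Facial recognition bias"),
    ]),
    HarmCategory("Information harms", children=[
        HarmCategory("Generative AI misinformation"),
    ]),
])

# A policymaker can stop at the broad top-level areas...
print([area.name for area in taxonomy.children])
# ...while an incident response team drills down to specific harm types.
print(taxonomy.find("Facial recognition bias").name)
```

Keeping broad areas and specific harm types as nodes in a single tree lets both audiences query the same structure at different depths.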
Primary dimensions include:
Emerging harm categories the research identifies include:
The taxonomy is designed to be a living document that can evolve as new types of AI systems create new categories of harm.
Primary audiences:
Secondary audiences:
This is particularly valuable for organizations that need to move beyond vague discussions of "AI risks" to specific, actionable categories of harm that can be measured, monitored, and addressed.
For incident response: Use the classification system to categorize AI harm reports consistently across your organization (a minimal data-structure sketch follows this list of uses). This enables better tracking of harm patterns and more targeted mitigation strategies.
For risk assessment: Map your AI systems against the harm categories to identify potential negative impacts you might have missed in traditional risk assessments focused on technical failures.
For policy development: Reference the taxonomy when creating AI governance policies to ensure you're addressing the full spectrum of documented harms, not just the most obvious ones.
For research and monitoring: Use the categories as a framework for systematically collecting and analyzing AI harm cases in your sector or jurisdiction.
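For the incident-response and monitoring uses above, here is a hedged sketch of one way harm reports could be tagged with taxonomy labels and tallied. The category names and example incidents are hypothetical placeholders, not data or labels from the paper.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class HarmReport:
    """One documented incident, tagged with one or more taxonomy categories."""
    summary: str
    system: str
    categories: list[str] = field(default_factory=list)

# Hypothetical example incidents for illustration only.
reports = [
    HarmReport("Face match error led to a wrongful detention", "face-recognition",
               ["Discrimination and bias"]),
    HarmReport("Chatbot gave fabricated medical guidance", "gen-ai-assistant",
               ["Generative AI misinformation"]),
    HarmReport("Screening model filtered out qualified applicants", "hiring-model",
               ["Discrimination and bias"]),
]

# Tally harm patterns by category and by system to target mitigation and monitoring.
by_category = Counter(cat for report in reports for cat in report.categories)
by_system = Counter(report.system for report in reports)
print(by_category.most_common())
print(by_system.most_common())
```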
The researchers provide detailed definitions and examples for each category, making it practical to train teams on consistent classification approaches.
The study employed a mixed-methods approach combining quantitative analysis of harm cases with qualitative input from affected communities and domain experts. Cases were sourced from academic literature, news reports, legal filings, and advocacy organization documentation spanning multiple languages and regions.
Key limitations to consider:
The researchers acknowledge these limitations and position the taxonomy as a starting point for more comprehensive harm tracking systems rather than a definitive catalog.
Published: 2024
Jurisdiction: Global
Category: Risk taxonomies
Access: Public access