
A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms


Summary

This research addresses a critical gap in AI governance: how do we systematically categorize the actual harms caused by AI systems? Rather than relying on theoretical speculation, the taxonomy is built from over 10,000 documented real-world cases of AI, algorithmic, and automation harms collected from global media reports, academic research, and legal documents. The researchers took a human-centered approach, focusing on how people actually experience harm rather than on technical failure modes. The result is a practical classification system that covers everything from facial recognition bias to generative AI misinformation, providing a common language for discussing AI risks across disciplines and jurisdictions.

What makes this different from existing taxonomies

Most AI risk frameworks are built top-down by experts making educated guesses about potential harms. This taxonomy flips that approach by starting with documented evidence of actual harm. The researchers analyzed thousands of real cases to identify patterns in how AI systems actually hurt people, not just how they might.

Key differentiators:

  • Evidence-based foundation: Built from 10,000+ documented harm cases rather than theoretical scenarios
  • Human-centered perspective: Categorizes harms based on human impact rather than technical failure types
  • Global scope: Includes cases from diverse jurisdictions and cultural contexts
  • Current technology coverage: Incorporates emerging risks from generative AI, emotion recognition, and other recent developments
  • Collaborative methodology: Developed through multi-stakeholder input rather than expert-only panels

The taxonomy also explicitly addresses limitations in existing systems that often miss intersectional harms, systemic impacts, and the lived experiences of affected communities.

Core structure and categories

The taxonomy organizes AI harms into a hierarchical structure that moves from broad impact areas down to specific harm types. This multilevel approach allows users to zoom in or out depending on their needs: policymakers might work at the high level, while incident response teams need granular categories.

Primary dimensions include:

  • Individual vs. collective harms
  • Direct vs. indirect impacts
  • Immediate vs. long-term consequences
  • Physical, psychological, economic, and social harm types

Emerging harm categories the research identifies include:

  • Synthetic media manipulation and deepfake abuse
  • AI-generated misinformation at scale
  • Emotion recognition privacy violations
  • Automated content moderation bias
  • Generative AI copyright and consent issues

The taxonomy is designed to be a living document that can evolve as new types of AI systems create new categories of harm.
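
As an illustration only, the multilevel structure described above could be encoded as a simple record that carries both the cross-cutting dimensions and a category path from broad impact area down to specific harm type. The field names and vocabularies below are our own sketch, not taken from the paper:

```python
from dataclasses import dataclass, field
from typing import List

# Controlled vocabularies mirroring the dimensions listed above.
# The exact labels and structure used in the paper may differ.
SCOPE = {"individual", "collective"}
PATHWAY = {"direct", "indirect"}
TIMEFRAME = {"immediate", "long-term"}
HARM_TYPE = {"physical", "psychological", "economic", "social"}


@dataclass
class HarmRecord:
    """One documented harm case, tagged with taxonomy dimensions."""
    description: str
    scope: str                                # individual vs. collective
    pathway: str                              # direct vs. indirect
    timeframe: str                            # immediate vs. long-term
    harm_types: List[str] = field(default_factory=list)
    category_path: List[str] = field(default_factory=list)  # broad area -> specific harm

    def __post_init__(self) -> None:
        # Validate against the controlled vocabularies so that
        # classification stays consistent across teams.
        assert self.scope in SCOPE
        assert self.pathway in PATHWAY
        assert self.timeframe in TIMEFRAME
        assert all(t in HARM_TYPE for t in self.harm_types)


# Example (hypothetical case, illustrative category labels):
case = HarmRecord(
    description="Deepfake audio used to impersonate a company executive",
    scope="individual",
    pathway="direct",
    timeframe="immediate",
    harm_types=["economic", "psychological"],
    category_path=["Synthetic media manipulation", "Deepfake abuse"],
)
```

Keeping the dimensions as constrained fields rather than free text is what makes it possible to roll cases up to the high-level view or drill down to granular categories, as described above.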

Who this resource is for

Primary audiences:

  • AI governance teams developing internal harm monitoring and response processes
  • Policymakers and regulators creating AI oversight frameworks who need standardized harm categories
  • AI safety researchers studying real-world AI impacts rather than theoretical risks
  • Legal professionals working on AI liability cases who need consistent harm classification
  • Journalists and civil society organizations investigating and reporting on AI incidents

Secondary audiences:

  • Product teams building AI systems who want to understand potential negative impacts
  • Insurance companies developing AI liability coverage
  • Academic researchers studying algorithmic accountability
  • International organizations working on AI governance standards

This is particularly valuable for organizations that need to move beyond vague discussions of "AI risks" to specific, actionable categories of harm that can be measured, monitored, and addressed.

Putting the taxonomy to work

For incident response: Use the classification system to categorize AI harm reports consistently across your organization. This enables better tracking of harm patterns and more targeted mitigation strategies.
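
A minimal sketch of what consistent tagging might look like in practice, assuming a hypothetical keyword-to-category map (the categories echo the emerging harm types listed earlier; a real deployment would follow the paper's definitions and trained reviewers, not keyword matching):

```python
from collections import defaultdict

# Hypothetical mapping from report keywords to taxonomy categories.
KEYWORD_TO_CATEGORY = {
    "deepfake": "Synthetic media manipulation and deepfake abuse",
    "misinformation": "AI-generated misinformation at scale",
    "emotion recognition": "Emotion recognition privacy violations",
    "content moderation": "Automated content moderation bias",
}


def tag_report(report_text: str) -> list[str]:
    """Return the taxonomy categories suggested by a harm report."""
    text = report_text.lower()
    return [cat for kw, cat in KEYWORD_TO_CATEGORY.items() if kw in text]


# Illustrative incident log entries.
incidents = [
    "User flagged a deepfake video impersonating a local politician",
    "Automated content moderation removed posts from a minority-language community",
]

# Tally categories across reports to surface harm patterns.
counts = defaultdict(int)
for report in incidents:
    for category in tag_report(report):
        counts[category] += 1

print(dict(counts))
```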

For risk assessment: Map your AI systems against the harm categories to identify potential negative impacts you might have missed in traditional risk assessments focused on technical failures.

For policy development: Reference the taxonomy when creating AI governance policies to ensure you're addressing the full spectrum of documented harms, not just the most obvious ones.

For research and monitoring: Use the categories as a framework for systematically collecting and analyzing AI harm cases in your sector or jurisdiction.
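
For example, a lightweight monitoring loop might count classified cases by category and reporting period to surface trends over time; the record format here is our own illustration, not a format defined by the paper:

```python
from collections import Counter

# Each tuple: (quarter, taxonomy category) for a classified harm case.
# Illustrative data only.
classified_cases = [
    ("2024-Q1", "AI-generated misinformation at scale"),
    ("2024-Q1", "Automated content moderation bias"),
    ("2024-Q2", "AI-generated misinformation at scale"),
    ("2024-Q2", "Synthetic media manipulation and deepfake abuse"),
]

# Count cases per (quarter, category) to track how harm patterns shift.
trend = Counter(classified_cases)
for (quarter, category), n in sorted(trend.items()):
    print(f"{quarter}  {category}: {n}")
```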

The researchers provide detailed definitions and examples for each category, making it practical to train teams on consistent classification approaches.

Research methodology and limitations

The study employed a mixed-methods approach combining quantitative analysis of harm cases with qualitative input from affected communities and domain experts. Cases were sourced from academic literature, news reports, legal filings, and advocacy organization documentation spanning multiple languages and regions.

Key limitations to consider:

  • Reporting bias: Cases that make it into public documentation may not represent the full spectrum of AI harms
  • Cultural context: Some harm types may be more readily reported or recognized in certain jurisdictions
  • Rapidly evolving landscape: New AI technologies create new harm categories faster than they can be systematically studied
  • Definition challenges: Determining what constitutes "AI harm" vs. broader technological or social issues requires judgment calls

The researchers acknowledge these limitations and position the taxonomy as a starting point for more comprehensive harm tracking systems rather than a definitive catalog.

Tags

AI harms · risk taxonomy · algorithmic accountability · AI governance · harm classification · AI safety

At a glance

  • Published: 2024
  • Jurisdiction: Global
  • Category: Risk taxonomies
  • Access: Public access
