This research addresses a critical gap in AI governance: how do we systematically categorize the harms actually being caused by AI systems? Rather than relying on theoretical speculation, the taxonomy is built from over 10,000 documented real-world cases of AI, algorithmic, and automation harms collected from global media reports, academic research, and legal documents. The researchers took a human-centered approach, focusing on how people actually experience harm rather than on technical failure modes. The result is a practical classification system that covers everything from facial recognition bias to generative AI misinformation, providing a common language for discussing AI risks across disciplines and jurisdictions.
Most AI risk frameworks are built top-down by experts making educated guesses about potential harms. This taxonomy flips that approach by starting with documented evidence of actual harm. The researchers analyzed thousands of real cases to identify patterns in how AI systems actually hurt people, not just how they might.
Key differentiators:
- Built bottom-up from over 10,000 documented cases rather than top-down expert judgment
- Centered on how affected people experience harm, not on technical failure modes
- Intended as a common language for discussing AI risks across disciplines and jurisdictions
The taxonomy also explicitly addresses limitations in existing systems that often miss intersectional harms, systemic impacts, and the lived experiences of affected communities.
The taxonomy organizes AI harms into a hierarchical structure that moves from broad impact areas down to specific harm types. This multilevel approach allows users to zoom in or out depending on their needs: policymakers might work at the high level, while incident response teams need granular categories.
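As a concrete illustration of that multilevel structure, here is a minimal Python sketch of a hierarchical harm taxonomy. The node type, method names, and category labels ("Discrimination", "Facial recognition bias", and so on) are hypothetical; the paper's actual hierarchy and terminology may differ.

```python
from dataclasses import dataclass, field

@dataclass
class HarmCategory:
    """One node in a hierarchical harm taxonomy (illustrative, not the paper's schema)."""
    name: str
    definition: str = ""
    children: list["HarmCategory"] = field(default_factory=list)

    def path_to(self, name: str) -> "list[str] | None":
        """Return the path from this node down to the named category,
        so users can report at whatever level of granularity they need."""
        if self.name == name:
            return [self.name]
        for child in self.children:
            sub = child.path_to(name)
            if sub:
                return [self.name] + sub
        return None

# Hypothetical slice of the hierarchy; labels are placeholders.
taxonomy = HarmCategory("AI harms", children=[
    HarmCategory("Discrimination", children=[
        HarmCategory("Facial recognition bias"),
    ]),
    HarmCategory("Misinformation", children=[
        HarmCategory("Generative AI misinformation"),
    ]),
])

# A policymaker can roll a specific incident up to its top-level area:
print(taxonomy.path_to("Facial recognition bias"))
# ['AI harms', 'Discrimination', 'Facial recognition bias']
```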
Primary dimensions include:
The taxonomy is designed to be a living document that can evolve as new types of AI systems create new categories of harm.
Primary audiences:
Secondary audiences:
The researchers provide detailed definitions and examples for each category, making it practical to train teams on consistent classification approaches.
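A minimal sketch of how those per-category definitions and examples could be encoded as annotator training material follows; the contents of CATEGORY_GUIDE below are placeholder text, not quotations from the paper.

```python
# Hypothetical guideline store: one entry per taxonomy category.
CATEGORY_GUIDE = {
    "Facial recognition bias": {
        "definition": "Harm caused by face-analysis systems performing "
                      "unequally across demographic groups.",
        "examples": [
            "Misidentification leading to a wrongful arrest",
            "Higher false-match rates for darker-skinned faces",
        ],
    },
}

def guideline_entry(label: str) -> str:
    """Render one category as a training card for annotators."""
    entry = CATEGORY_GUIDE[label]
    examples = "\n".join(f"  - {e}" for e in entry["examples"])
    return f"{label}\nDefinition: {entry['definition']}\nExamples:\n{examples}"

print(guideline_entry("Facial recognition bias"))
```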
The study employed a mixed-methods approach combining quantitative analysis of harm cases with qualitative input from affected communities and domain experts. Cases were sourced from academic literature, news reports, legal filings, and advocacy organization documentation spanning multiple languages and regions.
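To make that case-collection pipeline concrete, here is a sketch of how one documented harm case might be represented for analysis. The field names (case_id, source, language, region, labels) are assumptions for illustration, not the study's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class SourceType(Enum):
    ACADEMIC = "academic literature"
    NEWS = "news report"
    LEGAL = "legal filing"
    ADVOCACY = "advocacy documentation"

@dataclass(frozen=True)
class HarmCase:
    """One documented harm incident with its source metadata (illustrative)."""
    case_id: str
    description: str
    source: SourceType
    language: str            # ISO 639-1 code of the source document
    region: str
    labels: tuple[str, ...]  # taxonomy categories assigned to this case

case = HarmCase(
    case_id="case-00421",
    description="Automated screening tool rejects qualified applicants.",
    source=SourceType.NEWS,
    language="en",
    region="Global",
    labels=("Discrimination",),
)
```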
Key limitations to consider:
The researchers acknowledge these limitations and position the taxonomy as a starting point for more comprehensive harm tracking systems rather than a definitive catalog.
Published: 2024
Jurisdiction: Global
Category: Risk taxonomies
Access: Public access