ACM
This ACM research paper tackles one of AI governance's most pressing challenges: creating a systematic way to identify, categorize, and mitigate harms from algorithmic systems. Through a comprehensive scoping review of computing literature, researchers developed a taxonomy that goes beyond technical failures to examine the complex sociotechnical interactions where most algorithmic harms actually occur. What sets this work apart is its focus on prevention through classification—rather than just documenting harms after they happen, it provides a structured framework for anticipating and reducing them before deployment.
The research identifies six distinct categories of sociotechnical harm:
Individual Harms
Traditional approaches to AI safety often focus on technical metrics or individual bias detection. This taxonomy shows why that is insufficient: most real-world harms emerge from the interactions between algorithms, social systems, and institutional contexts. The framework's strength lies in its recognition that technical solutions alone cannot address sociotechnical problems.
The taxonomy also provides a common language for interdisciplinary teams. Product managers, ethicists, engineers, and policymakers can use these categories to systematically evaluate potential harms across different domains and stakeholder groups.
The taxonomy is based on existing computing literature, which may not capture all emerging forms of harm or perspectives from affected communities. Consider supplementing this framework with direct stakeholder input and community-based harm definitions.
The categories can overlap in practice—real incidents often span multiple harm types simultaneously. Don't treat them as mutually exclusive when conducting assessments.
This is a research paper, not an implementation guide. You'll need to adapt the taxonomy to your specific context, industry, and regulatory environment.
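The overlap caveat above can be made concrete in an assessment workflow: rather than forcing each incident into a single category, record a set of applicable harm types. The sketch below is a minimal, hypothetical illustration of that idea — the `HarmAssessment` class and the category labels other than "Individual Harms" (which the summary names) are assumptions, not structures from the paper.

```python
from dataclasses import dataclass, field


@dataclass
class HarmAssessment:
    """A single incident record that allows multiple, overlapping harm categories."""
    incident: str
    # A set, not a single value: categories are not mutually exclusive.
    categories: set[str] = field(default_factory=set)
    # Optional per-category rationale, useful for interdisciplinary review.
    notes: dict[str, str] = field(default_factory=dict)

    def tag(self, category: str, note: str = "") -> None:
        """Attach a harm category (and an optional rationale) to this incident."""
        self.categories.add(category)
        if note:
            self.notes[category] = note

    def spans_multiple(self) -> bool:
        """True when the incident crosses more than one harm category."""
        return len(self.categories) > 1


# Example: one incident tagged with two categories.
# "Societal Harms" is a placeholder label for illustration only.
record = HarmAssessment("recommendation system amplifies misinformation")
record.tag("Individual Harms", "exposes users to harmful content")
record.tag("Societal Harms", "degrades the shared information environment")
```

Keeping categories as a set makes the "don't treat them as mutually exclusive" guidance a structural property of the assessment record rather than a convention reviewers must remember.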
Published
2023
Jurisdiction
Global
Category
Incidents and Accountability
Access
Registration required