ACM
This ACM research paper tackles one of AI governance's most pressing challenges: creating a systematic way to identify, categorize, and mitigate harms from algorithmic systems. Through a scoping review of computing literature, the researchers developed a taxonomy that goes beyond technical failures to examine the sociotechnical interactions where most algorithmic harms actually occur. What sets this work apart is its focus on prevention through classification: rather than just documenting harms after they happen, it provides a structured framework for anticipating and reducing them before deployment.
The research identifies six distinct categories of sociotechnical harm:
Individual Harms
Traditional approaches to AI safety often focus on technical metrics or individual bias detection. This taxonomy reveals why that's insufficient—most real-world harms emerge from the complex interactions between algorithms, social systems, and institutional contexts. The framework's strength lies in its recognition that technical solutions alone cannot address sociotechnical problems.
The taxonomy also provides a common language for interdisciplinary teams. Product managers, ethicists, engineers, and policymakers can use these categories to systematically evaluate potential harms across different domains and stakeholder groups.
The taxonomy is based on existing computing literature, which may not capture all emerging forms of harm or perspectives from affected communities. Consider supplementing this framework with direct stakeholder input and community-based harm definitions.
The categories can overlap in practice—real incidents often span multiple harm types simultaneously. Don't treat them as mutually exclusive when conducting assessments.
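One practical way to honor that caveat in an assessment workflow is to record a set of taxonomy categories per incident rather than a single label. The sketch below is a minimal, hypothetical Python illustration, not anything from the paper: the category names after the first are placeholder stand-ins (this summary names only "Individual Harms"), and the incident record is an invented example structure.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class HarmCategory(Enum):
    """Placeholder labels standing in for the taxonomy's six categories."""
    INDIVIDUAL = auto()   # the one category named in this summary
    CATEGORY_2 = auto()   # remaining members are hypothetical stand-ins
    CATEGORY_3 = auto()
    CATEGORY_4 = auto()
    CATEGORY_5 = auto()
    CATEGORY_6 = auto()


@dataclass
class HarmAssessment:
    """One assessed incident; categories is a set because harm types overlap."""
    incident_id: str
    description: str
    categories: set[HarmCategory] = field(default_factory=set)

    def tag(self, *cats: HarmCategory) -> None:
        """Attach one or more harm categories; tags are not mutually exclusive."""
        self.categories.update(cats)


# Usage: a single incident can carry several harm types at once.
incident = HarmAssessment("INC-042", "Chatbot discloses a user's personal data")
incident.tag(HarmCategory.INDIVIDUAL, HarmCategory.CATEGORY_2)
assert len(incident.categories) == 2
```

Modeling categories as a set rather than a single field keeps later analysis honest: incidents spanning multiple harm types show up under each relevant category instead of being forced into one bucket.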
This is a research paper, not an implementation guide. You'll need to adapt the taxonomy to your specific context, industry, and regulatory environment.
Published
2023
Jurisdiction
Global
Category
Incident and accountability
Access
Registration required
US Executive Order on Safe, Secure, and Trustworthy AI
Regulations and laws • White House
Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Regulations and laws • U.S. Government
Highlights of the 2023 Executive Order on Artificial Intelligence
Regulations and laws • Congressional Research Service
VerifyWise helps you implement AI governance frameworks, track compliance, and manage risks across your AI systems.