Incidents and Accountability
Everything related to failures and accountability.
17 Resources
Partnership on AI Incident Database
A comprehensive database cataloging AI incidents and harms. The database enables researchers and practitioners to learn from past AI failures, identify patterns, and develop preventive measures.
AIAAIC Repository
The AI, Algorithmic, and Automation Incidents and Controversies repository tracks incidents involving AI and automated systems. It provides detailed case studies with timelines, stakeholders, and outcomes.
EU AI Act Incident Reporting Requirements
The EU AI Act establishes mandatory incident reporting requirements for high-risk AI systems. Providers must report serious incidents and malfunctions to relevant authorities within specified timeframes.
US Algorithmic Accountability Act (Proposed)
Proposed US legislation requiring companies to conduct impact assessments on automated decision systems. The bill would establish accountability requirements for high-risk algorithmic systems affecting critical decisions.
AI Incident Tracker
A tracking tool that provides detailed analysis of AI incidents from 2015 to 2024. The tracker uses the Causal and Domain Taxonomies from the MIT AI Risk Repository to categorize incidents and analyze how they have evolved over time.
AI Incident Database
The AI Incident Database is a comprehensive collection of over 1,200 documented cases where AI systems have caused safety, fairness, or other real-world problems. It serves as a tool to help stakeholders better understand, anticipate, and mitigate AI-related risks through systematic incident documentation and analysis.
Tracking AI incidents: OECD AIM and AIAAIC Repository
A resource covering two major AI incident tracking initiatives: the OECD AI Incidents and Hazards Monitor (AIM) and the AIAAIC Repository. These efforts focus on documenting real-world AI incidents to enhance transparency and inform governance decisions.
AIAAIC Repository
The AIAAIC Repository is an open, public interest resource that documents incidents and controversies related to artificial intelligence, algorithms, and automation. It provides tools and metrics designed to track and analyze AI-related incidents for accountability and governance purposes.
Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities
A comprehensive accountability framework developed by the U.S. GAO to help federal agencies and other entities ensure responsible AI implementation. The framework is organized around four complementary principles addressing governance, data, performance, and monitoring to promote accountability in AI systems.
ITI's AI Accountability Framework
An accountability framework developed by the Information Technology Industry Council that delineates how responsibility is shared among the different actors involved in developing and deploying AI systems. The framework addresses the roles of various stakeholders, including integrators, and defines how accountability should be distributed based on each actor's function in the AI ecosystem.
Artificial Intelligence Accountability Policy Report
A policy report by NTIA examining AI accountability frameworks and their implementation. The report references and builds upon NIST's AI Risk Management Framework, focusing on developing trustworthy and responsible AI systems within federal governance structures.
Resistance and refusal to algorithmic harms: Varieties of 'knowledge projects'
This research examines various forms of resistance and refusal to algorithmic harms through different 'knowledge projects'. The work builds on investigative journalism such as ProPublica's Machine Bias series, which revealed how algorithmic systems can replicate and amplify racial biases in criminal justice and other domains where algorithmic decision systems are deployed.
Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
This research paper presents a scoping review and taxonomy of sociotechnical harms caused by algorithmic systems. The study uses reflexive thematic analysis of computing research to categorize different types of harms and provides a framework for harm reduction in algorithmic systems.
After Harm: A Plea for Moral Repair after Algorithms Have Failed
This research paper examines the concept of moral repair as a response to algorithmic harm, moving beyond traditional offender-centric approaches to focus on what victims actually need. Using the Ofqual grading controversy as a case study, it argues for awareness of the algorithmic imprint and emphasizes the importance of addressing the extended consequences of algorithmic failures through victim-centered moral repair processes.
AI-Driven Incident Response: Definition and Components
This resource provides guidance on AI-driven incident response systems that offer structured decision-making frameworks for cybersecurity threats. It focuses on how AI can deliver data-backed insights and suggested actions based on analysis of threat environments and historical incidents.
AI Incident Response Framework, Version 1.0
A framework developed by the Coalition for Secure AI that provides security teams with structured approaches, tools, and knowledge to protect AI systems from emerging threats. It offers incident response guidance specifically tailored to the unique challenges of AI technology deployments.
AI Incident Response Plans: Checklist & Best Practices
A practical guide providing checklists and best practices for developing AI incident response plans. The resource covers key elements including assigning response coordinators, establishing communication channels, and documenting procedures for detecting, assessing, containing, and recovering from AI-related incidents.