Risk Taxonomies
Structured risk thinking.
21 Resources
MIT AI Risk Repository
The MIT AI Risk Repository is a comprehensive database of AI risks identified from academic literature, policy documents, and industry reports. It provides a structured taxonomy for categorizing and understanding the diverse landscape of AI-related risks.
OWASP Top 10 for LLM Applications
The OWASP Top 10 for LLM Applications identifies the most critical security risks in large language model applications. It covers prompt injection, data leakage, inadequate sandboxing, unauthorized code execution, and other LLM-specific vulnerabilities.
NIST Adversarial Machine Learning Taxonomy
NIST's taxonomy of adversarial machine learning attacks and mitigations. It categorizes attacks into evasion, poisoning, and privacy attacks, providing a structured framework for understanding and defending against ML security threats.
AI Incident Database
The AI Incident Database catalogs real-world harms caused by AI systems. It provides a searchable archive of incidents to help researchers, developers, and policymakers learn from past failures and prevent future harms.
Mapping AI Risk Mitigations
A living, systematic review and database of AI risk frameworks that maps AI risk mitigations. The repository includes a domain taxonomy covering multi-agent risks and serves as a centralized resource for understanding the various approaches to AI risk assessment.
AI Risk Repository Report
A comprehensive repository that categorizes AI risks using two taxonomies: a Causal Taxonomy that classifies risks by causation (human vs AI), intent (intentional vs unintentional), and timing (pre- vs post-deployment), and a Domain Taxonomy that organizes risks into seven thematic areas. The repository serves as a systematic framework for understanding and categorizing various types of AI-related risks.
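The two taxonomies combine naturally into a record type, sketched below. The field and value names are hypothetical illustrations of the dimensions the repository describes (causation, intent, timing, plus a domain label), not the repository's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Entity(Enum):
    """Causal Taxonomy: who caused the risk."""
    HUMAN = "human"
    AI = "ai"

class Intent(Enum):
    """Causal Taxonomy: was the outcome intended."""
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):
    """Causal Taxonomy: when the risk arises."""
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass
class RiskEntry:
    """One risk, classified under both taxonomies (illustrative fields)."""
    description: str
    entity: Entity   # human vs AI causation
    intent: Intent   # intentional vs unintentional
    timing: Timing   # pre- vs post-deployment
    domain: str      # one of the seven thematic areas in the Domain Taxonomy

risk = RiskEntry(
    description="Model leaks personal data in responses",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="Privacy & security",
)
```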
MITRE ATLAS: Adversarial Threat Landscape for Artificial-Intelligence Systems
MITRE ATLAS is a knowledge base of adversarial tactics, techniques, and case studies for machine learning systems based on real-world observations. It provides a framework for understanding and defending against threats to AI systems by cataloging attack patterns and mitigation strategies.
MITRE ATLAS Framework - Guide to Securing AI Systems
The MITRE ATLAS Framework is a comprehensive knowledge base that catalogs adversary tactics, techniques, and procedures specifically targeting artificial intelligence systems. It provides security practitioners with real-world case studies and practical guidance for identifying and mitigating threats against AI deployments.
MITRE ATLAS: Adversarial Threat Landscape for Artificial-Intelligence Systems
MITRE ATLAS catalogs adversarial tactics and techniques used against AI systems. The framework provides 15 tactics, 66 techniques, 46 sub-techniques, 26 mitigations, and 33 real-world case studies to help organizations understand and defend against AI-specific security threats.
AI Security Overview
A technical framework from OWASP that focuses on AI security threats, controls, and related practices. It provides a structured approach to understanding and managing security risks in AI systems, with integration planned into the OpenCRE catalog of common security requirements.
AI Exchange
OWASP AI Exchange is a community framework for mapping AI attack surfaces and codifying AI-specific security testing methodologies. It serves as a resource for organizations to implement AI risk mitigation standards and security practices at scale.
OWASP Generative AI Security Project
OWASP's comprehensive security framework for generative AI systems, covering autonomous agents, multi-step AI workflows, and data protection from leaks and tampering. The project provides tools, testing methodologies including adversarial red teaming, and guidance for addressing top GenAI security risks including deepfake threats.
List of Taxonomies
A repository containing multiple taxonomies for categorizing AI incidents and risks. It includes detailed taxonomies covering the technological and process factors that contribute to AI incidents, with cross-references to the MIT AI Risk Repository for broader risk categorization.
Standardised schema and taxonomy for AI incident databases in critical digital infrastructure
This research establishes a standardized schema for AI incident reporting to enhance data collection consistency across databases. It introduces a taxonomy for classifying AI incidents specifically in critical digital infrastructure, improving the comprehensiveness and clarity of incident data for better risk management.
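A standardized incident schema of this kind can be sketched as a minimal record type. The fields below are hypothetical placeholders illustrating the sort of information such a schema might standardize; they are not the paper's actual schema.

```python
from dataclasses import asdict, dataclass, field
from datetime import date

@dataclass
class IncidentRecord:
    """Minimal AI incident record (hypothetical fields, not the paper's schema)."""
    incident_id: str
    occurred_on: date
    sector: str                 # e.g. "energy", "telecom", "finance"
    system_description: str
    harm_type: str              # e.g. "service outage", "data exposure"
    severity: int               # e.g. 1 (negligible) .. 5 (critical)
    sources: list[str] = field(default_factory=list)

    def to_dict(self) -> dict:
        """Serialize to plain types for cross-database exchange."""
        d = asdict(self)
        d["occurred_on"] = self.occurred_on.isoformat()
        return d

record = IncidentRecord(
    incident_id="CDI-2024-001",
    occurred_on=date(2024, 3, 1),
    sector="energy",
    system_description="ML-based grid load forecaster",
    harm_type="service outage",
    severity=3,
)
```

The point of a shared schema is that `to_dict()` output from one database can be ingested by another without field-mapping guesswork, which is the consistency gap the paper targets.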
AI Incident Tracker Harm Taxonomy
A comprehensive taxonomy and database tracking AI incidents categorized by harm levels and threat characteristics including novelty, autonomy, and imminence. The repository provides temporal analysis of incident patterns across different categories and includes national security impact assessments for each documented AI incident.
A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms
This research presents a comprehensive taxonomy for categorizing AI, algorithmic, and automation harms based on analysis of over 10,000 real-world cases from global media, research, and legal reports. The taxonomy addresses limitations in existing classification systems and incorporates emerging risks from generative AI and emotion recognition technologies.
A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms
This research paper presents a collaborative, human-centered taxonomy for categorizing AI, algorithmic, and automation harms. The authors argue that existing taxonomies are often narrow and overlook important perspectives, proposing a more comprehensive framework that better serves diverse stakeholders beyond just practitioners and government.
Taxonomy of Failure Modes in AI Agents
Microsoft's whitepaper presents a comprehensive taxonomy of failure modes in AI agents, developed through internal red teaming activities. The research aims to enhance safety and security in AI systems by cataloguing realistic failures and risks in agentic systems.
Failure Modes in Machine Learning
A comprehensive document that catalogues various failure modes in machine learning systems, covering both adversarial attacks and inherent design failures. The resource aims to provide a unified reference for understanding how ML systems can fail in practice.
Taxonomy of Failure Modes in Agentic AI Systems
A whitepaper that provides a comprehensive taxonomy of failure modes specifically for agentic AI systems, distinguishing between novel failures unique to agentic AI and existing failure modes. The analysis is grounded in Microsoft's Responsible AI Standard and maps failures across multiple dimensions to help identify and categorize potential risks.
OWASP Vendor Evaluation Criteria for AI Red Teaming
OWASP evaluation criteria for AI red teaming vendors and tools, covering RAG pipelines, agentic systems, and adversarial testing of AI applications.