Standards and Certifications
Formal standards and certification systems.
26 resources
ISO/IEC 42001:2023 - AI Management System
ISO/IEC 42001 is the world's first international standard for AI management systems. It provides a framework for organizations to establish, implement, maintain, and continually improve an AI management system. The standard addresses AI-specific risks and opportunities, ethical considerations, and responsible AI development and deployment.
IEEE 7000-2021: the standard for embedding ethics into system design
IEEE 7000 explained: five-phase methodology for values investigation, translation into requirements, ethical system design, verification, and ongoing monitoring. 94-page standard available via IEEE Xplore.
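The five-phase sequence above is essentially an ordered workflow, which can be sketched as a minimal phase tracker. The phase names come from the description; the tracker class and its methods are illustrative, not part of the standard.

```python
from dataclasses import dataclass, field

# The five IEEE 7000-2021 phases named in the description above.
PHASES = [
    "values investigation",
    "translation into requirements",
    "ethical system design",
    "verification",
    "ongoing monitoring",
]

@dataclass
class EthicsProcess:
    """Tracks which IEEE 7000-style phases a project has completed, in order."""
    completed: list = field(default_factory=list)

    def complete(self, phase: str) -> None:
        # Phases must be completed in the standard's prescribed order.
        expected = PHASES[len(self.completed)]
        if phase != expected:
            raise ValueError(f"expected phase {expected!r}, got {phase!r}")
        self.completed.append(phase)

    @property
    def done(self) -> bool:
        return len(self.completed) == len(PHASES)

proc = EthicsProcess()
for p in PHASES:
    proc.complete(p)
print(proc.done)  # True
```

The ordering check reflects the standard's intent that values investigation precedes requirements, which precede design and verification.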
ISO/IEC 27001:2022 - Information Security Management
ISO/IEC 27001 is the international standard for information security management systems. While not AI-specific, it provides foundational security controls essential for AI systems handling sensitive data. Many AI governance frameworks reference ISO 27001 as a baseline security requirement.
IEEE 7001 Standard for Transparency of Autonomous Systems
IEEE Standard 7001-2021 provides a comprehensive framework for ensuring transparency in autonomous and semi-autonomous systems, including AI models. Published by the Institute of Electrical and Electronics Engineers (IEEE), the standard establishes requirements and guidelines for making algorithmic decision-making transparent, interpretable, and accountable to stakeholders. It is particularly relevant for high-stakes sectors such as healthcare, finance, and legal systems, where algorithmic transparency is essential for trust, compliance, and ethical deployment. IEEE 7001 offers structured approaches for documenting system behavior, explaining automated decisions, and ensuring that AI systems can be audited and understood by relevant parties, making it a key reference for organizations implementing responsible AI practices.
ISO/IEC 42001:2023 - AI management systems
ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. This standard provides a structured framework for organizations to manage AI-related risks and opportunities systematically.
ISO/IEC 42001:2023 Artificial Intelligence Management System
ISO/IEC 42001:2023 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. This standard provides a structured framework for organizations to manage AI systems responsibly and ensure compliance with governance requirements.
ISO/IEC 42001: AI Management Systems Standard
ISO/IEC 42001 is an international standard that establishes requirements for AI management systems to ensure responsible development, deployment and operation of AI systems. The standard provides a foundation for AI governance and regulatory alignment, supporting organizations in successful AI adoption and broader digital transformation initiatives.
ISO/IEC 23894:2023 — AI risk management standard, how it extends ISO 31000
ISO/IEC 23894:2023 in plain language — a lifecycle-based AI risk management standard that extends ISO 31000 with AI-specific risks: algorithmic bias, model drift, explainability gaps, and societal impact. Complements ISO/IEC 42001 and NIST AI RMF.
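A minimal sketch of a risk register built around the four AI-specific risk areas named above, combined with an ISO 31000-style likelihood-by-impact scoring. The 1-5 scales and the scoring formula are illustrative assumptions, not prescribed by either standard.

```python
from dataclasses import dataclass

# The four AI-specific risk areas named in the description above.
AI_RISK_AREAS = {"algorithmic bias", "model drift", "explainability gaps", "societal impact"}

@dataclass
class RiskEntry:
    area: str          # one of AI_RISK_AREAS
    likelihood: int    # 1 (rare) .. 5 (almost certain) -- illustrative scale
    impact: int        # 1 (negligible) .. 5 (severe)   -- illustrative scale

    def __post_init__(self):
        if self.area not in AI_RISK_AREAS:
            raise ValueError(f"unknown AI risk area: {self.area!r}")

    @property
    def score(self) -> int:
        # Classic ISO 31000-style risk matrix: likelihood x impact.
        return self.likelihood * self.impact

register = [
    RiskEntry("model drift", likelihood=4, impact=3),
    RiskEntry("algorithmic bias", likelihood=2, impact=5),
]
# Triage the register from highest to lowest risk score.
register.sort(key=lambda r: r.score, reverse=True)
print([r.area for r in register])  # ['model drift', 'algorithmic bias']
```

In a lifecycle-based process like ISO/IEC 23894's, entries such as model drift would be re-scored after each monitoring cycle rather than assessed once at design time.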
The IEEE Global Initiative 2.0 on Ethics of Autonomous and Intelligent Systems
IEEE's Global Initiative 2.0 focuses on promoting ethical practices in autonomous and intelligent systems through a 'do no harm' philosophy and engineering excellence. The initiative establishes AI Safety Champions communities and promotes awareness of the IEEE 7000 Series standards for ethical design processes.
Ethical Considerations for AI Systems
An IEEE standard that addresses ethical concerns related to AI systems that can make autonomous decisions and handle personal information without human input. The standard aims to educate government and industry stakeholders on implementing mechanisms to mitigate ethical risks in AI system design.
IEEE 7000 Standard for Embedding Human Values and Ethical Considerations in Technology Design
IEEE 7000-2021 is a published standard that provides structured methodologies for embedding human values and ethical considerations directly into technology design and AI systems. The standard aims to ensure that algorithmic decisions protect human life and values by establishing frameworks for ethical software development. It is part of IEEE's broader 7000 series of standards addressing ethical technology implementation.
Artificial Intelligence Risk Management Framework (AI RMF 1.0)
The NIST AI Risk Management Framework (AI RMF 1.0) provides a structured approach for organizations to identify, assess, and manage risks associated with artificial intelligence systems. This living document establishes guidelines and best practices for responsible AI development and deployment across various sectors.
NIST AI Risk Management Framework: the four-pillar approach to trustworthy AI
NIST AI RMF 1.0 explained: the Govern, Map, Measure, Manage cycle for AI risk management. Voluntary, sector-agnostic framework released January 2023 with companion playbook and implementation guidance.
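The Govern, Map, Measure, Manage cycle described above can be sketched as an ordered pass over the four functions. The function names are from the framework; the example actions attached to each are illustrative, not drawn from the RMF itself.

```python
# The four NIST AI RMF 1.0 functions, in the order named in the description above.
AI_RMF_FUNCTIONS = ["Govern", "Map", "Measure", "Manage"]

def rmf_cycle(actions_by_function: dict) -> list:
    """Run one Govern -> Map -> Measure -> Manage iteration, logging each action."""
    log = []
    for fn in AI_RMF_FUNCTIONS:
        for action in actions_by_function.get(fn, []):
            log.append(f"{fn}: {action}")
    return log

# Hypothetical actions for one iteration of the cycle.
log = rmf_cycle({
    "Govern": ["assign AI risk owner"],
    "Map": ["inventory model use cases"],
    "Measure": ["evaluate bias metrics"],
    "Manage": ["prioritize mitigations"],
})
print(log[0])  # Govern: assign AI risk owner
```

Because the framework is a cycle, `rmf_cycle` would be re-run as the system and its context evolve, with Govern activities spanning every iteration.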
NIST AI Risk Management Framework Playbook
A comprehensive playbook developed by NIST in collaboration with private sector partners that provides guidance on navigating and implementing the AI Risk Management Framework. It suggests practical ways to incorporate trustworthiness considerations throughout the entire AI system lifecycle, from design and development to deployment and use.
ISO/IEC 22989:2022 - Artificial intelligence concepts and terminology
This ISO/IEC standard establishes official terminology for artificial intelligence and describes foundational concepts in the AI field. It provides standardized definitions and terminology that can be used across AI governance, development, and implementation contexts.
Information technology — Artificial intelligence — Artificial intelligence concepts and terminology
ISO/IEC 22989:2022 establishes standardized terminology for artificial intelligence and describes fundamental concepts in the AI field. The standard is designed to support the development of other AI standards and facilitate clear communication among diverse stakeholders and organizations working with AI technologies.
ISO/IEC 22989 - Artificial Intelligence - Concepts and Terminology
This international standard establishes fundamental terminology and concepts for artificial intelligence systems. It introduces key AI properties including transparency, explainability, robustness, reliability, resilience, safety, security, privacy, and risk management, serving as a foundational reference for other AI-related standards.
ISO/IEC 23053:2022 - Framework for AI systems using machine learning
ISO/IEC 23053:2022 provides a conceptual framework and shared terminology for describing artificial intelligence systems that use machine learning. It defines the components and functions of ML-based AI systems within the broader AI ecosystem.
ISO/IEC 23053: AI Systems Framework for Machine Learning
ISO/IEC 23053 defines the essential components that constitute an AI system using machine learning. The standard decomposes these systems into logical functional blocks, establishing a common vocabulary and conceptual framework for AI system architecture.
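The decomposition into logical functional blocks can be sketched as a small pipeline model with a wiring check. The block names below describe a typical ML pipeline and are NOT quoted from the standard; they only illustrate the idea of decomposing an ML system into blocks with defined inputs and outputs.

```python
from dataclasses import dataclass

@dataclass
class FunctionalBlock:
    """One logical block of an ML-based AI system (illustrative, not from ISO/IEC 23053)."""
    name: str
    inputs: list
    outputs: list

pipeline = [
    FunctionalBlock("data acquisition", inputs=["raw sources"], outputs=["dataset"]),
    FunctionalBlock("model training", inputs=["dataset"], outputs=["trained model"]),
    FunctionalBlock("model evaluation", inputs=["trained model"], outputs=["metrics"]),
    FunctionalBlock("inference", inputs=["trained model", "request"], outputs=["prediction"]),
]

# Check the blocks chain consistently: every input is either external
# ("raw sources", "request") or produced by an upstream block.
produced = {"raw sources", "request"}
for block in pipeline:
    assert set(block.inputs) <= produced, f"{block.name} is missing inputs"
    produced |= set(block.outputs)
print("pipeline wiring is consistent")
```

The value of such a decomposition is exactly this kind of checkable common vocabulary: each block's interface is explicit, so stakeholders can discuss and audit the system architecture block by block.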
Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
ISO/IEC 23053:2022 establishes a comprehensive framework for describing generic AI systems that use machine learning technology. This international standard provides structured guidance for understanding and implementing AI/ML systems across various applications and industries.
IEEE 7001-2021 - IEEE Standard for Transparency of Autonomous Systems
This IEEE standard provides a framework to help developers of autonomous systems review and design features that make their systems more transparent. The framework establishes requirements for transparency features, the levels of transparency those features provide to different stakeholders, and implementation guidance for developers.
IEEE Standard for Transparency of Autonomous Systems (IEEE 7001-2021)
IEEE 7001-2021 is a technical standard that establishes requirements and guidelines for transparency in autonomous and intelligent systems. The standard provides a framework for organizations to implement transparency measures that enable stakeholders to understand how autonomous systems make decisions and operate.
ISO/IEC 38507:2022 - Information technology — Governance of IT — Governance implications of the use of artificial intelligence by organizations
This international standard provides guidance for members of governing bodies of organizations to enable and govern the use of Artificial Intelligence (AI). It focuses on the governance implications that arise when organizations adopt AI technologies within their existing IT governance frameworks.
ISO/IEC 38507:2022 - Governance implications of the use of artificial intelligence by organizations
ISO/IEC 38507:2022 is an international standard that provides guidance on the governance implications when organizations use artificial intelligence systems. It helps organizations understand and manage the governance challenges that arise from AI implementation and usage.
ISO/IEC 38507: AI Governance Framework for Organizations
ISO/IEC 38507 provides governance guidance for organizations implementing AI technologies. The standard balances AI innovation with responsible use, supporting alignment with organizational objectives and regulatory compliance requirements.
AIGP certification (IAPP): exam structure, syllabus, pass mark, and study tips
Everything candidates need before sitting the IAPP AIGP exam — 100 questions, 2.75 hours, 300 pass mark, four domains, plus what to study and what to skip.