Formal Standards and Certification Systems.
28 Resources
ISO/IEC 42001 is the world's first international standard for AI management systems. It provides a framework for organizations to establish, implement, maintain, and continually improve an AI management system. The standard addresses AI-specific risks and opportunities, ethical considerations, and responsible AI development and deployment.
ISO/IEC 23894:2023 in plain language: lifecycle-based AI risk assessment covering algorithmic bias, model drift, explainability, and societal impact. Complements ISO 42001 and NIST AI RMF.
IEEE 7000 explained: five-phase methodology for values investigation, translation into requirements, ethical system design, verification, and ongoing monitoring. 94-page standard available via IEEE Xplore.
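As a toy illustration of the sequential nature of the five phases, the sketch below tracks which phase comes next; the phase names follow this entry's summary, not the standard's official process names, and the helper function is hypothetical.

```python
from typing import Optional

# Phase names as summarized in this entry (illustrative, not the
# standard's official process names).
PHASES = [
    "Values investigation",
    "Translation into requirements",
    "Ethical system design",
    "Verification",
    "Ongoing monitoring",
]

def next_phase(completed: set) -> Optional[str]:
    """Return the first phase not yet completed, enforcing sequential order."""
    for phase in PHASES:
        if phase not in completed:
            return phase
    return None  # all five phases done

print(next_phase({"Values investigation"}))  # Translation into requirements
```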
ISO/IEC 27001 is the international standard for information security management systems. While not AI-specific, it provides foundational security controls essential for AI systems handling sensitive data. Many AI governance frameworks reference ISO 27001 as a baseline security requirement.
IEEE 7001-2021 provides a comprehensive framework for ensuring transparency in autonomous and semi-autonomous systems, including AI models. Published by the Institute of Electrical and Electronics Engineers (IEEE), the standard establishes requirements and guidelines for making algorithmic decision-making transparent, interpretable, and accountable to stakeholders. It is particularly important in high-stakes sectors such as healthcare, finance, and legal systems, where transparency underpins trust, compliance, and ethical deployment. IEEE 7001 offers structured approaches for documenting system behavior, explaining automated decisions, and ensuring that AI systems can be audited and understood by relevant parties, making it an essential reference for organizations implementing responsible AI practices.
ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. This standard provides a structured framework for organizations to manage AI-related risks and opportunities systematically.
ISO/IEC 42001:2023 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organizations. This standard provides a structured framework for organizations to manage AI systems responsibly and ensure compliance with governance requirements.
ISO/IEC 42001 is an international standard that establishes requirements for AI management systems to ensure responsible development, deployment and operation of AI systems. The standard provides a foundation for AI governance and regulatory alignment, supporting organizations in successful AI adoption and broader digital transformation initiatives.
ISO/IEC 23894 is a voluntary international standard that provides a practical, lifecycle-based framework for identifying, assessing, and mitigating AI-specific risks. It complements other governance standards like ISO/IEC 42001 and the NIST AI RMF, offering organizations structured guidance for managing AI-related risks throughout the AI system lifecycle.
ISO/IEC 23894 provides strategic guidance to organizations across all sectors for managing risks associated with AI development and deployment. The standard offers frameworks for integrating risk management practices into AI-driven activities and operations.
IEEE's Global Initiative 2.0 focuses on promoting ethical practices in autonomous and intelligent systems through a 'do no harm' philosophy and engineering excellence. The initiative establishes AI Safety Champions communities and promotes awareness of the IEEE 7000 Series standards for ethical design processes.
An IEEE standard that addresses ethical concerns related to AI systems that can make autonomous decisions and handle personal information without human input. The standard aims to educate government and industry stakeholders on implementing mechanisms to mitigate ethical risks in AI system design.
IEEE 7000 (published as IEEE 7000-2021) provides structured methodologies for embedding human values and ethical considerations directly into technology design and AI systems. The standard aims to ensure that algorithmic decisions protect human life and values by establishing frameworks for ethical software development. It is part of IEEE's broader 7000 series of standards addressing ethical technology implementation.
The NIST AI Risk Management Framework (AI RMF 1.0) provides a structured approach for organizations to identify, assess, and manage risks associated with artificial intelligence systems. This living document establishes guidelines and best practices for responsible AI development and deployment across various sectors.
NIST AI RMF 1.0 explained: the Govern, Map, Measure, Manage cycle for AI risk management. Voluntary, sector-agnostic framework released January 2023 with companion playbook and implementation guidance.
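One way an organization might operationalize the four functions is as the axes of an internal risk register. The sketch below is a hypothetical illustration (the class and field names are not part of the framework itself), assuming each identified risk is tagged with the RMF function under which it is being handled.

```python
from dataclasses import dataclass, field
from enum import Enum

class RMFFunction(Enum):
    """The four core functions of NIST AI RMF 1.0."""
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class RiskItem:
    """Hypothetical record tying an identified AI risk to an RMF function."""
    description: str
    function: RMFFunction
    mitigated: bool = False

@dataclass
class RiskRegister:
    """Minimal register grouping risk items by RMF function."""
    items: list = field(default_factory=list)

    def add(self, description: str, function: RMFFunction) -> None:
        self.items.append(RiskItem(description, function))

    def open_items(self, function: RMFFunction) -> list:
        """All unmitigated risks filed under the given function."""
        return [i for i in self.items if i.function is function and not i.mitigated]

register = RiskRegister()
register.add("Document model provenance and intended use", RMFFunction.MAP)
register.add("Track performance drift on validation data", RMFFunction.MEASURE)
print(len(register.open_items(RMFFunction.MAP)))  # 1
```

Grouping by function rather than by system keeps the register aligned with the framework's vocabulary, which simplifies mapping internal records to the companion playbook's suggested actions.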
A comprehensive playbook developed by NIST in collaboration with private sector partners that provides guidance on navigating and implementing the AI Risk Management Framework. It suggests practical ways to incorporate trustworthiness considerations throughout the entire AI system lifecycle, from design and development to deployment and use.
This ISO/IEC standard (ISO/IEC 22989) establishes official terminology for artificial intelligence and describes foundational concepts in the AI field. It provides standardized definitions and terminology that can be used across AI governance, development, and implementation contexts.
ISO/IEC 22989:2022 establishes standardized terminology for artificial intelligence and describes fundamental concepts in the AI field. The standard is designed to support the development of other AI standards and facilitate clear communication among diverse stakeholders and organizations working with AI technologies.
This international standard establishes fundamental terminology and concepts for artificial intelligence systems. It introduces key AI properties including transparency, explainability, robustness, reliability, resilience, safety, security, privacy, and risk management, serving as a foundational reference for other AI-related standards.
ISO/IEC 23053:2022 provides a conceptual framework and shared terminology for describing artificial intelligence systems that use machine learning. It defines the components and functions of ML-based AI systems within the broader AI ecosystem.
ISO/IEC 23053 defines the essential components that constitute an AI system using machine learning. The standard decomposes these systems into logical functional blocks, establishing a common vocabulary and conceptual framework for AI system architecture.
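To illustrate the kind of functional decomposition the standard describes, here is a minimal sketch; the block names and structure below are illustrative assumptions, not ISO/IEC 23053's official vocabulary.

```python
from dataclasses import dataclass

@dataclass
class FunctionalBlock:
    """Illustrative functional block of an ML-based AI system."""
    name: str
    inputs: list
    outputs: list

# Hypothetical decomposition of an ML pipeline into functional blocks;
# the real standard defines its own set of blocks and terms.
pipeline = [
    FunctionalBlock("Data acquisition", ["raw sources"], ["dataset"]),
    FunctionalBlock("Training", ["dataset"], ["trained model"]),
    FunctionalBlock("Inference", ["trained model", "input sample"], ["prediction"]),
]

for block in pipeline:
    print(f"{block.name}: {block.inputs} -> {block.outputs}")
```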
ISO/IEC 23053:2022 establishes a comprehensive framework for describing generic AI systems that use machine learning technology. This international standard provides structured guidance for understanding and implementing AI/ML systems across various applications and industries.
This IEEE standard provides a framework to help developers of autonomous systems review and design features that make their systems more transparent. The framework establishes requirements for transparency features, the levels of transparency those features provide to stakeholders, and implementation guidelines for developers.
IEEE 7001-2021 is a technical standard that establishes requirements and guidelines for transparency in autonomous and intelligent systems. The standard provides a framework for organizations to implement transparency measures that enable stakeholders to understand how autonomous systems make decisions and operate.
This international standard provides guidance for members of governing bodies of organizations to enable and govern the use of Artificial Intelligence (AI). It addresses the governance implications that arise when organizations adopt AI technologies within their IT governance frameworks.
ISO/IEC 38507:2022 is an international standard that provides guidance on the governance implications when organizations use artificial intelligence systems. It helps organizations understand and manage the governance challenges that arise from AI implementation and usage.
ISO/IEC 38507 provides a comprehensive governance framework for organizations implementing AI technologies. The standard balances AI innovation with responsible use, ensuring alignment with organizational objectives and regulatory compliance requirements.
The IAPP AIGP (AI Governance Professional) certification covers four domains: governance foundations, risk management, technical controls, and organizational integration. 100-question exam, 2.75 hours, scaled scoring with 300 pass threshold. Aligned with EU AI Act, NIST AI RMF, and ISO 42001.