ISO/IEC 23894:2023 bridges the gap between traditional enterprise risk management and the unique challenges of AI systems. The standard takes the proven ISO 31000 risk management framework and extends it specifically for AI contexts, addressing risks that simply don't exist in conventional IT systems: from algorithmic bias and model drift to societal impact and AI explainability. Unlike generic risk frameworks, it provides concrete guidance for identifying, assessing, and mitigating risks throughout the entire AI lifecycle, from initial concept through deployment and ongoing operations.
Primary audience:
ISO/IEC 23894 recognizes that AI systems create fundamentally new risk categories that don't map neatly onto traditional IT risk frameworks:
AI-specific risk domains covered: Algorithmic bias and fairness
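To make the bias-and-fairness domain concrete, here is a minimal sketch of one common fairness measure, the demographic parity difference. The metric choice, function name, and two-group assumption are illustrative; ISO/IEC 23894 does not mandate any specific measure.

```python
# Illustrative fairness metric for the "algorithmic bias" risk domain.
# Demographic parity difference: the absolute gap in positive-prediction
# rates between two demographic groups (assumes exactly two groups).

def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for pred, group in zip(predictions, groups):
        pos, total = rates.get(group, (0, 0))
        rates[group] = (pos + (1 if pred == 1 else 0), total + 1)
    (p0, n0), (p1, n1) = rates.values()
    return abs(p0 / n0 - p1 / n1)

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value near 0 suggests both groups receive positive predictions at similar rates; larger gaps flag a candidate risk for the register.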
The standard also addresses temporal aspects unique to AI: risks that emerge during the training, deployment, and ongoing operation phases, with specific guidance for continuous monitoring and model governance.
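The continuous-monitoring guidance can be sketched with a drift check. The snippet below computes the Population Stability Index (PSI) between training-time and production score distributions; the 0.2 alert threshold and bin count are common rules of thumb, not values taken from the standard.

```python
# Sketch of continuous monitoring for model drift via the Population
# Stability Index (PSI). Bin count and the 0.2 alert threshold are
# widely used conventions, not requirements of ISO/IEC 23894.
import math

def psi(expected, actual, bins=10):
    """PSI between a reference distribution and live data."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[max(i, 0)] += 1
        # Smooth empty bins to avoid log(0) and division by zero.
        return [(c + 0.5) / (len(xs) + 0.5 * bins) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

reference = [i / 100 for i in range(100)]        # training-time scores
live      = [0.5 + i / 200 for i in range(100)]  # shifted production scores
print(psi(reference, live))  # well above the 0.2 alert threshold
```

In an operational setup, a PSI above the threshold would trigger the model-governance workflow (investigation, retraining, or rollback) rather than a silent log entry.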
Risk identification frameworks:
Risk treatment strategies:
Phase 1: Risk context establishment (2-4 weeks)
Phase 2: AI risk taxonomy development (4-6 weeks)
Phase 3: Integration with existing processes (6-8 weeks)
Phase 4: Continuous monitoring setup (4-6 weeks)
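The four phases above culminate in a living risk register. As a minimal sketch, the snippet below scores risks with a 5x5 likelihood-by-impact matrix in the ISO 31000 style; the field names, scoring bands, and example risks are illustrative assumptions, not prescribed by ISO/IEC 23894.

```python
# Minimal AI risk register sketch. The 5x5 likelihood x impact scoring
# and the triage thresholds are illustrative assumptions, not values
# prescribed by ISO/IEC 23894 or ISO 31000.
from dataclasses import dataclass

@dataclass
class AIRisk:
    name: str
    lifecycle_phase: str  # e.g. "training", "deployment", "operation"
    likelihood: int       # 1 (rare) .. 5 (almost certain)
    impact: int           # 1 (negligible) .. 5 (severe)

    @property
    def score(self):
        return self.likelihood * self.impact

    @property
    def treatment(self):
        # Simple triage bands; the thresholds are an assumption.
        if self.score >= 15:
            return "mitigate immediately"
        if self.score >= 8:
            return "mitigate with plan"
        return "accept and monitor"

register = [
    AIRisk("Training-data bias", "training", 4, 4),
    AIRisk("Model drift", "operation", 3, 3),
    AIRisk("Prompt logging leak", "operation", 2, 2),
]
for r in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{r.name}: score {r.score} -> {r.treatment}")
    # e.g. "Training-data bias: score 16 -> mitigate immediately"
```

Tagging each risk with a lifecycle phase mirrors the standard's emphasis on risks that emerge at different points from training through ongoing operation.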
Published: 2023
Jurisdiction: Global
Category: Standards and certifications
Access: Paid access