OECD AI Principles

OECD AI Principles compliance guide

The OECD AI Principles are the first intergovernmental standard on artificial intelligence, endorsed by the G20 and adopted by 46+ countries. We help you implement these principles with clear governance, transparency and accountability measures.

What are the OECD AI Principles?

The OECD Recommendation on Artificial Intelligence, adopted in May 2019 and updated in November 2023, consists of five value-based principles for responsible AI stewardship and five recommendations for policymakers to foster trustworthy AI.

Why this matters now: As the first intergovernmental standard on AI endorsed by the G20, OECD AI Principles have influenced legislation worldwide, including the EU AI Act, national AI strategies and corporate governance frameworks. They represent global consensus on responsible AI.

Global

46+ countries formally adhered

Influential

Inspired EU AI Act and national laws

Complements EU AI Act compliance and aligns with NIST AI RMF implementation.

Who should adopt them?

Multinational corporations

Operating AI systems across OECD member countries

Government agencies

Implementing AI in public services and policymaking

AI developers and providers

Building responsible AI products for global markets

Financial institutions

Using AI for credit, trading and risk management

Healthcare organizations

Deploying AI in diagnostics and patient care

Tech companies

Seeking alignment with international AI standards

How VerifyWise supports alignment with OECD AI Principles

Concrete capabilities that address each principle's requirements

Human-centred values assessment

Evaluate AI systems against fairness, privacy and human rights principles. The platform structures assessments around dignity, autonomy and well-being to ensure human-centred design from the start.

Addresses: Human-centred values: Fairness, privacy, human agency, rights protection

Transparency and explainability tracking

Document how AI systems make decisions and ensure stakeholders receive appropriate explanations. The platform maintains disclosure records and explainability documentation required for transparency.

Addresses: Transparency & explainability: Disclosure obligations, decision documentation

Robustness and security controls

Assess AI system safety, security and reliability throughout the lifecycle. The platform tracks technical safeguards, testing results and resilience measures that demonstrate robust implementation.

Addresses: Robustness, security & safety: Technical safeguards, reliability testing

Accountability framework management

Establish clear roles, responsibilities and governance structures for AI systems. The platform maintains accountability matrices, approval workflows and oversight documentation.

Addresses: Accountability: Governance roles, oversight structures, responsibility assignment

Inclusive growth impact tracking

Monitor how AI systems contribute to equitable economic and social benefits. The platform helps document sustainability considerations and societal impact assessments.

Addresses: Inclusive growth, well-being & sustainability: Impact documentation

Policy and compliance monitoring

Track adherence to OECD Principles across your AI portfolio. The platform generates compliance reports, identifies gaps and maintains evidence for regulatory reviews and stakeholder communications.

Addresses: Recommendations for policymakers: Investment, ecosystem, cooperation

All assessments are tracked with timestamps, assigned owners and approval workflows. This audit trail demonstrates systematic adherence to OECD Principles rather than documentation created after the fact.
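
The audit-trail idea above can be sketched as a small data model. This is a hypothetical illustration, not VerifyWise's actual schema: field names (`owner`, `status`, `history`) and the draft → submitted → approved flow are assumptions chosen to show how timestamped, attributed status changes accumulate into an audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AssessmentRecord:
    """Hypothetical audit-trailed record for one principle assessment."""
    system: str            # AI system under review
    principle: str         # e.g. "Transparency and explainability"
    owner: str             # assigned reviewer
    status: str = "draft"  # draft -> submitted -> approved
    history: list = field(default_factory=list)

    def transition(self, new_status: str, actor: str) -> None:
        # Record every status change with actor and UTC timestamp,
        # so adherence can be demonstrated after the fact.
        self.history.append({
            "from": self.status,
            "to": new_status,
            "actor": actor,
            "at": datetime.now(timezone.utc).isoformat(),
        })
        self.status = new_status

record = AssessmentRecord("credit-scoring-model", "Accountability", owner="j.doe")
record.transition("submitted", actor="j.doe")
record.transition("approved", actor="compliance-lead")
print(record.status)        # approved
print(len(record.history))  # 2 recorded transitions
```

Because each transition is appended rather than overwritten, the record itself carries the evidence of who approved what and when.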

Complete OECD AI Principles coverage

VerifyWise provides dedicated tooling for all five principles, plus implementation guidance

5

Core AI principles

5

Principles with dedicated tooling

46+

Countries adhering

Inclusive growth (3/3)

Equitable benefits, sustainability, prosperity

Human-centred values (4/4)

Fairness, privacy, rights, well-being

Transparency (3/3)

Disclosure, explainability, communication

Robustness (4/4)

Safety, security, reliability, resilience

Built for global AI governance standards

Human-centred design

Fairness, privacy and rights assessments built-in

International alignment

Crosswalk to EU AI Act, NIST RMF and ISO 42001

Transparency documentation

Disclosure and explainability tracking

Accountability frameworks

Governance structures and oversight mechanisms

Five AI principles for responsible stewardship

Value-based principles that all AI actors should respect throughout AI system lifecycles

Inclusive growth, sustainable development and well-being

AI systems should benefit all of humanity by promoting inclusive growth, sustainable development and well-being.

Promote equitable access to AI benefits
Foster economic opportunity and quality jobs
Support sustainable development goals
Enhance societal well-being
Address digital divides

Human-centred values and fairness

AI systems should respect human rights, democratic values, diversity and fairness throughout their lifecycle.

Protect fundamental rights and freedoms
Ensure fairness and non-discrimination
Respect human dignity and autonomy
Protect privacy and data governance
Enable human agency and oversight

Transparency and explainability

There should be transparency and responsible disclosure around AI systems, so that people understand AI-based outcomes and can challenge them.

Provide meaningful information about AI systems
Enable appropriate understanding of outcomes
Explain AI-based decisions when needed
Disclose AI interaction to users
Document system capabilities and limitations

Robustness, security and safety

AI systems should function robustly, securely and safely throughout their lifecycles, with potential risks assessed and managed.

Ensure technical robustness and reliability
Implement security safeguards
Assess and manage risks throughout lifecycle
Enable traceability and auditability
Build resilience against attacks

Accountability

Organizations developing or deploying AI systems should be accountable for their proper functioning in line with the principles above.

Assign clear roles and responsibilities
Implement governance and oversight mechanisms
Enable redress and remedy processes
Conduct ongoing assessment and monitoring
Ensure responsible disclosure and reporting

Five recommendations for national policies

Policy guidance for governments to foster trustworthy AI innovation and deployment

Investing in AI research and development

Foster long-term public and private investment in responsible AI R&D, including innovation, development of digital infrastructure and human resources.

  • Support open datasets and responsible data sharing
  • Invest in trustworthy AI research
  • Build digital infrastructure
  • Fund interdisciplinary research programs

Fostering a digital ecosystem for AI

Create open, inclusive digital ecosystems that enable secure data access, sharing and technology cooperation while protecting privacy and IP rights.

  • Enable secure data sharing frameworks
  • Support open-source AI initiatives
  • Foster standards development
  • Build digital trust infrastructure

Shaping an enabling policy environment

Develop regulatory frameworks and policies that enable responsible AI innovation while protecting rights and managing risks.

  • Adopt flexible, risk-based regulation
  • Update legal frameworks for AI era
  • Support regulatory cooperation
  • Balance innovation with protection

Building human capacity and preparing for labour market transformation

Equip people with AI skills and prepare for labour market changes through education, training and social policies.

  • Integrate AI literacy in education
  • Provide workforce reskilling programs
  • Support labour market transitions
  • Foster diverse AI talent pipelines

International cooperation for trustworthy AI

Promote international cooperation to share knowledge, develop standards and address global challenges related to AI.

  • Share best practices across borders
  • Harmonize AI policy approaches
  • Collaborate on standards development
  • Address global AI challenges together

Explore more

Visit OECD.AI →

20-week implementation roadmap

A practical path to OECD AI Principles adoption with clear milestones

Phase 1 (Weeks 1-3)

Assessment and gap analysis

  • Map AI systems to OECD Principles
  • Identify compliance gaps
  • Assess current governance maturity
  • Define implementation priorities

Phase 2 (Weeks 4-8)

Governance and accountability

  • Establish AI governance structures
  • Define roles and responsibilities
  • Create accountability frameworks
  • Develop oversight mechanisms

Phase 3 (Weeks 9-16)

Technical implementation

  • Implement transparency measures
  • Deploy robustness and security controls
  • Establish fairness assessments
  • Build monitoring capabilities

Phase 4 (Weeks 17-20)

Monitoring and improvement

  • Monitor ongoing compliance
  • Generate stakeholder reports
  • Assess societal impact
  • Continuous improvement cycle
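
The Phase 1 step of mapping AI systems to the OECD Principles can be sketched as a simple gap report. This is an illustrative sketch only: the system names and the evidence structure (a set of principles with documented evidence per system) are assumptions, not a prescribed method.

```python
# The five value-based principles, as titled in this guide.
OECD_PRINCIPLES = [
    "Inclusive growth, sustainable development and well-being",
    "Human-centred values and fairness",
    "Transparency and explainability",
    "Robustness, security and safety",
    "Accountability",
]

def gap_report(evidence: dict) -> dict:
    """For each system, list the principles with no documented evidence."""
    return {
        system: [p for p in OECD_PRINCIPLES if p not in covered]
        for system, covered in evidence.items()
    }

# Hypothetical portfolio: which principles each system has evidence for.
portfolio = {
    "chatbot": {"Transparency and explainability", "Accountability"},
    "fraud-model": set(OECD_PRINCIPLES),  # fully documented
}

gaps = gap_report(portfolio)
print(len(gaps["chatbot"]))  # 3 principles still lack evidence
print(gaps["fraud-model"])   # [] -- no gaps
```

A report like this is what turns "identify compliance gaps" from an abstract milestone into a concrete, per-system worklist for Phases 2 and 3.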

Global consensus

First intergovernmental AI standard

OECD AI Principles are the first globally agreed framework for AI governance, endorsed by the G20 and adopted by 46+ countries. They have influenced legislation worldwide.

June 2019

G20 endorsement

G20 leaders endorsed OECD AI Principles at Osaka Summit

First global consensus on AI governance

2019-2024

46+ countries adopted

OECD members and partners formally adhered to the Principles

International standard for responsible AI

November 2023

Updated recommendation

OECD updated Principles to address generative AI and emerging risks

Evolved to meet new challenges

Ongoing

Global influence

Influenced EU AI Act, national AI strategies and corporate policies

Foundation for AI legislation worldwide

Start OECD Principles assessment

Policy templates

AI governance policy repository

Access ready-to-use AI governance policy templates aligned with OECD AI Principles, EU AI Act and ISO 42001 requirements

Governance & accountability

  • AI Governance Policy
  • Accountability Framework
  • AI Ethics Charter
  • Risk Management Policy
  • Oversight Mechanisms

Human-centred values

  • Fairness Assessment Policy
  • Privacy Protection Policy
  • Human Rights Impact Assessment
  • Diversity & Inclusion Policy
  • Human Agency Policy

Transparency & safety

  • AI Disclosure Policy
  • Explainability Standards
  • Safety Testing Policy
  • Security Controls Policy
  • Incident Response Plan

How OECD AI Principles compare

Understanding the relationship between major AI governance frameworks

Scope
  • OECD AI Principles: Global, 46+ countries
  • EU AI Act: Mandatory regulation in the EU/EEA
  • NIST AI RMF: US-focused voluntary framework

Legal status
  • OECD AI Principles: Voluntary recommendation
  • EU AI Act: Mandatory law with penalties
  • NIST AI RMF: Voluntary (required for some US federal agencies)

Approach
  • OECD AI Principles: Principles-based, high-level guidance
  • EU AI Act: Risk-tier classification with requirements
  • NIST AI RMF: Risk-based, flexible framework

Focus
  • OECD AI Principles: Values and policy recommendations
  • EU AI Act: Compliance obligations by role
  • NIST AI RMF: Trustworthiness characteristics

Structure
  • OECD AI Principles: 5 principles, 5 recommendations
  • EU AI Act: 4 risk tiers, role-based duties
  • NIST AI RMF: 4 functions, 19 categories

Audience
  • OECD AI Principles: Policymakers and organizations
  • EU AI Act: Providers, deployers, distributors
  • NIST AI RMF: Organizations managing AI risks

Timeline
  • OECD AI Principles: Ongoing adoption since 2019
  • EU AI Act: Compliance deadlines 2025-2027
  • NIST AI RMF: Typically 4-6 months to implement

Best for
  • OECD AI Principles: Strategic alignment, policy development
  • EU AI Act: EU market access
  • NIST AI RMF: Operational risk management

Pro tip: These frameworks are complementary. OECD Principles provide values-based guidance, NIST AI RMF provides risk methodology, and EU AI Act compliance ensures market access.
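
A multi-framework crosswalk can be kept as a simple lookup structure. The pairings below are illustrative interpretations of where the frameworks overlap, not an official OECD, EU, or NIST mapping, and the helper function is a hypothetical sketch.

```python
# Hypothetical crosswalk: for each OECD principle, the framework
# elements it is commonly mapped to. Pairings are interpretive.
CROSSWALK = {
    "Transparency and explainability": {
        "EU AI Act": "transparency and disclosure obligations",
        "NIST AI RMF": "MAP and GOVERN documentation outcomes",
    },
    "Robustness, security and safety": {
        "EU AI Act": "accuracy, robustness and cybersecurity requirements",
        "NIST AI RMF": "MEASURE and MANAGE functions",
    },
    "Accountability": {
        "EU AI Act": "provider and deployer obligations",
        "NIST AI RMF": "GOVERN function",
    },
}

def related(principle: str, framework: str) -> str:
    """Look up the assumed counterpart of an OECD principle in a framework."""
    return CROSSWALK.get(principle, {}).get(framework, "no direct mapping")

print(related("Accountability", "NIST AI RMF"))  # GOVERN function
```

Keeping the crosswalk as data rather than prose makes it easy to extend with ISO 42001 clauses or internal policy IDs as additional columns.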

Discuss multi-framework implementation

Frequently asked questions

Common questions about OECD AI Principles implementation

What are the OECD AI Principles?

The OECD Recommendation on Artificial Intelligence consists of five value-based principles for responsible AI stewardship and five recommendations for national policies and international cooperation. Adopted in May 2019 and updated in November 2023, they represent the first intergovernmental standard on AI. Visit OECD.AI for the full framework.

Are the OECD AI Principles legally binding?

The OECD AI Principles are voluntary recommendations, not legally binding requirements. However, 46+ countries have formally adhered to them, and they have influenced mandatory regulations like the EU AI Act. Many organizations adopt them as best practice standards even when not legally required.

Which countries have adopted the Principles?

All 38 OECD member countries plus Argentina, Brazil, Peru, Romania, Ukraine and the European Union have adhered to the Principles. The G20 endorsed them in June 2019, making them the first global consensus on AI governance. They influence national AI strategies, corporate policies and legislation worldwide.

How do the OECD AI Principles relate to the EU AI Act?

The OECD AI Principles provided foundational concepts that influenced the EU AI Act. While OECD Principles are voluntary and values-based, the EU AI Act translates similar concepts into legally binding requirements. Organizations can use OECD Principles for strategic alignment while implementing EU AI Act for compliance.

What changed in the November 2023 update?

The OECD Council updated the Recommendation to address generative AI systems, emerging risks and new technological developments. The update strengthens provisions on transparency, accountability and international cooperation while maintaining the core five principles. Read the updated version at OECD Legal Instruments.

What is the OECD.AI Policy Observatory?

The OECD.AI platform provides tools for tracking AI policies, incidents and implementation across countries. It includes policy databases, incident monitoring, trend analysis and resources for implementing the Principles. The Observatory helps policymakers and organizations stay informed about AI governance developments globally.

How do I start implementing the Principles?

Start by mapping your AI systems to the five principles, identify gaps, establish governance structures and implement controls for transparency, fairness, safety and accountability. The implementation typically takes 4-5 months depending on organizational complexity. VerifyWise provides tools for assessment, gap analysis and ongoing monitoring.

Do the Principles cover generative AI?

Yes, the November 2023 update specifically addresses generative AI systems like large language models. The updated Principles include considerations for content provenance, synthetic media disclosure, hallucination risks and human oversight of generative systems. All five core principles apply to generative AI with additional context.

How do the Principles compare to the NIST AI RMF?

OECD Principles provide high-level values and policy guidance, while NIST AI RMF offers operational risk management methodology. They complement each other: OECD Principles inform strategic direction and NIST AI RMF provides implementation structure. Many organizations adopt both for comprehensive AI governance.

What are the five recommendations for policymakers?

The Recommendation includes five policy areas for governments: investing in AI R&D, fostering digital ecosystems, shaping enabling policy environments, building human capacity and labour market preparation, and international cooperation. These guide national AI strategies and international coordination efforts.

Is there a certification for OECD AI Principles alignment?

There is no formal OECD certification program. However, organizations can demonstrate alignment through self-assessment, third-party review or by obtaining ISO 42001 certification, which incorporates OECD Principles concepts. VerifyWise helps document and evidence your alignment for stakeholder communications.

What does the Accountability principle require?

The Accountability principle requires organizations to be responsible for AI systems functioning in line with the other four principles. This includes establishing governance structures, assigning clear roles, implementing oversight mechanisms, enabling redress processes and conducting ongoing monitoring and reporting.

Does VerifyWise support OECD AI Principles alignment?

Yes, VerifyWise maps governance controls to OECD AI Principles. Our platform helps assess your AI systems against the five principles, track implementation of transparency and accountability measures and generate evidence for stakeholder reporting. We also provide crosswalks to EU AI Act, NIST AI RMF and ISO 42001.

Ready to implement OECD AI Principles?

Start your responsible AI journey with our guided assessment and implementation tools.

OECD AI Principles Implementation Guide | VerifyWise