OECD AI Principles compliance guide
The OECD AI Principles are the first intergovernmental standard on artificial intelligence, endorsed by the G20 and adopted by 46+ countries. We help you implement these principles with clear governance, transparency and accountability measures.
What are the OECD AI Principles?
The OECD Recommendation on Artificial Intelligence, adopted in May 2019 and updated in November 2023, consists of five value-based principles for responsible AI stewardship and five recommendations for policymakers to foster trustworthy AI.
Why this matters now: As the first intergovernmental standard on AI endorsed by the G20, OECD AI Principles have influenced legislation worldwide, including the EU AI Act, national AI strategies and corporate governance frameworks. They represent global consensus on responsible AI.
Global
46+ countries formally adhered
Influential
Inspired EU AI Act and national laws
Complements EU AI Act compliance and aligns with NIST AI RMF implementation.
Who should adopt them?
Multinational corporations
Operating AI systems across OECD member countries
Government agencies
Implementing AI in public services and policymaking
AI developers and providers
Building responsible AI products for global markets
Financial institutions
Using AI for credit, trading and risk management
Healthcare organizations
Deploying AI in diagnostics and patient care
Tech companies
Seeking alignment with international AI standards
How VerifyWise supports alignment with OECD AI Principles
Concrete capabilities that address each principle's requirements
Human-centred values assessment
Evaluate AI systems against fairness, privacy and human rights principles. The platform structures assessments around dignity, autonomy and well-being to ensure human-centred design from the start.
Addresses: Human-centred values (fairness, privacy, human agency, rights protection)
Transparency and explainability tracking
Document how AI systems make decisions and ensure stakeholders receive appropriate explanations. The platform maintains disclosure records and explainability documentation required for transparency.
Addresses: Transparency & explainability (disclosure obligations, decision documentation)
Robustness and security controls
Assess AI system safety, security and reliability throughout the lifecycle. The platform tracks technical safeguards, testing results and resilience measures that demonstrate robust implementation.
Addresses: Robustness, security & safety (technical safeguards, reliability testing)
Accountability framework management
Establish clear roles, responsibilities and governance structures for AI systems. The platform maintains accountability matrices, approval workflows and oversight documentation.
Addresses: Accountability (governance roles, oversight structures, responsibility assignment)
Inclusive growth impact tracking
Monitor how AI systems contribute to equitable economic and social benefits. The platform helps document sustainability considerations and societal impact assessments.
Addresses: Inclusive growth, well-being & sustainability (impact documentation)
Policy and compliance monitoring
Track adherence to OECD Principles across your AI portfolio. The platform generates compliance reports, identifies gaps and maintains evidence for regulatory reviews and stakeholder communications.
Addresses: Recommendations for policymakers (investment, ecosystem, cooperation)
All assessments are tracked with timestamps, assigned owners and approval workflows. This audit trail demonstrates systematic adherence to OECD Principles rather than documentation created after the fact.
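The audit-trail pattern described above (timestamped events, assigned owners, approval workflows) can be sketched as a minimal data model. The class, field and method names below are illustrative assumptions for the pattern, not VerifyWise's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of an auditable assessment record: every state
# change appends a timestamped entry to an event log, so adherence can
# be demonstrated as a sequence of actions rather than a single claim.
@dataclass
class AssessmentRecord:
    principle: str            # e.g. "Transparency and explainability"
    owner: str                # assigned accountable person
    status: str = "draft"
    events: list = field(default_factory=list)

    def _log(self, actor: str, action: str) -> None:
        self.events.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
        })

    def submit(self, actor: str) -> None:
        self.status = "in_review"
        self._log(actor, "submitted for review")

    def approve(self, actor: str) -> None:
        # Approval is only valid for assessments that went through review,
        # which enforces the workflow rather than just documenting it.
        if self.status != "in_review":
            raise ValueError("only submitted assessments can be approved")
        self.status = "approved"
        self._log(actor, "approved")

record = AssessmentRecord("Accountability", owner="jane@example.com")
record.submit("jane@example.com")
record.approve("lead@example.com")
print(record.status, len(record.events))  # approved 2
```

The point of the design is that the event log is append-only: a reviewer can reconstruct who did what and when, which is exactly the evidence an after-the-fact document cannot provide.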
Complete OECD AI Principles coverage
VerifyWise provides dedicated tooling for all five principles and implementation guidance
5 core AI principles
5 principles with dedicated tooling
46+ countries adhering
Equitable benefits, sustainability, prosperity
Fairness, privacy, rights, well-being
Disclosure, explainability, communication
Safety, security, reliability, resilience
Built for global AI governance standards
Human-centred design
Fairness, privacy and rights assessments built-in
International alignment
Crosswalk to EU AI Act, NIST RMF and ISO 42001
Transparency documentation
Disclosure and explainability tracking
Accountability frameworks
Governance structures and oversight mechanisms
Five AI principles for responsible stewardship
Value-based principles that all AI actors should respect throughout AI system lifecycles
Inclusive growth, sustainable development and well-being
AI systems should benefit all of humanity by promoting inclusive growth, sustainable development and well-being.
Human-centred values and fairness
AI systems should respect human rights, democratic values, diversity and fairness throughout their lifecycle.
Transparency and explainability
There should be transparency and responsible disclosure around AI systems so that people understand AI-based outcomes and can challenge them.
Robustness, security and safety
AI systems should function robustly, securely and safely throughout their lifecycles, with potential risks assessed and managed.
Accountability
Organizations and individuals developing, deploying or operating AI systems should be accountable for their proper functioning in line with the above principles.
Five recommendations for national policies
Policy guidance for governments to foster trustworthy AI innovation and deployment
Investing in AI research and development
Foster long-term public and private investment in responsible AI R&D, including innovation, development of digital infrastructure and human resources.
- Support open datasets and responsible data sharing
- Invest in trustworthy AI research
- Build digital infrastructure
- Fund interdisciplinary research programs
Fostering a digital ecosystem for AI
Create open, inclusive digital ecosystems that enable secure data access, sharing and technology cooperation while protecting privacy and IP rights.
- Enable secure data sharing frameworks
- Support open-source AI initiatives
- Foster standards development
- Build digital trust infrastructure
Shaping an enabling policy environment
Develop regulatory frameworks and policies that enable responsible AI innovation while protecting rights and managing risks.
- Adopt flexible, risk-based regulation
- Update legal frameworks for AI era
- Support regulatory cooperation
- Balance innovation with protection
Building human capacity and preparing for labour market transformation
Equip people with AI skills and prepare for labour market changes through education, training and social policies.
- Integrate AI literacy in education
- Provide workforce reskilling programs
- Support labour market transitions
- Foster diverse AI talent pipelines
International cooperation for trustworthy AI
Promote international cooperation to share knowledge, develop standards and address global challenges related to AI.
- Share best practices across borders
- Harmonize AI policy approaches
- Collaborate on standards development
- Address global AI challenges together
Explore more
Visit OECD.AI →
OECD.AI Policy Observatory
Interactive platform for tracking AI policies, incidents and trends globally
AI Policy Observatory
Interactive platform tracking AI policies, strategies and initiatives across countries
Visit platform →
AI Incidents Monitor
Database of AI-related incidents and safety concerns to inform policy and practice
Visit platform →
AI Principles Implementation
Tools and resources for implementing OECD AI Principles in organizations
Visit platform →
Official resources: Access the full OECD Recommendation at OECD Legal Instruments and explore implementation resources at OECD.AI
20-week implementation roadmap
A practical path to OECD AI Principles adoption with clear milestones
Assessment and gap analysis
- Map AI systems to OECD Principles
- Identify compliance gaps
- Assess current governance maturity
- Define implementation priorities
Governance and accountability
- Establish AI governance structures
- Define roles and responsibilities
- Create accountability frameworks
- Develop oversight mechanisms
Technical implementation
- Implement transparency measures
- Deploy robustness and security controls
- Establish fairness assessments
- Build monitoring capabilities
Monitoring and improvement
- Monitor ongoing compliance
- Generate stakeholder reports
- Assess societal impact
- Continuous improvement cycle
First intergovernmental AI standard
OECD AI Principles are the first globally agreed framework for AI governance, endorsed by the G20 and adopted by 46+ countries. They have influenced legislation worldwide.
G20 endorsement
G20 leaders endorsed the OECD AI Principles at the 2019 Osaka Summit
First global consensus on AI governance
46+ countries adopted
OECD members and partners formally adhered to the Principles
International standard for responsible AI
Updated recommendation
OECD updated Principles to address generative AI and emerging risks
Evolved to meet new challenges
Global influence
Influenced EU AI Act, national AI strategies and corporate policies
Foundation for AI legislation worldwide
AI governance policy repository
Access ready-to-use AI governance policy templates aligned with OECD AI Principles, EU AI Act and ISO 42001 requirements
Governance & accountability
- AI Governance Policy
- Accountability Framework
- AI Ethics Charter
- Risk Management Policy
- Oversight Mechanisms
Human-centred values
- Fairness Assessment Policy
- Privacy Protection Policy
- Human Rights Impact Assessment
- Diversity & Inclusion Policy
- Human Agency Policy
Transparency & safety
- AI Disclosure Policy
- Explainability Standards
- Safety Testing Policy
- Security Controls Policy
- Incident Response Plan
How OECD AI Principles compare
Understanding the relationship between major AI governance frameworks
| Aspect | OECD AI Principles | EU AI Act | NIST AI RMF |
|---|---|---|---|
| Scope | Global, 46+ countries | EU/EEA mandatory regulation | US-focused voluntary framework |
| Legal status | Voluntary recommendation | Mandatory law with penalties | Voluntary (mandatory in some federal contexts) |
| Approach | Principles-based, high-level guidance | Risk-tier classification with requirements | Risk-based flexible framework |
| Focus | Values, policy recommendations | Compliance obligations by role | Trustworthiness characteristics |
| Structure | 5 principles, 5 recommendations | 4 risk tiers, role-based duties | 4 functions, 19 categories |
| Audience | Policymakers and organizations | Providers, deployers, distributors | Organizations managing AI risks |
| Timeline | Ongoing adoption since 2019 | Compliance phased in 2025-2027 | 4-6 months typical implementation |
| Best for | Strategic alignment, policy development | EU market access | Operational risk management |
Pro tip: These frameworks are complementary. OECD Principles provide values-based guidance, NIST AI RMF provides risk methodology, and the EU AI Act ensures market access.
Discuss multi-framework implementation
Frequently asked questions
Common questions about OECD AI Principles implementation
Ready to implement OECD AI Principles?
Start your responsible AI journey with our guided assessment and implementation tools.