The OECD AI Principles are the first intergovernmental standard on artificial intelligence, endorsed by the G20 and adopted by 46+ countries. We help you implement these principles with clear governance, transparency and accountability measures.
The OECD Recommendation on Artificial Intelligence, adopted in May 2019 and updated in November 2023, consists of five value-based principles for responsible AI stewardship and five recommendations for policymakers to foster trustworthy AI.
Why this matters now: As the first intergovernmental standard on AI endorsed by the G20, OECD AI Principles have influenced legislation worldwide, including the EU AI Act, national AI strategies and corporate governance frameworks. They represent global consensus on responsible AI.
46+ countries formally adhered
Inspired EU AI Act and national laws
Complements EU AI Act compliance and aligns with NIST AI RMF implementation.
Multinational corporations
Operating AI systems across OECD member countries
Government agencies
Implementing AI in public services and policymaking
AI developers and providers
Building responsible AI products for global markets
Financial institutions
Using AI for credit, trading and risk management
Healthcare organizations
Deploying AI in diagnostics and patient care
Tech companies
Seeking alignment with international AI standards
Concrete capabilities that address each principle's requirements
Evaluate AI systems against fairness, privacy and human rights principles. The platform structures assessments around dignity, autonomy and well-being to ensure human-centred design from the start.
Addresses: Human-centred values: Fairness, privacy, human agency, rights protection
Document how AI systems make decisions and ensure stakeholders receive appropriate explanations. The platform maintains disclosure records and explainability documentation required for transparency.
Addresses: Transparency & explainability: Disclosure obligations, decision documentation
Assess AI system safety, security and reliability throughout the lifecycle. The platform tracks technical safeguards, testing results and resilience measures that demonstrate robust implementation.
Addresses: Robustness, security & safety: Technical safeguards, reliability testing
Establish clear roles, responsibilities and governance structures for AI systems. The platform maintains accountability matrices, approval workflows and oversight documentation.
Addresses: Accountability: Governance roles, oversight structures, responsibility assignment
Monitor how AI systems contribute to equitable economic and social benefits. The platform helps document sustainability considerations and societal impact assessments.
Addresses: Inclusive growth, well-being & sustainability: Impact documentation
Track adherence to OECD Principles across your AI portfolio. The platform generates compliance reports, identifies gaps and maintains evidence for regulatory reviews and stakeholder communications.
Addresses: Recommendations for policymakers: Investment, ecosystem, cooperation
All assessments are tracked with timestamps, assigned owners and approval workflows. This audit trail demonstrates systematic adherence to OECD Principles rather than documentation created after the fact.
VerifyWise provides dedicated tooling for all five principles and implementation guidance
Core AI principles
Principles with dedicated tooling
Countries adhering
Equitable benefits, sustainability, prosperity
Fairness, privacy, rights, well-being
Disclosure, explainability, communication
Safety, security, reliability, resilience
Fairness, privacy and rights assessments built-in
Crosswalk to EU AI Act, NIST RMF and ISO 42001
Disclosure and explainability tracking
Governance structures and oversight mechanisms
Value-based principles that all AI actors should respect throughout AI system lifecycles
AI systems should benefit all of humanity by promoting inclusive growth, sustainable development and well-being.
AI systems should respect human rights, democratic values, diversity and fairness throughout their lifecycle.
AI systems should be transparent, with responsible disclosure that helps people understand AI-based outcomes.
AI systems should function robustly, securely and safely throughout their lifecycles, with potential risks assessed and managed.
Organizations deploying AI systems should be accountable for their proper functioning in line with the above principles.
Policy guidance for governments to foster trustworthy AI innovation and deployment
Foster long-term public and private investment in responsible AI research and development, including digital infrastructure and human capacity.
Create open, inclusive digital ecosystems that enable secure data access, sharing and technology cooperation while protecting privacy and IP rights.
Develop regulatory frameworks and policies that enable responsible AI innovation while protecting rights and managing risks.
Equip people with AI skills and prepare for labour market changes through education, training and social policies.
Promote international cooperation to share knowledge, develop standards and address global challenges related to AI.
Explore more
Visit OECD.AI →
Interactive platform for tracking AI policies, incidents and trends globally
Interactive platform tracking AI policies, strategies and initiatives across countries
Visit platform →
Database of AI-related incidents and safety concerns to inform policy and practice
Visit platform →
Tools and resources for implementing OECD AI Principles in organizations
Visit platform →
Official resources: Access the full OECD Recommendation at OECD Legal Instruments and explore implementation resources at OECD.AI
A practical path to OECD AI Principles adoption with clear milestones
OECD AI Principles are the first globally agreed framework for AI governance, endorsed by the G20 and adopted by 46+ countries. They have influenced legislation worldwide.
G20 leaders endorsed OECD AI Principles at Osaka Summit
First global consensus on AI governance
OECD members and partners formally adhered to the Principles
International standard for responsible AI
OECD updated Principles to address generative AI and emerging risks
Evolved to meet new challenges
Influenced EU AI Act, national AI strategies and corporate policies
Foundation for AI legislation worldwide
Access ready-to-use AI governance policy templates aligned with OECD AI Principles, EU AI Act and ISO 42001 requirements
Understanding the relationship between major AI governance frameworks
| Aspect | OECD AI Principles | EU AI Act | NIST AI RMF |
|---|---|---|---|
| Scope | Global, 46+ countries | EU/EEA mandatory regulation | US-focused voluntary framework |
| Legal status | Voluntary recommendation | Mandatory law with penalties | Voluntary (mandatory for some US federal agencies) |
| Approach | Principles-based, high-level guidance | Risk-tier classification with requirements | Risk-based flexible framework |
| Focus | Values, policy recommendations | Compliance obligations by role | Trustworthiness characteristics |
| Structure | 5 principles, 5 recommendations | 4 risk tiers, role-based duties | 4 functions, 19 categories |
| Audience | Policymakers and organizations | Providers, deployers, distributors | Organizations managing AI risks |
| Timeline | Ongoing adoption since 2019 | Compliance deadlines 2025-2027 | Typical implementation 4-6 months |
| Best for | Strategic alignment, policy development | EU market access | Operational risk management |
Pro tip: These frameworks are complementary. OECD Principles provide values-based guidance, NIST AI RMF provides risk methodology, and EU AI Act compliance enables EU market access.
Discuss multi-framework implementation
Common questions about OECD AI Principles implementation
Start your responsible AI journey with our guided assessment and implementation tools.