OECD Due Diligence Guidance for Responsible AI

OECD

Summary

The OECD Due Diligence Guidance for Responsible AI turns years of policy work on AI principles into a concrete process that organizations can implement immediately. Published in 2026, the guidance introduces a six-step due diligence cycle designed to help organizations of any size systematically identify, prevent, and address adverse impacts from their AI systems. It draws on the OECD's established due diligence methodology, originally developed for responsible business conduct in supply chains, and adapts it to the specific technical and organizational challenges of AI governance.

The Six-Step Due Diligence Process

The guidance structures responsible AI governance around six connected steps that form a continuous cycle rather than a one-time checklist.

Step 1: Embed Responsible AI into Policies and Management Systems

The first step requires organizations to formalize their commitment to responsible AI through clear policies, designated responsibilities, and management systems that integrate AI governance into existing corporate structures. This goes beyond publishing an ethics statement. The guidance specifies that organizations should establish internal accountability mechanisms, allocate dedicated resources for AI oversight, and ensure that responsible AI commitments reach the entire organization and its business partners. The emphasis is on making AI governance a standing organizational function rather than a project with an end date.
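
To make this concrete, here is a minimal sketch of what a machine-readable policy registry might look like. The class, field names, and the "Head of AI Governance" role are illustrative assumptions, not terms from the guidance.

```python
from dataclasses import dataclass, field

@dataclass
class ResponsibleAIPolicy:
    """Hypothetical record for a formalized responsible-AI commitment."""
    name: str
    owner: str                      # accountable role, not an individual
    review_cycle_months: int        # recurring review keeps this a standing function
    scope: list[str] = field(default_factory=list)  # org units and partners covered

policies = [
    ResponsibleAIPolicy(
        name="AI impact assessment policy",
        owner="Head of AI Governance",
        review_cycle_months=12,
        scope=["product", "procurement"],
    ),
]

# Commitments are expected to reach business partners, not just internal teams.
for p in policies:
    if "suppliers" not in p.scope:
        print(f"Gap: '{p.name}' does not extend to business partners")
```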

Step 2: Identify and Assess Adverse Impacts

Organizations must develop processes for identifying actual and potential adverse impacts from their AI systems. The guidance distinguishes between impacts the organization causes directly, impacts it contributes to through business relationships, and impacts linked to its operations through third-party AI components. This three-tier approach reflects the reality that most organizations both build and procure AI systems, and governance must cover both. The assessment should consider impacts on human rights, the environment, labor conditions, and broader societal effects.
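
The three-tier distinction lends itself to a simple data model. The sketch below is one assumption about how an organization might record impacts; the enums, fields, and scales are illustrative, not drawn from the guidance itself.

```python
from dataclasses import dataclass
from enum import Enum

class Involvement(Enum):
    """The guidance's three tiers of organizational involvement."""
    CAUSED = "caused directly"
    CONTRIBUTED = "contributed to via a business relationship"
    LINKED = "linked through third-party AI components"

class Domain(Enum):
    HUMAN_RIGHTS = "human rights"
    ENVIRONMENT = "environment"
    LABOR = "labor conditions"
    SOCIETAL = "broader societal effects"

@dataclass
class AdverseImpact:
    description: str
    domain: Domain
    involvement: Involvement
    actual: bool        # actual (observed) vs. potential impact
    severity: int       # 1 (low) .. 5 (severe); scale is illustrative
    likelihood: float   # 0..1 estimate, used for potential impacts

impact = AdverseImpact(
    description="Biased ranking in a procured screening model",
    domain=Domain.HUMAN_RIGHTS,
    involvement=Involvement.LINKED,   # third-party component, not built in-house
    actual=False,
    severity=4,
    likelihood=0.3,
)
```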

Step 3: Cease, Prevent, or Mitigate Adverse Impacts

Once impacts are identified, the guidance requires organizations to take action proportionate to the severity and likelihood of harm. For impacts the organization causes directly, the expectation is to cease the harmful activity or redesign the system. For impacts linked to business relationships, the guidance recommends using influence over suppliers and partners while recognizing that organizations cannot always control third-party behavior. The document provides decision trees for determining appropriate responses based on impact severity and organizational influence.
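
As a rough illustration of the decision-tree logic this step describes, the function below maps involvement, severity, and organizational influence to a response. The thresholds and response labels are invented for the sketch; the guidance's own decision trees are more detailed.

```python
def respond(involvement: str, severity: int, influence: float) -> str:
    """Pick a response; thresholds and labels are invented for illustration.

    involvement: "caused", "contributed", or "linked"
    severity:    1 (low) .. 5 (severe)
    influence:   0..1, leverage over the business partner
    """
    if involvement == "caused":
        # Harm the organization causes directly: stop or redesign.
        return "cease the activity or redesign the system"
    if influence >= 0.5:
        # Meaningful leverage over the partner: use it.
        return "use leverage to prevent or mitigate the impact"
    if severity >= 4:
        # Severe harm, little leverage: reconsider the relationship.
        return "consider responsibly ending the relationship"
    return "seek to increase leverage over the partner"

print(respond("linked", 4, 0.2))  # consider responsibly ending the relationship
```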

Step 4: Track Implementation and Results

Monitoring and evaluation ensure that due diligence measures actually work. The guidance calls for both quantitative and qualitative indicators, including system performance metrics, stakeholder feedback mechanisms, and periodic audits. Tracking should cover not only whether mitigation measures were implemented but whether they achieved their intended effect.
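
One way to encode the distinction between implementing a measure and verifying its effect is sketched below; the 10% improvement threshold and the metric are hypothetical examples, not values from the guidance.

```python
from dataclasses import dataclass

@dataclass
class MitigationMeasure:
    name: str
    implemented: bool
    metric: str          # e.g. an error-rate disparity, where lower is better
    baseline: float
    current: float

def achieved_effect(m: MitigationMeasure, min_reduction: float = 0.10) -> bool:
    """Implementation alone is not enough: check the metric actually improved."""
    if not m.implemented or m.baseline == 0:
        return False
    return (m.baseline - m.current) / m.baseline >= min_reduction

measure = MitigationMeasure(
    name="Reweighted training data",
    implemented=True,
    metric="false-positive-rate gap",
    baseline=0.12,
    current=0.05,
)
print(achieved_effect(measure))  # True: the gap fell well past the 10% threshold
```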

Step 5: Communicate How Impacts Are Addressed

Transparency requirements under this step go beyond regulatory disclosure. Organizations should communicate proactively with affected stakeholders about how they identify and address AI-related impacts. The guidance provides recommendations on what to disclose, how frequently, and through which channels, while acknowledging that some information may be commercially sensitive or security-relevant.
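
A disclosure plan along these lines could be captured as simple configuration. The items, cadences, and channels below are illustrative assumptions, not the guidance's own recommendations.

```python
# Hypothetical disclosure plan; items, cadences, and channels are examples only.
disclosure_plan = {
    "impact assessment summary": {"cadence": "annual", "channel": "public report"},
    "incident notifications": {"cadence": "per incident", "channel": "affected stakeholders"},
    "mitigation progress": {"cadence": "quarterly", "channel": "website"},
}

# Some information may be commercially sensitive or security-relevant,
# so flag those items for review before release.
sensitive = {"incident notifications"}
for item, plan in disclosure_plan.items():
    redact = " (review for redaction)" if item in sensitive else ""
    print(f"{item}: {plan['cadence']} via {plan['channel']}{redact}")
```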

Step 6: Provide for or Cooperate in Remediation

When adverse impacts occur despite preventive measures, organizations should establish or participate in effective remediation processes. This includes grievance mechanisms accessible to affected parties, processes for investigating complaints, and commitments to provide appropriate remedies. The guidance recognizes that AI-related harms can be difficult to detect and attribute, and recommends that organizations invest in mechanisms that lower barriers to reporting.
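
A minimal grievance-intake sketch follows. Allowing anonymous reports is one way to lower reporting barriers; the names, fields, and statuses here are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional
from uuid import uuid4

@dataclass
class Grievance:
    id: str
    received: datetime
    description: str
    reporter: Optional[str] = None   # anonymous reports lower the barrier to reporting
    status: str = "open"             # open -> investigating -> remedied / closed

def file_grievance(description: str, reporter: Optional[str] = None) -> Grievance:
    """Accept a complaint with minimal required information."""
    return Grievance(
        id=str(uuid4()),
        received=datetime.now(timezone.utc),
        description=description,
        reporter=reporter,  # None is allowed: identity is not required to report
    )

g = file_grievance("Automated decision denied my application without explanation")
print(g.id, g.status)
```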

Lifecycle Thinking

A defining feature of this guidance is its insistence on applying due diligence across the entire AI lifecycle, from initial design and data collection through development, testing, deployment, operation, and eventual retirement. Many adverse impacts trace back to decisions made early in the lifecycle, such as training data selection or objective function design, that become difficult or impossible to correct after deployment. Embedding due diligence at each lifecycle stage lets organizations catch and address potential harms before they become entrenched.

The lifecycle approach also addresses a common governance gap: the handoff between development and deployment teams. The guidance recommends documentation and communication protocols that ensure risk information travels with the AI system as it moves through organizational boundaries.
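
One way to implement such a protocol is a dossier that accompanies the system through each stage, carrying unresolved risks across handoffs. The sketch below is an assumption about how this could look, not a structure defined in the guidance.

```python
from dataclasses import dataclass, field
from enum import Enum

class Stage(Enum):
    DESIGN = "design"
    DATA_COLLECTION = "data collection"
    DEVELOPMENT = "development"
    TESTING = "testing"
    DEPLOYMENT = "deployment"
    OPERATION = "operation"
    RETIREMENT = "retirement"

@dataclass
class RiskNote:
    raised_at: Stage
    description: str
    resolved: bool = False

@dataclass
class SystemDossier:
    """Documentation that travels with the system across team boundaries."""
    system: str
    stage: Stage
    risks: list = field(default_factory=list)

    def hand_off(self, next_stage: Stage) -> None:
        # The receiving team sees every unresolved risk, not a clean slate.
        open_risks = [r for r in self.risks if not r.resolved]
        print(f"{self.system}: {self.stage.value} -> {next_stage.value}, "
              f"{len(open_risks)} open risk(s) carried forward")
        self.stage = next_stage

dossier = SystemDossier("resume-screener", Stage.DEVELOPMENT)
dossier.risks.append(RiskNote(Stage.DESIGN, "Training data skews toward one region"))
dossier.hand_off(Stage.TESTING)
```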

The Proportionality Principle

The guidance applies a proportionality principle throughout, acknowledging that not all AI systems carry the same level of risk and that organizations have different capacities for governance. A small company deploying a low-risk recommendation system should not face the same governance apparatus as a multinational deploying facial recognition technology. The guidance provides criteria for calibrating due diligence effort to risk level, including factors such as the severity and likelihood of potential impacts, the vulnerability of affected populations, and the reversibility of potential harms.
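
To show how these calibration criteria might combine, here is a toy scoring function. The input factors (severity, likelihood, vulnerability, reversibility) come from the guidance, but the weights, thresholds, and tier names are invented for illustration.

```python
def due_diligence_tier(severity: int, likelihood: float,
                       vulnerable_population: bool, reversible: bool) -> str:
    """Map risk factors to a governance tier; the formula is hypothetical."""
    score = severity * likelihood          # severity 1-5, likelihood 0-1
    if vulnerable_population:
        score *= 1.5                       # heavier scrutiny for vulnerable groups
    if not reversible:
        score *= 2.0                       # irreversible harm weighs more
    if score >= 4.0:
        return "enhanced due diligence"
    if score >= 1.5:
        return "standard due diligence"
    return "baseline due diligence"

# Echoing the example above: a low-risk recommender vs. facial recognition.
print(due_diligence_tier(2, 0.2, False, True))   # baseline due diligence
print(due_diligence_tier(5, 0.6, True, False))   # enhanced due diligence
```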

Stakeholder Engagement

Meaningful stakeholder engagement runs through every step of the due diligence process. The guidance specifies what meaningful engagement looks like: identifying all affected parties, including those who may lack formal representation; providing accessible channels for input; demonstrating how stakeholder feedback influenced decisions; and maintaining ongoing dialogue rather than one-time consultations. The document provides practical recommendations for engaging hard-to-reach populations, communities with limited digital literacy, and groups that may distrust the organizations deploying AI systems.

Connection to the OECD AI Principles

This guidance operationalizes the OECD AI Principles adopted in 2019 and updated in 2024. While those principles established high-level commitments to values like transparency, fairness, and accountability, this due diligence guidance provides the process architecture for turning those commitments into organizational practice. Each step of the due diligence cycle maps to specific principles, creating a traceable link between aspirational goals and operational procedures.

Who Should Use This Guidance

The guidance is designed for organizations at any stage of AI governance maturity. Organizations just beginning to formalize AI governance can use it as a roadmap for building their program from the ground up. Organizations with mature governance frameworks can use it to identify gaps, particularly in areas like supply chain due diligence and stakeholder engagement that many existing frameworks underemphasize. Regulators and policymakers can reference it when developing national AI governance requirements, as the OECD framework is designed to work with diverse legal and regulatory contexts.

Tags

OECD, due diligence, responsible AI, risk management, lifecycle governance

At a glance

Published: 2026
Jurisdiction: Global
Category: Governance frameworks
Access: Public access
