OECD Publishes Due Diligence Guidance for Responsible AI: What Companies Need to Know
The intersection of artificial intelligence (AI) and responsible business conduct is highly topical. While AI brings many new opportunities for companies, there is a growing focus on the new risks to navigate, including those related to responsible business conduct, for example when assessing the potential environmental and human rights impacts of AI. Structured approaches to challenges such as complexity, transparency, human rights and data dependency within business operations (including upstream and downstream supply chains) can strengthen trust, unlock market access and drive competitive advantage.
Against this background, the regulatory landscape has evolved significantly: the EU AI Act, the EU Corporate Sustainability Due Diligence Directive (CSDDD), and national supply chain laws introduce binding due diligence obligations that also apply to certain AI-related risks.
It is in this context that the OECD published its Due Diligence Guidance for Responsible AI on 19 February 2026 (the Guidance). The Guidance aims to assist companies in implementing the OECD Guidelines for Multinational Enterprises on Responsible Business Conduct (OECD Guidelines) and the OECD AI Principles. It offers a voluntary, structured and internationally recognised reference point that helps companies connect responsible business conduct with AI innovation and risk management. Notably, the Guidance deliberately takes a risk-agnostic approach: it does not prescribe which specific AI risks companies should prioritise, but instead provides a process-oriented framework that remains applicable across jurisdictions and as the technology evolves. It is therefore up to businesses to identify and prioritise risks based on their own circumstances.
Who Is It For?
The Guidance primarily addresses three groups of companies:
Group 1 covers suppliers of AI inputs, including data providers, dataset curators, hardware manufacturers, cloud and compute providers, as well as investors;
Group 2 encompasses companies actively involved in the AI system lifecycle, from planning, design and development through to deployment and monitoring;
Group 3 covers users of AI systems, that is, companies that integrate AI into their operations, products and services (i.e. companies operating in the downstream segment of the AI value chain).
The “Six-Step Framework”
At its core, the Guidance applies the six-step due diligence framework set out in the OECD Guidelines and the OECD AI Principles. Each step is accompanied by practical examples and a roadmap table that cross-references related provisions from approximately 20 existing frameworks, including the EU AI Act, the EU CSDDD, ISO standards, and the UN Guiding Principles on Business and Human Rights.
Step 1: Embed responsible business conduct into policies and management systems
Companies are encouraged to adopt and disseminate policies articulating their commitments to the OECD Guidelines and the OECD AI Principles. The Guidance recommends that companies assign oversight of and responsibility for AI due diligence to relevant senior management, and assign broader responsibility for AI-related responsible business conduct to the board of directors. Companies may also seek to develop incident monitoring and response systems. Relevant responsible business conduct expectations could additionally be incorporated into business relationships through pre-qualification processes, training and contractual requirements. For companies that already have relevant compliance management systems in place, the key addition is to embed AI into those systems and to integrate AI-specific challenges, such as algorithmic fairness, transparency and misuse prevention, into existing governance structures. The Guidance also encourages companies, after evaluating potential competition law concerns, to establish or join collaborative initiatives to develop, advance and adopt shared standards, tools, mechanisms and best practices for ensuring the safety and security of AI systems.
Step 2: Identify and assess actual and potential adverse impacts
The Guidance recommends that companies carry out scoping exercises. Relevant risk information may relate to, inter alia, the nature of the AI system, the type of use, data sources, the geographic and socio-economic context, and foreseeable use or misuse. Companies could then assess whether they cause, contribute to, or are directly linked to adverse impacts through business relationships, and prioritise risks based on severity and likelihood. For the assessment of business relationships, relevant processes can be integrated with existing compliance systems in areas such as export controls and sanctions.
Step 3: Cease, prevent and mitigate adverse impacts
Where companies identify that their activities cause or contribute to adverse impacts, the Guidance recommends ceasing those activities and developing forward-looking plans to prevent and mitigate future harm. In terms of prevention and mitigation, companies are encouraged, for example, to ensure that training data is sourced responsibly, that stakeholders are aware of the AI system's functionality and risks, and that systems are resilient against attacks and perform reliably. On the deployment side, the Guidance calls for pre-deployment response plans, graduated access models, stakeholder engagement before deployment, the establishment of grievance mechanisms, and safeguards to pause AI systems where significant adverse impacts are imminent, with clear protocols for reassessment.
The Guidance also acknowledges that in concentrated AI markets, maintaining certain business relationships could be necessary given market structures. It provides practical options: for example, where disengagement is not feasible, companies are encouraged to report the situation internally, continue monitoring, and revisit their decision as circumstances change.
Step 4: Track implementation and results
The Guidance recommends that companies monitor the effectiveness of their due diligence measures, for example by assessing whether previously undetected risks exist, documenting and sharing incident information with stakeholders, and carrying out periodic assessments of business relationships.
Step 5: Communicate how impacts are addressed
The Guidance suggests that companies should publicly disclose significant adverse impacts they have identified. For example, they could communicate risk prioritisation criteria, mitigation actions, and how stakeholders are engaged. All communication should be tailored to the relevant target audiences and take commercial confidentiality into account.
Step 6: Provide for or cooperate in remediation
Where an enterprise has caused or contributed to adverse impacts, the Guidance states that it may, where possible, seek to restore affected persons to the situation they would be in had the adverse impact not occurred and enable proportionate remedy. The Guidance sets out five potential forms of remedy: restitution, compensation, rehabilitation, satisfaction, and guarantees of non-repetition.
Turning Obligation into Opportunity
Companies that are proactive about responsible AI governance are likely to be better positioned to reduce regulatory and litigation risks, strengthen relationships with investors and regulators, and build trust with customers. That could involve practical steps ranging from mapping their AI footprint (where AI is used, who develops or procures it, and how third-party systems are integrated into key processes) to updating policies on bias, consent and transparency and aligning technical and non-technical teams.
