
Product Risks Today: How the new Product Liability Directive turns AI Act compliance into a question of liability

Apr 21 2026

Introduction

The extent to which software qualifies as a “product” subject to the EU’s product liability regime has long been controversial. The revised EU Product Liability Directive (PLD), which entered into force in December 2024 and must be transposed into national law by 9 December 2026, provides clarity on this point: it explicitly introduces liability for software, and thus for AI. However, the PLD does not itself define the safety standard. Instead, it links liability directly to compliance with EU and national product safety law, making the AI Act and other compliance regimes (e.g., the Machinery Regulation, the Battery Regulation, UNECE Regulations, the Cyber Resilience Act or the GPSR) the de facto benchmarks for defectiveness. In other words: non-compliance with the applicable regulatory obligations can lead to liability exposure under the new PLD framework.

AI developer liability: Software and AI as products under the PLD

The PLD significantly broadens the definition of "product" for the purposes of strict liability. Under Article 4(1) PLD, software, including AI, now explicitly qualifies as a product, irrespective of how it is supplied or used: whether stored on a device, cloud-based, or delivered as software-as-a-service. The provider of an AI system will typically qualify as the manufacturer within the meaning of the PLD and is therefore subject to strict product liability.

In addition, the PLD places a stronger focus on components and related digital services. The directive explicitly covers components and integrated digital services that are essential to a product's functions, for example, AI-driven data processing for navigation, health monitoring, or smart home controls. Where such a component or service is integrated within the manufacturer's or provider's control, it falls within the scope of strict liability (Articles 4(3)-(5) PLD). This means that the provider of an AI component and the manufacturer of the final product into which it is integrated are jointly and severally liable; the claimant can sue either party directly (Article 8(1) PLD). Contractual exclusions or limitations of liability towards end users are not permitted.

The test for defectiveness as the hinge between liability and regulatory compliance

For all products, Article 7(1) PLD provides that "a product shall be considered defective where it does not provide the safety that a person is entitled to expect or that is required under Union or national law". This deceptively simple formulation operates as a gateway: rather than prescribing a distinct technical safety standard, the PLD incorporates the entire body of EU and national product safety law into the liability assessment by reference.

For AI systems, the practical consequence is that non-compliance with the safety requirements of the AI Act directly informs a court's assessment of defectiveness under product liability law. Other product safety regimes, such as the Machinery Regulation or the General Product Safety Regulation, may be relevant where AI is embedded in physical products.

The PLD's special rules for defects of digital products

Article 7(2) PLD introduces a series of additional circumstances that must be taken into account when assessing defectiveness, several of which are particularly significant for AI and digital products. Courts must consider "the effect on the product of any ability to continue to learn or acquire new features after it is placed on the market or put into service", "the reasonably foreseeable effect on the product of other products that can be expected to be used together with the product, including by means of inter-connection", and "relevant product safety requirements, including safety-relevant cybersecurity requirements".

Crucially, the PLD also abandons the traditional "factory gate principle" for digital products. Article 7(2)(e) provides that the relevant point in time for assessing defectiveness is not limited to the moment of placing on the market, but extends to "the moment in time when the product left the control of the manufacturer". In particular, the manufacturer retains control if it has the ability to supply updates. Because AI systems are typically subject to continuous learning, updates and patches throughout their lifecycle, and manufacturers commonly retain control through over-the-air software updates, defects arising only after market placement can still form the basis of liability. This creates a de facto obligation to maintain compliance with current safety standards. The failure to provide security-relevant updates can also itself constitute a product defect.

This illustrates that the relevant standard for determining the potential defectiveness of AI-powered products will be a dynamic one, considering ongoing product enhancements through continuous learning, software updates and interaction with other systems.

AI Act safety rules as the liability benchmark

As a centerpiece of EU technology regulation, the AI Act establishes a comprehensive set of safety obligations for High-Risk AI systems and General-Purpose AI models. Their relevance for product liability under the PLD depends, however, on the nature of the obligation in question.

Certain AI Act requirements directly shape the technical safety characteristics of the AI product itself. These product-inherent safety rules should be captured by the PLD's defectiveness standard under Article 7(1). They include in particular:

  • Accuracy, robustness and cybersecurity (Articles 15 and 55): mandated standards for the AI product’s operational resilience, recognizing that cybersecurity flaws or insufficient robustness can themselves constitute product defects.
  • Data governance and quality (Article 10): requirements for training data quality that directly determine how safely the AI product performs in practice.
  • Transparency and provision of information (Articles 13 and 50): obligations ensuring that AI products are sufficiently transparent to users, enabling safe and informed use.

A breach of these obligations has an immediate bearing on whether the AI system provides the level of safety required under Union law within the meaning of Article 7(1) PLD, and can therefore directly inform a court's assessment of defectiveness.

Beyond these product-inherent rules, the AI Act also imposes a range of process-oriented obligations on providers. These include:

  • Risk management systems (Article 9): a continuous risk management process throughout the AI system's lifecycle.
  • Technical documentation and record-keeping (Articles 11 and 55): comprehensive documentation requirements accompanying the AI product’s development and deployment.
  • Human oversight (Article 14): requirements enabling effective human supervision of AI system operation.
  • Conformity assessment (Article 43) and post-market monitoring (Article 72): ongoing procedural obligations extending well beyond initial market placement.

It is questionable whether a breach of these process-oriented obligations can equally serve as a basis for defectiveness under Article 7(1) PLD. While these obligations are functionally designed to ensure product safety across the AI system's lifecycle, they are not, as such, requirements that the product itself must meet, and their breach does not necessarily translate into an unsafe product. This suggests that they should not qualify as safety requirements relevant under the PLD. The wording of the PLD does not resolve this question conclusively, and it remains to be seen how courts will ultimately draw this line.

The amplifier: rebuttable presumptions and disclosure obligations

Beyond serving as the benchmark for defectiveness, non-compliance with the AI Act may even trigger a rebuttable presumption of defect. The PLD introduces several distinct mechanisms that facilitate claims, each operating independently. The most critical for AI products are:

First, a product's defectiveness is presumed where the claimant demonstrates that it does not comply with "mandatory product safety requirements laid down in Union or national law that are intended to protect against the risk of the damage suffered by the injured person" (Article 10(2)(b) PLD). For AI systems, this means that a demonstrated breach of the AI Act's safety obligations, for instance, a failure to maintain the required risk management system or to meet cybersecurity standards, can trigger a legal presumption of defectiveness. The manufacturer must then rebut that presumption.

Second, non-compliance with a disclosure order may independently trigger a rebuttable presumption of defect under Article 10(2)(a) PLD. The PLD grants courts broad powers to order defendants to disclose "relevant evidence", which for AI products could include design documentation, software source code, AI training and validation data, and test reports. A failure to comply or a failure to present the evidence "in an easily accessible and easily understandable manner" (Article 9(6) PLD) can itself give rise to a presumed defect.

Third, and perhaps most consequentially for AI, in cases of "scientific and technical complexity" courts may presume both defect and causation where the claimant shows that each is likely (Article 10(4)(a) PLD). AI systems are prime examples of such complexity. This significantly lowers the evidentiary hurdles for claimants and in practice means a de facto reversal of the burden of proof: the claimant only needs to demonstrate a likelihood of defect and causation, and the burden effectively shifts to the manufacturer to prove that its product was not defective.

Taken together, these mechanisms create a considerably easier pathway to recovery for claimants, particularly in AI-related cases.

Conclusion

For AI systems, the requirements of the AI Act do not merely constitute regulatory obligations. Their violation can directly trigger product liability claims and, under the new rules of evidence, give rise to a legal presumption of defectiveness. Businesses providing AI products are well-advised to treat safety compliance and liability risk as two sides of the same coin.

Tags

ai, consumer, consumer protection, disputes, europe, litigation, manufacturing, product liability, product risk team, regulatory

Authors

Moritz Becker, Partner (Düsseldorf)

Theresa Ehlen, Partner (Düsseldorf, Frankfurt am Main)

Lutz Riede, Partner (Vienna, Düsseldorf)

Anita Bell, Principal Associate (Düsseldorf)