Product compliance and liability in the digital age
The landscape of product liability and product compliance in the EU is undergoing a transformation driven by technological progress and an increased focus on consumer protection. Recent legislative reforms, particularly the new Product Liability Directive (PLD), the General Product Safety Regulation (GPSR), the Machinery Regulation and the Artificial Intelligence Act (AI Act), collectively introduce heightened liability risks for businesses, establish claimant-friendly frameworks and seek to address the evolving risks posed by interconnected and AI-based products. The key changes and areas of interplay between these regulations are set out in this client briefing.
1. Key changes under the PLD 1
The revised PLD expands the scope of products covered and facilitates enforcement of claims, substantially increasing the risk profile for economic operators. Key changes include:
- Expanded definition of “product”: Strict product liability now explicitly covers software, including AI, as a product (Article 4(1) PLD), irrespective of its mode of supply or usage (e.g., stored on a device, cloud-based, or software-as-a-service).
- Stronger focus on “components” and “related services”: The PLD explicitly covers components and integrated digital services essential to product functions (e.g., AI data for navigation, health monitoring, smart fridge controls). Liability applies only where the component or service is integrated within the manufacturer’s or provider’s control (Articles 4(3), 4(4), 4(5), 8(1) PLD).
- Broader scope of protected legal interests: The PLD now also covers medically certified psychological harm and the loss or corruption of non-professional data, in addition to physical injury and property damage. Data destruction is only covered if it results in material loss; data leaks and breaches of data protection rules are not included (Article 6(1) PLD).
- Expanded list of liable parties: Liability now extends beyond manufacturers and importers to also cover authorized representatives, fulfilment service providers, providers of online platforms and parties that substantially modify a product (e.g., via physical changes or software updates).
- Disclosure: Courts may order disclosure of evidence at the request of a claimant who presents sufficient facts to support a “plausible” compensation claim. Disclosure orders may cover technical information and may even oblige the defendant to compile new materials. They are limited to what is “necessary and proportionate”, with courts having to balance confidentiality and trade secret concerns.
- Rebuttable presumption of defects/shift in burden of proof: The PLD introduces several rebuttable presumptions in Article 10 PLD. For example, non-compliance with a disclosure order may trigger a rebuttable presumption of defect under Article 10(2)(a) PLD. More importantly, in technically or scientifically complex cases, courts may presume both defect and causation where the claimant shows that each is likely, significantly lowering evidential hurdles and, in practice, approaching a reversal of the burden of proof.
- Removal of liability caps: The PLD abolishes the previous financial caps on liability and the deductible threshold for property damage, meaning economic operators face potentially unlimited compensation claims.
- Extended expiry period: For personal injuries with long latency periods, the expiry period for claims is extended from 10 to 25 years after the product was placed on the market or put into service.
2. Direct link between product liability and product safety compliance
The PLD directly links product liability to compliance with EU and national product safety rules. A product is defective if it does not provide the safety a person is entitled to expect or that the law requires, taking into account relevant product safety and cybersecurity requirements (Article 7(1), 7(2)(f) PLD).
If a claimant shows that mandatory safety requirements designed to prevent the type of damage suffered have been breached, defectiveness is presumed and need not be proved separately (Article 10(2)(b) PLD).
Recent regimes such as the GPSR, the Machinery Regulation and the AI Act set safety standards that feed into this assessment, including cybersecurity, interaction with other products and products that can learn or be updated after being placed on the market.
2.1 General Product Safety Regulation 2
Article 6 GPSR mandates that safety assessments for products must now also consider the following aspects:
- Interaction with other products: The effect a product has on other products and the effect other products may have on it, i.e. how a product interacts with other hardware or software in a system.
- Cybersecurity features: Appropriate protection against external influences, including malicious third parties impacting safety. A cybersecurity vulnerability can now constitute a direct product defect.
- Evolving, learning, or predictive functionalities: The safety implications of AI-driven systems that change behavior over time.
Furthermore, the GPSR introduces new or stricter obligations for economic operators and, for the first time, also for online marketplaces. For example, under Article 9 GPSR, manufacturers must conduct internal risk analyses covering new criteria such as cybersecurity and AI processes, maintain technical documentation for 10 years, apply enhanced labelling requirements, establish consumer complaint channels, and take immediate corrective measures (including recalls) for dangerous products. In general, economic operators shall place or make available on the market only safe products (Article 5 GPSR).
2.2 Machinery Regulation 3
The Machinery Regulation covers machinery and related products (e.g., safety components, lifting accessories). Critically, software, including AI-driven software, is now explicitly recognized as a potential safety component.
In particular, the Machinery Regulation modernized its safety requirements, directly integrating new technologies (see Article 8 and Annex III):
- Protection against corruption: Machinery and related products, including control systems, must be designed so that connections to other devices do not lead to hazardous situations. Safety-critical hardware, software and data must be adequately protected against accidental or intentional corruption, and evidence of any intervention must be collected.
- AI and autonomy: Safety and reliability requirements for controls are extended to systems with (partially) autonomous or self-evolving behavior or logic (i.e., AI systems). They must operate within defined tasks or spaces, allow human intervention at all times and log data on safety-related decision-making processes for software-based safety systems. This is crucial for accountability and post-incident analysis.
- Human-machine interaction: Risks related to the coexistence of humans and machines in shared spaces are explicitly addressed.
- Third-party conformity assessment: High-risk AI systems used as safety components (e.g., collision avoidance, autonomous navigation) now require mandatory third-party conformity assessment.
2.3 Artificial Intelligence Act (AI Act) 4
AI systems functioning as safety components of machinery (as per the Machinery Regulation) are automatically classified as high-risk AI systems under the AI Act, triggering stringent obligations, including:
- Robust risk management systems: Continuous, iterative processes throughout the AI system’s lifecycle, addressing foreseeable risks and misuse, as well as interaction with other systems (Article 9).
- Data governance and technical documentation: High-quality training, validation, and testing data, active mitigation of biases, and comprehensive technical documentation retained for 10 years (Articles 10 and 11).
- Record-keeping: Enabling automatic recording of safety-relevant events, essentially a “blackbox” for AI systems (Article 12).
- Transparency: Clear instructions and safety information for deployers, enabling them to understand the AI system’s functionalities, capabilities, and limitations (Article 13).
- Human oversight: Mechanisms to ensure effective human oversight, including the possibility of intervention or stopping the system (Article 14).
- Accuracy, robustness, and cybersecurity: Mandated standards for these attributes – again, recognizing that cybersecurity flaws in AI can lead to product defects. Specifically on cybersecurity, it requires that high-risk AI systems “shall be resilient against attempts by unauthorized third parties to alter their use, outputs or performance by exploiting system vulnerabilities” (Article 15).
3. Outlook and practical considerations
The combined impact of the new PLD and the enhanced requirements under the GPSR, the Machinery Regulation and the AI Act creates a complex risk environment. Businesses now carry extended responsibilities across the entire product lifecycle, especially for connected and AI‑driven products, with cybersecurity a key factor in assessing safety and defectiveness.
To make this landscape manageable and defensible, key actions include:
- Integrate cybersecurity by design
- Review supply contracts
- Update document retention policies
- Implement proactive risk monitoring
- Ensure product liability insurance
For further analysis of the PLD, please see:
EU Product Liability: Disclosure, Data & Your Business | Freshfields
The new Product Liability Directive | Freshfields
For further analysis of the GPSR, please see here:
For further analysis of the AI Act, please see:
Our EU AI Act unpacked blogpost series
-------------------------
1 The PLD entered into force in December 2024 and must be implemented into national law by 9 December 2026.
2 Effective from 13 December 2024, the GPSR modernises the general product safety framework for consumer products.
3 The Machinery Regulation, applying from 20 January 2027, replaces the former Machinery Directive and modernises safety obligations for machinery and related products for the digital age.
4 Based on their risk level, the AI Act directly regulates AI systems across their lifecycle with obligations for providers, deployers, authorised representatives, importers, distributors, and third-party suppliers of AI systems. The AI Act applies if in-scope products are used in the EU or if the use of in-scope products affects persons in the EU. Its application dates are staggered, with core prohibitions and General Purpose AI (GPAI) rules already in force (February and August 2025), while rules for high-risk AI systems are expected to apply later (also depending on the outcome of the Digital Omnibus initiative).
