Product liability in the AI age
By Andrew Austin
Legal mechanisms exist to ensure businesses can be held liable if their products cause injury or loss. But in a world where products are increasingly complex – and are starting to incorporate artificial intelligence (AI) – do the rules need to change?
Current product liability laws were designed for a world where goods were relatively simple physical items that didn’t change much once they left the factory. But many products that are coming to market today are stretching these boundaries – because they are a complex combination of hardware, software and services, and because we are moving towards a future in which they can evolve without human input.
This sort of AI – that is, a machine learning algorithm that can learn and ‘think’ on its own – is still some way off. There has been a lot of debate in academic circles about whether such technology should be considered to have separate legal personality – making it, rather than its creators, liable for its impact – but it’s doubtful whether any regulator would want that, particularly where consumers are concerned. After all, authorities are there to ensure people can seek redress, and historically product liability regimes have channelled responsibility towards the producer and its insurers.
If a product causes a consumer to suffer injury or loss, liability claims are brought under legal regimes such as the EU’s Product Liability Directive. In time these frameworks may need to change as new technologies emerge, but the EU regime is technology-neutral and the courts have applied it to a wide range of products over the years, many of which did not exist when the Directive became law in 1985.
In terms of future reform, the European Commission is leading the way via its expert group on liability and new technologies. The group comprises two sub-groups, one considering how the current Product Liability Directive needs to change, and the other looking at the wider ethical and legal implications of AI and related technologies. I’m a member of the former, which is drafting guidance for courts and businesses to help them apply the EU product liability regime to cases involving technologies such as AI. This process may also lead the Commission to propose formal changes to the Directive in time. We’ll see the first output from the sub-group this year.
A ‘product’ is currently defined in the Directive as any movable good plus electricity, but the Commission is exploring whether this should also cover software that’s embedded in, or downloaded on to, a physical product (and therefore forms part of it). There seems to me to be no reason why a consumer shouldn’t be able to make a claim under the regime against the manufacturer of a household item that catches fire as a result of a problem with its operating software, when they could definitely do so if it was a physical component that failed.
However, there are situations that could arise where the language of the Directive would run up against some more fundamental barriers. Say you were in a self-driving car that swerves and causes an accident. The responsibility could lie with you; the car-maker; the designer of the self-driving system; the developer of the mapping or sensor software; the company that provides data to these applications; the supplier of network connectivity; a cyber-attacker; or a combination of all of them. It has always been clear that the regime does not cover services, and as there is increasingly a fine line between a physical product and the service it delivers, at some point the rules themselves will need to be revised.

Another point of interest for the expert group is how liability should be allocated in the case of products made by 3D printing, particularly where the consumer builds the end product using designs supplied by one third party and a 3D printer supplied by another. The regime exists to protect consumers – but could it also channel liability towards them if something goes wrong?
In cases where the delineation between producer and consumer is more straightforward, the information provided to end users and other affected parties can make a big difference to the likelihood of a producer being found liable. The producer's separate contractual relationships with its component suppliers, software and data services providers and others may then determine whether it, or another party, ultimately ends up paying. As a result, contracts need to be in writing and to define each party's responsibilities clearly, while consumer-facing documentation needs to set out a product's intended uses and contain appropriate instructions and warnings. All this means that innovation teams need to engage with lawyers early so they can help the business manage risk.
Andrew Austin is a partner at Freshfields and a member of the European Commission’s expert group on liability and new technologies.