Product Liability and AI (Part 3): Commission plans to overhaul EU product liability law
The Commission adopted two proposals to adapt liability rules to the digital age, the circular economy and the impact of global value chains. First, it proposes to modernise the existing rules on strict liability of manufacturers for defective products through a newly drafted Product Liability Directive. The proposal is mainly driven by the challenges that the digital economy and AI pose for the directive’s decades-old definitions and concepts, but the new rules will apply equally to all products, from garden chairs to medical devices. Second, the Commission suggests a targeted harmonisation of national liability rules, facilitating compensation for damage caused by AI-driven products. The proposals are far more claimant-friendly than expected. Should they be implemented, the European product liability regime will change drastically.
Focus on product liability
Artificial intelligence (“AI”) is omnipresent, an area of strategic importance and a key driver of economic development. At the same time, numerous novel legal questions arise in connection with AI, especially with regard to product liability. Because the existing rules do not always provide adequate answers to these questions, and to avoid a national patchwork, the EU institutions are striving to establish and harmonise specific AI legislation. Following its proposal for a regulation laying down harmonised rules on artificial intelligence (“draft AI Act”), the EU Commission now wants to tackle product liability issues specifically and presented a proposal for a directive on liability for defective products (“draft PL Directive”) and a proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence (“draft AI Liability Directive”) on 28 September 2022. While the draft AI Act aims to ensure that high-risk AI systems comply with safety and fundamental rights requirements (e.g. data governance, transparency, human oversight), the proposed liability rules will ensure that it is possible to seek compensation when AI systems cause damage.
From the very beginning, product liability has been at the centre of the EU’s regulatory efforts in the context of AI. This is not surprising, as stakeholders like consumer organisations, NGOs and members of the public have repeatedly expressed the view that the very core characteristics of AI – opacity, complexity, limited predictability and semi-autonomous behaviour – make it difficult to pursue or enforce liability claims. Furthermore, although the Commission’s own evaluation of the existing regime, published in 2018, concluded that the rules were in general fit for purpose, it already noted then that it was legally unclear how to apply the directive’s decades-old definitions and concepts to products in the modern digital economy. The application of the directive in the area of emerging digital technologies was further analysed in the White Paper on Artificial Intelligence and other Commission and expert reports.
That being said, the now proposed changes are extremely claimant-friendly and will facilitate bringing strict and fault-based product liability claims. On top of that, the draft PL Directive requires member states to ensure that a person acting on behalf of one or more injured persons can bring strict product liability claims – in connection with the Collective Redress Directive, mass consumer product liability claims will be possible in every member state.
Draft PL Directive
The draft PL Directive modernises and reinforces the established rules on strict liability of manufacturers for the compensation of personal injury, damage to property or data loss caused by unsafe products. The proposal in particular aims to ensure that liability rules reflect the nature and risks of products in the digital age. To reach this aim, the Commission proposes amendments to almost all provisions of the current Product Liability Directive (“PLD”).
Main elements
As its main amendments, the revised draft PL Directive
- confirms that AI systems, AI-enabled goods and software are in-scope “products”, meaning that compensation is available when damage is caused by defective AI or software, without the injured person having to prove the manufacturer’s fault;
- stipulates that not only hardware manufacturers but also software developers and providers, as well as providers of digital services that affect how the product works (such as a navigation service in an autonomous vehicle), can be held liable;
- ensures that there is always an economic operator in the EU against whom a compensation claim can be made, which means a further extension of in-scope operators: where no authorised representative is available, claims can also be brought against fulfilment service providers (which provide packaging or similar services), and secondary liability is extended beyond retailers and distributors to online platforms that allow consumers to conclude distance contracts;
- stipulates that manufacturers can be held liable for changes they make to products they have already placed on the market, including when these changes are triggered by machine learning or software updates;
- covers material losses due to the loss, destruction or corruption of data and medically recognised harm to psychological health;
- alleviates the burden of proof in complex cases, which could include certain cases involving AI systems, and when products fail to comply with safety requirements;
- introduces new rebuttable presumptions for defect and causation: although the burden of proof for defect, damage and causation remains with the claimant in general, the proposal stipulates three presumptions which might in practice shift the burden of proof to the defendant:
- the product is presumed to be defective if the defendant fails to comply with an obligation to disclose relevant evidence, or if the claimant establishes that the product does not comply with safety regulations or that the damage was caused by an obvious malfunction of the product during normal use or under ordinary circumstances;
- causation is presumed, where it has been established that the product is defective and the damage caused is of a kind typically consistent with the defect in question;
- in certain cases, defect and/or causation are presumed where a claimant “faces excessive difficulties”, due to scientific or technical complexity.
- stipulates new obligations to produce documents and information: If claimants are able to prove a prima facie case, defendants will be obliged to disclose “relevant evidence”. Although only “necessary and proportionate” evidence to support the claim must be produced and adequate protections must be put in place for trade secrets, the introduction of these one-sided disclosure obligations would be a novelty in many European legal systems.
Further amendments
The test for determining whether a product is defective – i.e. whether the product provided the safety which the public at large is entitled to expect – generally remains the same as under the PLD. However, to reflect the changing nature of AI products, factors such as self-learning functions or the interconnectedness of products have been added to the non-exhaustive list of factors to be taken into account in the assessment of defectiveness. The newly added criterion of user expectations, i.e. “the specific expectations of the end-users for whom the product is intended”, could introduce undesirable subjectivity into what is currently an objective entitled-expectations test.
The proposal also keeps the possibility for the manufacturer to exempt itself from liability under certain circumstances, but adapts the rules to take into account the specific characteristics of digital and AI products. According to the explanatory memorandum to the draft, in the interest of uniform consumer protection as well as a level playing field for manufacturers, the so-called “state of the art defence” (meaning the exemption afforded to manufacturers for scientifically and technically undiscoverable defects) should apply in all member states, and the current possibility for member states to derogate from it should be abolished.
A further amendment relates to the 10-year cut-off period, which shall be extended to 15 years in cases where the symptoms of a personal injury are slow to emerge (this will probably not be of particular relevance for AI products, but rather, for example, in cases following ingestion of a chemical product). Also, the point in time at which the cut-off period begins to run will change in some cases (e.g. the date of the substantial modification of a product instead of its initial placing on the market).
Draft AI Liability Directive
While the Product Liability Directive is the centrepiece of product liability legislation in the EU, its application is limited to claims against the “manufacturer” for damage caused by defective products; to material losses due to loss of life, damage to health or property, and data loss; and to claims made by private individuals. Alongside these rules, national fault-based liability regimes exist as a “second pillar”. To ensure that the facilitations for claimants apply to all types of claims relating to malfunctioning AI systems, the new AI Liability Directive aims at a targeted reform of national fault-based liability regimes. It shall apply to claims against any person for fault that influenced the AI system which caused the damage; to any type of damage covered under national law (including damage resulting from discrimination or from a breach of fundamental rights such as privacy); and to claims made by any natural or legal person.
The proposal focuses on only two measures to take the specificities of national fault-based liability regimes into account: the disclosure of information about AI systems and the presumption of a causal link between the AI system and the damage. Although the proposal is thus limited in scope, it is important to note that these are far-reaching and practically very relevant aspects. Moreover, the proposal also provides for an evaluation of the directive and envisages further instruments (e.g. no-fault liability rules) should the evaluation deem additional measures necessary.
Disclosure obligations
The Commission proposes that courts may, upon request of a (potential) claimant, order not only providers of high-risk AI systems but also persons subject to the provider's obligations and users of such systems to disclose or preserve relevant evidence at their disposal about a specific high-risk AI system that is suspected of having caused damage, provided that certain conditions are met.
The disclosure and preservation obligations shall be subject to restrictions, which are in parts similar to the ones under the draft PL Directive:
- The (potential) claimant shall have to substantiate the application with sufficient evidence of the plausibility of a claim for damages.
- The claimant shall have to undertake all proportionate attempts at gathering the relevant evidence from the defendant. However, the courts may also order disclosure prior to proceedings so that the potential claimant can assess the chances of the claim and avoid unnecessary litigation.
- The courts shall be obliged to order only the disclosure of evidence that appears necessary and proportionate to substantiate the claim. As regards proportionality, the court shall weigh the legitimate interests of all parties involved (e.g. business secrets and confidential information) in order to achieve a fair balance.
Presumption of causality
For a liability claim to be successful, the tortfeasor's act or omission must have been causal for the damage. In relation to AI systems, this means that the act or omission giving rise to a breach of the duty of care must have produced or failed to produce an output that led to the damage. According to the Commission, proving this causal link could be difficult for a claimant, who would have to explain the inner functioning of the AI system. Therefore, the proposal provides for a presumption of the causal link if certain conditions are met:
- The claimant can demonstrate fault or there is a presumption of fault with regard to the damaging event in question. For high-risk AI systems, the Commission considers only breaches of certain obligations under the proposed AI Act relevant as damaging events.
- It can be considered reasonably likely that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output.
- The claimant has demonstrated that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage.
Moreover, in the case of a claim for damages concerning a high-risk AI system, a national court shall not apply the presumption where the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link. In the case of a claim for damages concerning an AI system that is not a high-risk AI system, on the other hand, the presumption shall only apply where the national court considers it excessively difficult for the claimant to prove the causal link. There are also restrictions for cases against a defendant who used the AI system in the course of a personal, non-professional activity. Finally, the defendant shall have the right to rebut the presumption.
Way forward
The adoption of the proposals marks the beginning of the legislative process. The European Parliament and the Council will now examine the proposals thoroughly to define their respective positions. A political agreement, which will be the basis for the formal adoption of the directives by the co-legislators, may well require amendments and compromises in the course of the discussions, not least because the AI Act is being negotiated intensively in parallel. If and when the directives are adopted, the member states will have some time to implement them.
Yet, companies engaged in AI are well-advised to monitor the further legislative developments closely and to implement appropriate risk prevention mechanisms. If adopted as proposed, the directives will set new and very claimant-friendly standards for product liability, not only for AI systems but also in general.
Moreover, even further-reaching amendments are on the horizon: the draft AI Liability Directive provides for a review of the directive within five years after the end of the implementation period. The Commission shall examine whether the objectives have been reached and, if necessary, propose further measures, such as the introduction of harmonised no-fault liability rules for certain AI systems and mandatory insurance for the operation of AI systems.