EU – Taking responsibility for artificial intelligence: New tort liability proposals
As part of its digital reform package, the European Commission has proposed a new Directive to harmonise certain national non-contractual civil liability rules (“AI Liability Directive”).
The proposed AI Liability Directive was issued at the end of September 2022 and is closely linked to other pieces of EU legislation, in particular the proposed AI Act (the “AI Act”), to which the AI Liability Directive consistently refers, as well as the Product Liability Directive. An analysis of the proposed AI Liability Directive in the context of product liability law can be found here.
Purpose – Compensation, not prevention
While the AI Act has a preventive scope (i.e., preventing AI-related damage from occurring), the proposed AI Liability Directive has a compensatory scope (i.e., compensating those who have nonetheless suffered damage).
The purpose of the AI Liability Directive is to ensure that persons claiming compensation for damage caused by an AI system enjoy the same level of protection as persons incurring damage from other products. The Commission believes a new and specific regulatory framework is required due to the particular characteristics of AI, such as opacity, autonomous behaviour, complexity and limited predictability, all of which challenge the application of existing liability rules.
However, to avoid encroaching upon national civil liability rules any more than necessary, the proposed AI Liability Directive focuses on two specific tools: (i) specific rules on the disclosure of evidence; and (ii) a new rebuttable presumption of causality.
Disclosure obligations for ‘high-risk’ AI
The proposed AI Act qualifies certain AI systems as ‘high-risk’. These are AI systems used in critical infrastructure, as safety components of products, and in certain essential private and public services. The AI Act provides for specific documentation and logging requirements for high-risk AI systems.
The proposed AI Liability Directive builds on these ex ante requirements by providing that national courts must be empowered to order the provider of such a high-risk AI system (as well as certain other persons who may be in possession of relevant information) to disclose, or to preserve, relevant evidence at their disposal. The disclosure mechanism works as follows:
- courts must be able to order such disclosure both in the context of proceedings on the merits and before such proceedings have been initiated, with different conditions applying in each case;
- on the one hand, the mechanism therefore aims to provide potential claimants with effective means to request disclosure before initiating proceedings, helping them identify potentially liable persons and relevant evidence for a claim. That may then serve to exclude incorrectly identified potential defendants, reducing unnecessary litigation;
- on the other hand, it does not give potential claimants leave to go on a ‘fishing expedition’: a request for disclosure by a potential claimant must be accompanied by facts and evidence sufficient to support a plausible claim for damages, and national courts must limit the disclosure to that which is necessary and proportionate to support a claim. The proposed Directive also provides that national courts are to be granted the power to protect the confidentiality of trade secrets in the context of such a disclosure;
- finally, where a defendant fails to comply with an order by a national court to disclose or to preserve evidence, the national court shall presume non-compliance with the relevant duty of care that the evidence requested was intended to prove. The defendant would, however, be able to rebut that presumption by submitting evidence to the contrary.
The definitions of the AI Act, to which the AI Liability Directive refers, are still going through the EU’s legislative process. The narrower definition of the notion of ‘AI system’, currently advocated by some members of the European Parliament, would limit the scope of the AI Liability Directive, whereas a broader and more inclusive definition would lead to wide applicability of the new liability rules.
Rebuttable presumption of causal link in the case of fault
The second tool employed by the Commission to alleviate the burden of proof for persons harmed by AI systems is a rebuttable presumption of a causal link between: (i) the fault of the defendant; and (ii) the output produced by the AI system (or the failure of the AI system to produce an output).
This presumption is intended to address the difficulty of proving that a specific input, for which the potentially liable person is responsible, caused a specific AI system output that in turn led to the damage. It is therefore limited in scope but still very valuable for claimants in AI-related liability cases.
For the presumption of causality to apply, three conditions must be fulfilled:
- it must be established that the defendant committed a fault, consisting in a breach of a duty of care that is directly intended to protect against the damage that occurred. For ‘high-risk’ AI systems, the proposed Directive defines fault by reference to an exhaustive list of obligations under the proposed AI Act;
- it is reasonably likely that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output; and
- the claimant has demonstrated that the output produced by the AI system (or the failure of the AI system to produce an output) gave rise to the damage.
The proposed Directive provides for a few additional limitations to the application of this presumption. For example, in the case of a claim for damages concerning a standard (not ‘high-risk’) AI system, the presumption only applies where the national court considers it excessively difficult for the claimant to prove the causal link. With regard to ‘high-risk’ AI systems, on the other hand, the presumption does not apply when the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link.
Finally, the presumption of causality is rebuttable. Nonetheless, it will likely be difficult for a defendant to adduce evidence sufficient for such a rebuttal. This would require either evidence of a negative fact (i.e., that the breach of a duty of care did not cause the harm suffered) or evidence that the harm is the result of another cause (which would require the defendant to have a full view of the facts).
Additional bits and bobs
A few additional elements are notable:
- the proposed Directive only addresses non-contractual civil liability. It does not apply to contractual or criminal liability;
- the transposition period is intended to be two years after the entry into force of the Directive;
- the proposed AI Liability Directive follows a minimum harmonisation approach. National laws will therefore be able to provide additional or more far-reaching support to claimants in cases of damage caused by AI systems, for example by introducing or maintaining reversals of the burden of proof or strict liability regimes; and
- the proposed Directive provides that the Commission shall review the application of the Directive five years after the end of the transposition period, with a particular focus on evaluating the appropriateness of no-fault liability rules for claims against the operators of certain AI systems, and the potential need for mandatory insurance coverage for the operation of certain AI systems.