Tackling AI liability: Insights from the latest EPRS study

In response to criticism from various stakeholders and upon request from the European Parliament's Committee on Legal Affairs (read more here), the European Parliamentary Research Service (EPRS) has conducted a complementary impact assessment of the proposal for a directive on adapting non-contractual civil liability rules to AI (AILD, read more on the proposal here). Focusing on specific research questions, the study explores shortcomings and potential improvements with a view to reducing market fragmentation and ensuring a cautious yet progressive pathway for AI deployment across Europe.

The study’s main focus points

The EPRS criticizes the proposal for its insufficient analysis of the available policy options. Notably, the EPRS highlights the options of adopting an AI liability regulation (instead of a directive), combining strict liability with liability caps, and introducing either a broader range of negligence presumptions or a full, yet rebuttable, reversal of the burden of proof. Further general criticism concerns the insufficient analysis of the costs and benefits of the available regulatory options, especially with respect to a strict liability regime as proposed earlier by the EU Parliament.

The EPRS also addresses particular aspects of the AILD proposal, including the following:

  • Scope: In contrast to the new Product Liability Directive (PLD, read more here), the AILD is to cover professional users in addition to non-professional ones, including harm done to professional users. The AILD is also to cover areas beyond the PLD, such as discrimination, sustainability effects (i.e. climate harms and sustainability risks arising from the mass deployment of large AI models) and pure economic loss. More generally, the EPRS recommends expanding the current proposal into a comprehensive software liability instrument. According to the EPRS, this would align with the PLD, which also applies to non-AI software, avoid market fragmentation and enhance regulatory clarity across the EU.
  • Definitions: To ensure coherence and legal clarity, the AILD should adopt the definitions used in the AI Act (read more on the AI Act here). With regard to risk categories, the EPRS recommends moving from “high-risk” to “high-impact” AI systems, which would include general-purpose AI systems such as ChatGPT, systems governed by the old legislative framework such as autonomous vehicles, and insurance applications beyond health and life insurance.
  • Presumptions and rebuttals: While generally supporting the AILD concept of rebuttable presumptions, the EPRS suggests various amendments to ensure alignment with the AI Act. For instance, the causality presumption should be rebuttable in cases where initial violations of the AI Act are later remedied. For AI systems banned under the AI Act, on the other hand, the EPRS proposes strict liability for any damage they cause, without room for rebuttal. The EPRS also recommends including a causality presumption for post-processing AI Act violations (e.g. a violation of the obligation of human oversight, cf. Art. 14, 26 (5), (2) of the AI Act). Furthermore, it clarifies that merely acknowledging the inherent fallibility of machine learning models should not automatically serve as a valid defence.
  • Disclosure of evidence: Should the AILD adhere to a fault-based system, the EPRS highlights the necessity of making evidence accessible and understandable to non-experts, such as consumers or their legal counsel, to facilitate better litigation outcomes. For claims by non-competitors, for instance, the EPRS recommends lowering the burden of proof for plausibility and requiring only the demonstration of damage and involvement of an AI system to trigger disclosure.
  • Joint liability: The EPRS believes that the fair sharing of liability in the AI value chain requires a streamlined redress framework in the AILD to reduce reliance on varied national laws. It suggests different policy options, including a presumption of equal liability among entities in the value chain and the protection of downstream parties through prohibitions of contractual clauses that waive or significantly modify the right of recourse. The EPRS also recommends that the EU Commission, as foreseen in Article 41 of the Data Act, develop model contractual terms and standard contractual clauses for the allocation of AI liability along the value chain.

Outlook

The EPRS's study brings to light various aspects for discussion in the evolving landscape of AI regulation. Alignment with the AI Act and the PLD, as well as the emphasis on a more inclusive and clear regulatory framework, is particularly vital. This approach would not only reduce legal complexities but also ensure that the liability framework keeps pace with technological advancements.

That being said, some of the EPRS’s recommendations are far-reaching and controversial. This applies particularly to the proposals on strict liability and on moving to a regulation instead of a directive, as these ideas would interfere deeply with the member states’ liability systems. Other proposals might be more feasible. It thus remains to be seen how the EU Commission, which authored the original proposal and is the only institution empowered to initiate EU legislation, will react to the new study and other criticism in the coming months.