The Expert Group's Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies: a critical assessment
Related papers
ERA Forum
The revision of the Product Liability Directive plays a key role in the policy strategy of resolving what might be called the artificial intelligence liability jigsaw puzzle. By adapting the Product Liability Directive to the challenges of artificial intelligence systems, several policy quandaries have been aptly addressed and squarely resolved. Two bridges have been built: first, a bridge between the Artificial Intelligence Act and national fault-based liability rules (draft Artificial Intelligence Liability Directive), and second, a complementarity bridge between the Product Liability Directive and fault-based liability rules (draft Revised Product Liability Directive). The second interplay largely revolves around a generously expanded and invigorated product liability regime. These policy choices do not ensure that the jigsaw puzzle produces a fully harmonised image of liability in respect of artificial intelligence, insofar as the system of liability is still partial...
Luisa Antoniolli and Paola Iamiceli (eds), "The making of European Private Law: changes and challenges", University of Trento (forthcoming 2023)
The aim of this article is to demonstrate that applying the principle of subsidiarity to European regulation of compensation for damage attributable to artificial intelligence requires more than adjustments to fault-based liability: it also calls for the creation of compensation funds for injuries caused by high-risk artificial intelligence systems. This conclusion is supported by an analysis of the relationship between the innovation principle and the precautionary principle in the regulation of artificial intelligence, and by the specific features of this emerging digital technology.
Civil liability applicable to artificial intelligence: a preliminary critique of the European Parliament Resolution of 2020, 2020
On 20 October 2020, the European Parliament approved a Resolution with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)) (the 2020 Resolution). The Resolution highlighted the importance of defining a clear and harmonised civil liability regime in Europe for the development of artificial intelligence technologies and the products and services that benefit from them, so as to provide due legal certainty for producers, operators, affected persons and other third parties. The same motivation had previously led the European Parliament to put forward a series of proposals on the subject of liability in its 2017 Resolution (European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))). There are some important differences between the two documents. These differences reflect the deeper analysis that the European Commission, in particular, has been engaged in with regard to civil liability for harm attributable to artificial intelligence systems. On this subject, we may note, most recently, the "Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics" (COM(2020) 64 final, the Report), which accompanies the European Commission White Paper on Artificial Intelligence of 19 February 2020 (COM(2020) 65 final, the White Paper). One important element underpinning the 2020 Resolution is the distinction made between two different regimes for establishing civil liability for harm attributed to artificial intelligence systems: strict liability for high-risk situations, and subjective liability with a presumption of fault in other situations. The aim of these brief notes is, on the one hand, to provide a critical assessment of how far this option corresponds to the assumptions on which the Resolution is based, taking a brief look also at the concept of wrongdoing proposed and the scope of compensable harm, and, on the other hand, to assess the adequacy of the distinction indicated in the light of the choices made in the 2017 Resolution and, above all, in the light of the challenges that the development of artificial intelligence raises for the traditional foundations on which compensation regimes have rested up to the present.
Artificial intelligence and civil liability—do we need a new regime?
International Journal of Law and Information Technology
Artificial intelligence (AI) is almost ubiquitous, featuring in innumerable facets of daily life. For all its advantages, however, it carries risks of harm. In this article, we discuss how the law of tort should deal with these risks. We take account of the need for any proposed scheme of liability to protect the existing values of tort law without acting as a barrier to innovation. To this end, we propose a strict liability regime in respect of personal injury and death, and a bespoke fault-based regime for dignitary or reputational injuries. For other losses, we take the view that there is no justification for introducing any new regime, on the basis that AI applications do not introduce substantial added degrees of risk that would justify departing from the existing scheme of liability arising under the current law of tort.
Liability and emerging digital technologies: an EU perspective
forthcoming in Notre Dame Journal of International & Comparative Law, Volume 11, 2020
This Article seeks to ascertain whether the current liability regimes are fit for the new digital environment and to envisage possible measures to face the new reality. To this end, it first reconstructs the current liability framework, which at the European level is quite fragmented and only partially harmonised, and analyses the features of emerging digital technologies (EDTs) to illustrate how they challenge the current legal landscape, to the point of questioning traditional liability notions. Second, it surveys the EU institutions' position on these challenges and focuses on the findings of the Report on Liability for AI and emerging digital technologies, as it provides a valid starting point for discussing any adjustments that may be needed. Finally, it highlights the need for a multi-faceted approach to tackle the ever-changing issues raised by EDTs, through an overview of the most viable options to complement liability rules, also from an ex-ante perspective.
Producer Liability for AI-Based Technologies in the European Union
International Law Research
The manufacturer's liability for defective products has remained almost unmodified since 1985, when Directive 85/374/EEC (PLD) was enacted. New technology based on artificial intelligence (AI) could, however, bring about a turning point in the regulation if concepts such as "product" and "defect", or aspects such as the "grounds of liability", the so-called "development risks defense", and "solidarity" (joint and several liability), are reconsidered. The Group of Experts on Liability and New Technologies (NTF), in its report "Liability for AI and other emerging digital technologies", recommends, inter alia, the regulation of two different civil liability regimes: strict liability and fault-based liability. It will thus be necessary to determine precisely the cases to which these regimes apply and how to deal with "uncertain causation". The alleviation of the victim's burden of proof should also be considered. From the various documents being published, it appears that...
The European AI Liability Directives: Critique of a Half-Hearted Approach and Lessons for the Future
The optimal liability framework for AI systems remains an unsolved problem across the globe. With ChatGPT and other large models taking the technology to the next level, solutions are urgently needed. In a much-anticipated move, the European Commission advanced two proposals outlining the European approach to AI liability in September 2022: a novel AI Liability Directive (AILD) and a revision of the Product Liability Directive (PLD). They constitute the final cornerstone of AI regulation in the EU. Crucially, the liability proposals and the AI Act are inherently intertwined: the latter does not contain any individual rights for affected persons, and the former lack specific, substantive rules on AI development and deployment. Taken together, these acts may well trigger a "Brussels effect" in AI regulation, with significant consequences for the US and other countries.
Against this background, this paper makes three novel contributions. First, it examines the Commission proposals in detail and shows that, while they take steps in the right direction, they ultimately represent a half-hearted approach: if enacted as foreseen, AI liability in the EU will rest primarily on disclosure-of-evidence mechanisms and a set of narrowly defined presumptions concerning fault, defectiveness and causality. Second, the article therefore makes suggestions for amendments to the proposed AI liability framework, collected in a concise Annex at the end of the paper. I argue, inter alia, that the dichotomy between the fault-based AILD Proposal and the supposedly strict-liability PLD Proposal is fictional and should be abandoned; that an EU framework for AI liability should comprise one fully harmonizing regulation instead of two insufficiently coordinated directives; and that the current proposals unjustifiably collapse fundamental distinctions between social and individual risk by equating high-risk AI systems under the AI Act with those under the liability framework.
Third, based on an analysis of the key risks AI poses, the final part of the paper maps out a road for the future of AI liability and regulation, in the EU and beyond. More specifically, I make four key proposals. Effective compensation should be ensured by combining truly strict liability for certain high-risk AI systems with general presumptions of defectiveness, fault and causality in cases involving SMEs or non-high-risk AI systems. The paper introduces a novel distinction between illegitimate-harm and legitimate-harm models to delineate the scope of strict liability. Truly strict liability should be reserved for high-risk AI systems that, from a social perspective, should not cause harm (illegitimate-harm models, e.g., autonomous vehicles or medical AI). Models meant to cause some unavoidable harm by ranking and rejecting individuals (legitimate-harm models, e.g., credit scoring or insurance scoring) should only face rebuttable presumptions of defectiveness and causality. General-purpose AI systems should be subjected to high-risk regulation, including liability for high-risk AI systems, only in the specific high-risk use cases for which they are deployed. Consumers, in general, ought to be liable based on regular fault. Furthermore, innovation and legal certainty should be fostered through a comprehensive regime of safe harbors, defined quantitatively to the best extent possible.
Moreover, trustworthy AI remains an important goal for AI regulation. Hence, the liability framework must specifically extend to non-discrimination cases and provide clear rules concerning explainability (XAI). Finally, awareness of the climate effects of AI, and of digital technology more broadly, is rapidly growing in computer science. In diametrical opposition to this shift in discourse and understanding, however, EU legislators thoroughly neglect environmental sustainability in both the AI Act and the proposed liability regime. To counter this, I propose to jump-start sustainable AI regulation via sustainability impact assessments in the AI Act and sustainable design defects in the liability regime. In this way, the law may help spur not only fair AI and XAI, but potentially also sustainable AI (SAI).
Diritto Penale Contemporaneo - Rivista Trimestrale, 2023
Drawing on European interventions in the field of Artificial Intelligence (in particular the Proposal for a Regulation of April 2021 (AI Act)), the article reflects on the apportionment of responsibility between the manufacturer and the user of AI systems when a negligence offence occurs due to an error of the algorithm, defined here as "artificial negligence". It is argued that the manufacturer's liability could be assessed either against rules established by written norms (a case of "specific negligence") or against the reasonable man standard (a case of "generic negligence"). To this end, a notion of "artificial diligence" is introduced, since it is argued that the reasonable-manufacturer parameter should be modelled on a product that both complies with the characteristics specified by law and is safe. Turning to the hypothesis of artificial negligence, a distinction between errors that are ex ante foreseeable and those that are unforeseeable is offered in order to address the manufacturer's liability and to define the scope of his duty of care. As far as the user is concerned, the duties of information, of vigilance and of intervention are investigated. The article concludes that compliance with the duty of "human oversight" should be ascertained in concreto when assessing negligence, in order to evaluate whether to exclude culpability, or even the objective dimension of negligence, according to the reasonable man standard.