Artificial intelligence and civil liability—do we need a new regime?

In support of “no-fault” civil liability rules for artificial intelligence

SN Social Sciences

Civil liability is traditionally understood as a form of indirect market regulation, since the risk of incurring liability for damages creates incentives to invest in safety. Such an approach, however, is inappropriate in the markets for artificial intelligence devices. Under the current paradigm of civil liability, compensation is available only to the extent that "someone" is identified as a debtor. In many cases, however, it would not be useful to impose the obligation to pay such compensation on producers and programmers: algorithms can "behave" largely independently of the instructions initially provided by their programmers, so that they may err despite there being no flaw in design or implementation. Applying "traditional" civil liability to AI may therefore act as a disincentive to new technologies based on artificial intelligence. This is why I think artificial intelligence requires that the law evolve, on this matter, from an issue of civil liability into one of financial management of losses. No-fault redress schemes could be an interesting and worthwhile regulatory strategy to enable this evolution. Of course, such schemes should apply only in cases where there is no evidence that producers and programmers acted negligently, imprudently or without due skill, and where their activity adequately complies with scientifically validated standards.

The Inefficiency of Existing Laws in Addressing Damages Caused by Artificial Intelligence

The emergence and rapid progress of artificial intelligence has confronted legal science with formidable challenges. Artificial intelligence systems, like other new technologies, pose serious challenges to the principle of accountability and to the legal rules on civil liability (compensation for damages caused by artificial intelligence systems). This issue is important because it underpins the confidence of potential victims of these systems and trust in the artificial intelligence industry. Faced with changes in smart technology, courts struggle to apply traditional laws that are unable to respond, and regulatory organizations and legislators must recognize that current laws are not adequate for monitoring artificial intelligence and enforcing legal responsibilities. They need to consider enacting new, special legislation. A central question that concerns legislators in all legal systems is whether artificial intelligence should be considered a legal entity, and whether artificial intelligence can be tried before the courts, a question that has not yet been answered. This article, while reviewing the nature and elements of artificial intelligence, which is necessary background for lawyers, examines the various challenges that the science of law faces in the field of artificial intelligence and the ineffectiveness of the laws governing damages caused by artificial intelligence. The conclusion is that the rules of law need to be revised to deal with the responsibilities arising from artificial intelligence.

Civil and Criminal Liability in Cases of Artificial Intelligence Failure

Artificial Intelligence has started to become an important part of our day-to-day life, and in the near future, Artificial Intelligence (AI) based technology is going to be introduced in the country, especially in the form of self-driving cars by Tesla and its rivals. These technologies are capable of performing various autonomous tasks, including but not limited to interactions with human beings. However, the use of AI-based technologies may give rise to disputes in which one party may be the artificial intelligence itself, and to deal with such situations a proper regulatory framework is needed for the adjudication of such disputes. This paper attempts to analyze the methods by which other countries are dealing with the problem while striking a balance between protecting the rights of the victim and the interests of manufacturers and programmers of AI. This paper further focuses on the factors that need to be ascertained when deciding liability in accidents caused by Artificial Intelligence, whether through the commission of an offence in criminal matters or a breach of duty in civil matters.

Keywords: Artificial Intelligence (AI), Legal Personality, Civil Liability, Criminal Liability, Tort negligence, Regulation of AI.

Adjudicating cases of Negligence in Artificial Intelligence

Jayashree S Shet, 2020

The modern technological world is working towards exploring new dimensions in the field of Artificial Intelligence (AI). It is well known that AI is developing at a fast pace and, in all likelihood, will take over from humans in various sectors of society. Such developments must be deliberated upon and require modification of the existing legislative framework in India. The main aim of this article is to address the uncertainty with respect to the personhood of AI and to assess the intelligence of such a non-human entity. Although such inventions carry their own set of menaces, it is imperative that such developments continue, as they contribute to the progress of the country. A serious challenge with respect to the imposition of liability on AI arises in cases of negligence. This article deals with two ways in which such liability can be attributed: first, by ascribing legal personality to AI systems and making them accountable for their own acts; second, by transferring liability to their users or manufacturers. In the course of this article we examine whether such AI systems can satisfy the essential requirements of personhood, i.e., possessing rights and duties. This raises the issue of the standard of care to be attributed to them in cases of negligence. That standard of care should be determined based on the degree of risk to which others are exposed and the aim to be attained by such a perilous activity. This article delves into the comprehensive theory of criminal law with respect to negligence, considers, in addition to the AI, the liability of the programmer, the user, and other entities, and further deliberates upon the punishments that can be attributed to them. Hence, it is important to ensure that such AI technology functions within the boundaries of the law.

Liability for harm caused by AI in healthcare: an overview of the core legal concepts

Frontiers in Pharmacology, 2023

The integration of artificial intelligence (AI) into healthcare in Africa presents transformative opportunities but also raises profound legal challenges, especially concerning liability. As AI becomes more autonomous, determining who or what is responsible when things go wrong becomes ambiguous. This article aims to review the legal concepts relevant to the issue of liability for harm caused by AI in healthcare. While some suggest attributing legal personhood to AI as a potential solution, the feasibility of this remains controversial. The principal-agent relationship, where the physician is held responsible for AI decisions, risks reducing the adoption of AI tools due to potential liabilities. Similarly, using product law to establish liability is problematic because of the dynamic learning nature of AI, which deviates from static products. This fluidity complicates traditional definitions of product defects and, by extension, where responsibility lies. Exploring alternatives, risk-based determinations of liability, which focus on potential hazards rather than on specific fault assignments, emerge as a potential pathway. However, these, too, present challenges in assigning accountability. Strict liability has been proposed as another avenue. It can simplify the compensation process for victims by focusing on the harm rather than on the fault. Yet, concerns arise over the economic impact on stakeholders, the potential for unjust reputational damage, and the feasibility of a global application. Instead of approaches based on liability, reconciliation holds much promise to facilitate regulatory sandboxes. In conclusion, while the integration of AI systems into healthcare holds vast potential, it necessitates a re-evaluation of our legal frameworks. The central challenge is how to adapt traditional concepts of liability to the novel and unpredictable nature of AI, or to move away from liability towards reconciliation.
Future discussions and research must navigate these complex waters and seek solutions that ensure both progress and protection.

Civil Liability and Artificial Intelligence: Who is Responsible for Damages Caused by Autonomous Intelligent Systems?

2020

Abstract: The article analyzes questions related to civil liability in cases where damage is caused by systems equipped with artificial intelligence. To this end, the study considers the possibility of holding the autonomous system itself responsible for the damage, as well as the essential requirements for the analysis of civil liability in such cases. In addition, the article seeks to understand how the exclusions of civil liability operate in the situation described, and offers some consideration of Bills of Law no 5.051/2019 and no 5.691/2019, in progress in the National Congress, which deal with the principles for the use of autonomous intelligence, as well as incentives for the development of new technologies in Brazilian territory. The study emphasizes that the rules of responsibility need to strike a balance between protecting citizens from possible damage arising from activities carried out by an arti...

Civil liability applicable to artificial intelligence: a preliminary critique of the European Parliament Resolution of 2020

Civil liability applicable to artificial intelligence: a preliminary critique of the European Parliament Resolution of 2020, 2020

On 20 October 2020, the European Parliament approved a Resolution with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)) (the 2020 Resolution). The Resolution highlighted the importance of defining a clear and harmonised civil liability regime in Europe for the development of artificial intelligence technologies and the products and services that benefit from them, so as to provide due legal certainty for producers, operators, affected persons and other third parties. The same motivation had previously led the European Parliament to put forward a series of proposals on the subject of liability in its 2017 Resolution (European Parliament Resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). There are some important differences between the two documents. These differences reflect the deeper analysis that the European Commission, in particular, has been engaged in with regard to civil liability for harm attributable to artificial intelligence systems. On this subject, we may note, finally, the "Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics" (COM(2020) 64 final - Report), which accompanies the European Commission White Paper on Artificial Intelligence – COM(2020) 65 final, of 19 February 2020 (White Paper). One important element underpinning the 2020 Resolution is the distinction made between two different regimes for establishing civil liability for harm attributed to artificial intelligence systems: strict liability for high-risk situations, and subjective liability with a presumption of fault in other situations.
The aim of these brief notes is, on the one hand, to provide a critical assessment of how far this option corresponds to the assumptions on which the Resolution is based, also taking a brief look at the concept of wrongdoing proposed and the scope of compensable harm, and, on the other hand, to assess the adequacy of the distinction indicated in the light of the choices made in the 2017 Resolution and, above all, in the light of the challenges that the development of artificial intelligence raises for the traditional foundations on which compensation regimes have been based up to the present.

AI and Criminal Liability

Indian Journal of Artificial Intelligence and Law, Volume 1, Issue 1 (2020)

Artificial Intelligence is, in essence, the study of how to make a system that can think, behave and act as well as, or better than, a human being. It addresses the issues of making AI wiser than humans and of ensuring that such advanced intelligence is used for good rather than ill. In the field of criminal law, the ultimate concerns for Artificial Intelligence are whether autonomous vehicles, drones and robots should be given the status of an electronic person; whether a robot should be considered a legal person, like a corporation, able to sue and be sued (as with the citizenship granted to Sophia in Saudi Arabia); or whether it would be considered an individual person within the purview of the law. The possibility of creating thinking machines raises a host of criminal-law issues. Artificial Intelligence has evolved out of four basic subjects: psychology, philosophy, mathematics and linguistics, each of which plays a major role in its advancement. This paper intends to identify issues and challenges pertaining to crimes and offenders, especially whether we should consider a software programme a product or a service, just as electricity was eventually treated as a product rather than a service, and what obstacles arise in cases of negligence (rash and negligent driving), strict product liability, and vicarious liability under penal law and the law of torts, where India lacks specific legislation. The question of legal liability arises when an unmanned vehicle is involved in a car accident, a surgical system is involved in a surgical error, or a trading algorithm is involved in fraud; the question is who will be held liable for these offences.
Before we delve into the potential of Artificial Intelligence, let us take a step back to understand AI's legal issues concerning the legal liability of Artificial Intelligence systems under legal categories such as the law of torts and criminal law. Such determination is likely to become more muddled with the onset of AI, particularly given the possibility of its being accorded the status of a person in law. I will explore the criminal implications of AI and of its use. These are among the newest aspects of the law of robots, self-driving cars and drones, in contrast to traditional forms of responsibility for others' behaviour, such as that of children, employees, or pets, supplemented by new strict liability policies, mitigated through insurance models, systems authentication, and mechanisms for allocating the burden of proof. Further, this paper will critically analyze the nuances of using AI systems in the field of penal law. Finally, this paper will suggest and recommend solutions to overcome these issues and challenges, using doctrinal and qualitative research methods.

Challenges of Criminal Liability for Artificial Intelligence Systems

Exploration of AI in Contemporary Legal Systems, 2024

The idea of artificial intelligence first surfaced at the turn of the 20th century, with the goal of enabling machines to carry out tasks that resemble those performed by humans. Since then, a number of theories have been proposed regarding the extent to which mistakes made by artificial intelligence entities and systems may be held accountable, particularly in the area of criminal law. This chapter seeks to clarify this matter by discussing the legal obstacles surrounding the question of criminal responsibility for artificial intelligence's actions. It also offers concepts, justifications, and factors that address this problem, using a comparative analytical and descriptive methodology. The chapter concludes with a proposal for international cooperation to develop a legal and ethical framework for the worldwide use of artificial intelligence. Given the anticipated widespread use of this technology in the future, governments could use this framework as a reference.

Non-contractual liability applicable to artificial intelligence: towards a corrective reading of the European intervention

Luisa Antoniolli and Paola Iamiceli (eds), "The making of European Private Law: changes and challenges", University of Trento (2023 Forthcoming), 2023

The aim of this article is to demonstrate that applying the principle of subsidiarity to European regulation of compensation for damage attributable to artificial intelligence requires more than adjustments to fault-based liability: it also necessitates the creation of compensation funds for injuries caused by high-risk artificial intelligence systems. The conclusion is supported by an analysis of the relationship between the innovation principle and the precautionary principle in the regulation of artificial intelligence, and by the specific features of this emerging digital technology.