The Extension of Legal Personhood in Artificial Intelligence

Robot as Legal Person: Electronic Personhood in Robotics and Artificial Intelligence

Front. Robot. AI, 2021

This paper investigates the proposal to create a legal (electronic) personhood for robots with artificial intelligence, based on the European Parliament resolution with recommendations on Civil Law Rules on Robotics. To this end, we highlight the various risks and problems present in this type of initiative, especially in view of the current trend of expanding legal subjectivity in various jurisdictions. In addition to anthropomorphic rhetoric, we observe the prevalence of a pragmatic line guided mainly by the model of corporations, without, however, taking into account the problems present in the process of granting personhood to companies and the particular function of the term "legal person" in the grammar of law.

Legal Personhood for Artificial Intelligence (Preprint)

AI & Society, 2019

The concept of artificial intelligence is not new, nor is the notion that it should be granted legal protections given its influence on human activity. What is new, on a relative scale, is the notion that artificial intelligence can possess citizenship: a concept reserved for humans, as it presupposes the idea of possessing civil duties and protections. Given that there are several decades' worth of writing on the legal status of computational artificial artefacts in the USA and elsewhere, it is surprising that lawmakers internationally have come to a standstill in protecting our silicon brainchildren. In this essay, it will be assumed that future artificial entities, such as Sophia the Robot, will be granted citizenship on an international scale. With this assumption, an analysis of rights will be made with respect to the needs of a non-biological intelligence possessing legal and civic duties akin to those possessed by humanity today. This essay does not present a full set of rights for artificial intelligence; instead, it aims to provide international jurisprudential evidence aliunde ab extra de lege lata for any future measures made to protect non-biological intelligence.

From Artificial Intelligence to Artificial Consciousness: Possible Legal Bases for the Human-Robot Relationships in the Future

2019

In the future, robots will increasingly resemble human beings and people will engage in social interaction with them. Accordingly, this paper aims to pave the way for analysing the research problem of the probable future legal status of artificial intelligence in the case of social robots. The article discusses the differences between artificial intelligence and artificial consciousness: because AI poses societal challenges and is currently undergoing a number of important developments, the law must change rapidly. First, an attempt is made to demystify the difference between artificial intelligence and artificial consciousness. Subsequently, the current legal status of artificial intelligence in the EU is analysed, with particular emphasis on case law in matters of intellectual property. Possible future scenarios are also discussed. The starting point of the research was source queries and literature studies aimed at jointly defining the competence profiles of robot-human relations and the key challenges of cybersecurity 4.0. Next, the most important EU legal and political programming documents were analysed and assessed in terms of their vision of society 4.0. A decision-making method was then used to assess the impact of the particular instruments applied by the Union, within the framework of its cyberspace-protection policy, on the phenomenon of robot-human relations. In connection with this, two basic questions arise: first, in what direction should contemporary policy for combating cyber-terrorism be aimed in institutional and legal matters; and second, to what extent can a well-guided cyber-security policy influence the security of robot-human relations?

Sustainability of Artificial Intelligence: Reconciling Human Rights with Legal Rights of Robots

With the advancement of artificial intelligence and humanoid robotics, and an ongoing debate between human rights and the rule of law, moral philosophers and legal and political scientists face difficulty answering questions such as: "Do humanoid robots have the same rights as humans, and are these rights superior to human rights or not, and why?" This paper argues that the sustainability of human rights will come under question because, in the near future, scientists (arguably the most rational people) will be the biggest critics of human rights. Conversely, to make artificial intelligence sustainable, it is very important to reconcile it with human rights. Above all, there is a need to find a consensus between human rights and robot rights within the framework of our established legal systems.

Robot rights: at what point should an intelligent machine be considered a ‘person’?

2017

These are some of the issues being discussed by the European Parliament's Committee on Legal Affairs. Last year it released a draft report and motion calling for a set of civil law rules on robotics regulating their manufacture, use, autonomy and impact upon society. Of the legal solutions proposed, perhaps the most interesting was the suggestion of creating a legal status of "electronic persons" for the most sophisticated robots.

Editorial: Should Robots Have Standing? The Moral and Legal Status of Social Robots

Frontiers in Robotics and AI, 2022

In a proposal issued by the European Parliament (Delvaux, 2016) it was suggested that robots might need to be considered "electronic persons" for the purposes of social and legal integration. The very idea sparked controversy, and it has been met with both enthusiasm and resistance. Underlying this disagreement, however, is an important moral/legal question: When (if ever) would it be necessary for robots, AI, or other socially interactive, autonomous systems to be provided with some level of moral and/or legal standing? This question is important and timely because it asks about the way that robots will be incorporated into existing social organizations and systems. Typically, technological objects, no matter how simple or sophisticated, are considered to be tools or instruments of human decision making and action. This instrumentalist definition (Heidegger, 1977; Feenberg, 1991; Johnson, 2006) not only has the weight of tradition behind it; it has so far proved to be a useful method for responding to and making sense of innovation in artificial intelligence and robotics. Social robots, however, appear to confront this standard operating procedure with new and unanticipated opportunities and challenges. Following the predictions developed in the computers-as-social-actors studies and the media equation (Reeves and Nass, 1996), users respond to these technological objects as if they were another socially situated entity. Social robots, therefore, appear to be more than just tools, occupying positions where we respond to them as another socially significant Other. This Research Topic of Frontiers in Robotics and AI seeks to make sense of the social significance and consequences of technologies that have been deliberately designed and deployed for social presence and interaction. The question that frames the issue is "Should robots have standing?"
This question is derived from an agenda-setting publication in environmental law and ethics written by Christopher Stone, Should Trees Have Standing? Toward Legal Rights for Natural Objects (1974). In extending this mode of inquiry to social robots, contributions to this Research Topic of the journal will 1) debate whether and to what extent robots can or should have moral status and/or legal standing, 2) evaluate the benefits and the costs of recognizing social status, when it involves technological objects and artifacts, and 3) respond to and provide guidance for developing an intelligent and informed plan for the responsible integration of social robots. In order to address these matters, we have assembled a team of fifteen researchers from across the globe and from different disciplines, who bring to this conversation a wide range of viewpoints and methods of investigation. These contributions can be grouped and organized under the following four subject areas:

Legal personhood for the integration of AI systems in the social context: a study hypothesis

AI & SOCIETY, 2022

In this paper, I shall set out the pros and cons of conferring legal personhood on artificial intelligence systems (AIs) under civil law. More specifically, I will provide arguments supporting a functionalist justification for conferring personhood on AIs, and I will try to identify what content this legal status might have from a regulatory perspective. Being a person in law implies the entitlement to one or more legal positions. I will mainly focus on liability, as it is one of the main grounds for the attribution of legal personhood, as it is for collective legal entities. A better distribution of the responsibilities resulting from unpredictably illegal and/or harmful behaviour may be one of the main reasons to justify the attribution of personhood to AI systems as well. This means an efficient allocation of the risks and social costs associated with the use of AIs, ensuring the protection of victims, incentives for production, and technological innovation. However, the paper also considers other legal positions triggered by personhood in addition to responsibility: specific competencies and powers such as, for example, financial autonomy, the ability to hold property, make contracts, and sue (and be sued).

The layers of being and the questions of robot ethics

Információs Társadalom, 2018

The paper seeks to analyze the new ethical dilemmas that arise in the social contexts of the robot world. It is based on the theoretical foundation of Nicolai Hartmann's ontology, which locates ever-increasing artificial intelligence within reality among the layers of being. From this starting point, it examines the summative studies of robot ethics already developed in English and considers the corrections that the theory of four-layered human existence requires in comparison with the analyses so far. Human existence and the life of human communities are based on the cumulative regularities of the layers of being that are built upon one another through evolution, according to the theses of Nicolai Hartmann's ontology (Hartmann, 1962). The accelerated development and increasing use of artificial intelligence (AI) in recent years directly affects the top of the four layers of being (physical, biological, spiritual and intellectual), increasing its strength to the detriment of the lower ones. And if artificial intelligence later breaks away from human control and gains independence, it can be perceived as an evolutionarily created new layer of being. Unlike the three previous evolutionary leaps, however, it would not require all the lower layers of being: taking into account the robots that are the physical incarnations of AI today, AI needs only the physical layer of being (Pokol, 2017). Against this theoretical backdrop, the analyses in this study seek to explore the emerging moral and related legal dilemmas within the mechanisms of contemporary societies that are increasingly permeated by artificial intelligence, while considering the extent to which the analytical framework changes when the multi-layered nature of human lives, and thus of society, is constantly kept in mind.

Robot Companions as Case-Scenario for Assessing the “Subjectivity” of Autonomous Agents. Some Philosophical and Legal Remarks

2012

In this paper, the European flagship project proposal Robot Companions for Citizens (RCC), grounded on the idea of developing robot companions for citizens, is taken as a case scenario for investigating the feasibility of ascribing rights and duties to autonomous robots from a legal and philosophical standpoint. In talking about rights and duties with respect to robots endowed with autonomous decision capabilities, one should face the implications that these terms inevitably raise, especially in the field of law. The paper points out the technological problems related to applying the notion of duty to robots and the problems deriving from attributing legal subjectivity to nonhuman entities such as robots.

Legal Person- or Agenthood of Artificial Intelligence Technologies

Acta Baltica Historiae et Philosophiae Scientiarum

Artificial intelligence (AI) is developing rapidly. There are technologies available that fulfil several tasks better than humans can and even behave like humans to some extent. Thus, the situation prompts the question of whether AI should be granted legal person- and/or agenthood. There have been similar situations in history where the legal status of slaves or indigenous peoples was discussed. Still, in those historical cases, the subjects under study were always natural persons, i.e., living beings belonging to the species Homo sapiens. We analyse the situation from moral-ethical and practical perspectives. The final conclusion is that currently existing AIs are still so far removed from humans that there is simply no need to think seriously about legal person- or agenthood. Doing so would mean imposing obligations that the AI must follow. This, in turn, would mean that certain rights in relation to those obligations would have to be granted as well. By all evidence, t...