Why Can't We Regard Robots As People?

The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights

AI and Ethics, 2023

Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or even animal-like robots could condition our treatment of humans: treat these robots well, as we would treat humans, or else risk eroding good moral behavior toward humans. But then, this argument also seems to justify giving rights to robots, even if robots lack intrinsic moral status. In recent years, however, this indirect argument in support of robot rights has drawn a number of objections. In this paper I have three goals. First, I will formulate and explicate the Kant-inspired indirect argument meant to support robot rights, making clearer than before its empirical commitments and philosophical presuppositions. Second, I will defend the argument against a number of objections. The result is the fullest explication and defense to date of this well-known and influential but often criticized argument. Third, however, I myself will raise a new concern about the argument's use as a justification for robot rights. This concern is answerable to some extent, but it cannot be dismissed fully. It shows that, surprisingly, the argument's advocates have reason to resist, at least somewhat, producing the sorts of robots that, on their view, ought to receive rights.

Between Angels and Animals: The Question of Robot Ethics, or Is Kantian Moral Agency Desirable?

faculty.evansville.edu

In this paper, I examine a variety of agents that appear in Kantian ethics in order to determine which would be necessary to make a robot a genuine moral agent. However, building such an agent would require that we structure into a robot’s behavioral repertoire the possibility for immoral behavior, for only then can the moral law, according to Kant, manifest itself as an ought, a prerequisite for being able to hold an agent morally accountable for its actions. Since building a moral robot requires the possibility of immoral behavior, I go on to argue that we cannot morally want robots to be genuine moral agents, but only beings that simulate moral behavior. Finally, I raise but do not answer the question of why, if morality requires us to want robots that are not genuine moral agents, we should want something different in the case of human beings.

MORAL STATUS AND INTELLIGENT ROBOTS

The Southern Journal of Philosophy, 2022

The great technological achievements in the recent past regarding artificial intelligence (AI), robotics, and computer science make it very likely, according to many experts in the field, that we will see the advent of intelligent and autonomous robots that either match or supersede human capabilities in the mid-term (within the next 50 years) or long term (within the next 100-300 years). Accordingly, this article has two main goals. First, we discuss some of the problems related to ascribing moral status to intelligent robots, and we examine three philosophical approaches currently used in machine ethics to determine the moral status of intelligent robots: the Kantian approach, the relational approach, and the indirect duties approach. Second, we seek to raise broader awareness among moral philosophers of the important debates in machine ethics that will eventually affect how we conceive of key concepts and approaches in ethics and moral philosophy. The effects of intelligent and autonomous robots on our traditional ethical and moral theories and concepts will be substantial and will force us to revise and reconsider many established understandings. Therefore, it is essential to turn attention to debates over machine ethics now so that we can be better prepared to respond to the opportunities and challenges of the future.

Robots and the Limits of Morality

In this chapter, I ask whether we can coherently conceive of robots as moral agents and as moral patients. I answer both questions negatively but conditionally: for as long as robots lack certain features, they can be neither moral agents nor moral patients. These answers, of course, are not new. They have, however, recently been the object of sustained critical attention (Coeckelbergh 2014; Gunkel 2014). The novelty of this contribution, then, resides in arriving at these precise answers by way of arguments that avoid these recent challenges. This is achieved by considering the psychological and biological bases of moral practices and arguing that the relevant differences in such bases are sufficient, for the time being, to exclude robots from adopting both an active and a passive moral role.

Can a Robot Be a Person? De-Facing Personhood and Finding It Again with Lévinas

Journal of Moral Theology

The question “Can a robot be a person?” has emerged of late in the field of bioethics. The paper addresses the question in dialogue with Emmanuel Levinas. It begins with something like an archeological reconstruction of personhood in modernity, in order to locate the context out of which the question posed, “can a robot be a person?” might take on meaning. Descartes, Hume and Kant are the most important exponents of the story, their position emerging in direct contradiction with the classical metaphysics of the person, such as one finds in Thomas Aquinas. Levinas rejects the rationalist perspective of a bodiless mind, a person reduced to her cognitive capacities, no less than the empirical version of a mindless body, both understandings of personhood being de facto prevalent in contemporary bioethics, especially in the Anglo-American version of it. On the other hand, as Levinas suggests, to be a person is to be “manifested in the exteriority of the face, which is not the disclosure ...

Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism

Science and Engineering Ethics, 2019

Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory, 'ethical behaviourism', which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high and that they may soon cross it (if they haven't done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of 'procreative beneficence' towards robots.

The layers of being and the questions of robot ethics

Információs Társadalom, 2018

The paper seeks to analyze the new ethical dilemmas that arise in the social contexts of the robot world. It is based on the theoretical foundation of the ontology of Nicolai Hartmann, which finds the place of ever-increasing artificial intelligence in reality among the layers of being. From this starting point, it examines the summative studies of robotics analysis already developed in English and considers the corrections that the theory of four-layered human existence requires in comparison with the analyses made so far. Human existence and the life of human communities are based on the cumulative regularities of the layers of being that are built upon each other through evolution, according to the theses of Nicolai Hartmann's ontology (Hartmann, 1962). The accelerated development and increasing use of artificial intelligence (AI) in recent years directly affects the top layer of the four (physical, biological, spiritual, and intellectual) layers of being, increasing its strength to the detriment of the lower ones. And if artificial intelligence later develops to the point of breaking away from human control and gaining independence, it can be perceived as an evolutionarily created new layer of being. Unlike the three previous evolutionary leaps, however, it would not require all the lower layers of being: taking into account the robots that are the physical incarnations of AI today, AI needs only the physical layer of being (Pokol, 2017). Against this theoretical backdrop, the analyses in this study seek to explore the emerging moral and related legal dilemmas within the mechanisms of contemporary societies that are increasingly permeated by artificial intelligence, while at the same time considering the extent to which the analytical framework changes when the multi-layered nature of human lives, and thus of society, is constantly kept in mind.

New Challenges for Ethics: The Social Impact of Posthumanism, Robots, and Artificial Intelligence

Journal of Healthcare Engineering, 2021

The ethical approach to science and technology is based on their use and application in extremely diverse fields. Less prominence has been given to the theme of the profound changes in our conception of human nature produced by the most recent developments in artificial intelligence and robotics due to their capacity to simulate an increasing number of human activities traditionally attributed to man as manifestations of the higher spiritual dimension inherent in his nature. Hence, a kind of contrast between nature and artificiality has ensued in which conformity with nature is presented as a criterion of morality and the artificial is legitimized only as an aid to nature. On the contrary, this essay maintains that artificiality is precisely the specific expression of human nature which has, in fact, made a powerful contribution to the progress of man. However, science and technology do not offer criteria to guide the practical and conceptual use of their own contents simply because...

Can Social Robots Qualify for Moral Consideration? Reframing the Question about Robot Rights

Information

A controversial question that has been hotly debated in the emerging field of robot ethics is whether robots should be granted rights. Yet, a review of the recent literature in that field suggests that this seemingly straightforward question is far from clear and unambiguous. For example, those who favor granting rights to robots have not always been clear as to which kinds of robots should (or should not) be eligible; nor have they been consistent with regard to which kinds of rights (civil, legal, moral, etc.) should be granted to qualifying robots. Also, there has been considerable disagreement about which essential criterion, or cluster of criteria, a robot would need to satisfy to be eligible for rights, and there is ongoing disagreement as to whether a robot must satisfy the conditions for (moral) agency to qualify either for rights or (at least some level of) moral consideration. One aim of this paper is to show how the current debate about whether to grant rights to robots would benefit from an analysis and clarification of some key concepts and assumptions underlying that question. My principal objective, however, is to show why we should reframe that question by asking instead whether some kinds of social robots qualify for moral consideration as moral patients. In arguing that the answer to this question is "yes," I draw from some insights in the writings of Hans Jonas to defend my position.