The Rise of the Robots and the Crisis of Moral Patiency
Related papers
Just an Artifact: Why Machines Are Perceived as Moral Agents
2011
How obliged can we be to AI, and how much danger does it pose to us? A surprising proportion of our society holds exaggerated fears or hopes for AI, such as the fear of robot world conquest or the hope that AI will indefinitely perpetuate our culture. These misapprehensions are symptomatic of a larger problem: a confusion about the nature and origins of ethics and its role in society.
Robots and the Limits of Morality
In this chapter, I ask whether we can coherently conceive of robots as moral agents and as moral patients. I answer both questions negatively but conditionally: for as long as robots lack certain features, they can be neither moral agents nor moral patients. These answers, of course, are not new. They have, however, recently been the object of sustained critical attention (Coeckelbergh 2014; Gunkel 2014). The novelty of this contribution, then, resides in arriving at these answers by way of arguments that avoid those recent challenges. This is achieved by considering the psychological and biological bases of moral practices and arguing that the relevant differences in those bases are sufficient, for the time being, to exclude robots from adopting both an active and a passive moral role.
Moral Status and Intelligent Robots
The Southern Journal of Philosophy, 2022
The great technological achievements of the recent past in artificial intelligence (AI), robotics, and computer science make it very likely, according to many experts in the field, that we will see the advent of intelligent and autonomous robots that either match or supersede human capabilities in the medium term (within the next 50 years) or the long term (within the next 100-300 years). Accordingly, this article has two main goals. First, we discuss some of the problems related to ascribing moral status to intelligent robots, and we examine three philosophical approaches (the Kantian approach, the relational approach, and the indirect duties approach) that are currently used in machine ethics to determine the moral status of intelligent robots. Second, we seek to raise broader awareness among moral philosophers of the important debates in machine ethics that will eventually affect how we conceive of key concepts and approaches in ethics and moral philosophy. The effects of intelligent and autonomous robots on our traditional ethical and moral theories and concepts will be substantial and will force us to revise and reconsider many established understandings. Therefore, it is essential to turn attention to debates over machine ethics now so that we can be better prepared to respond to the opportunities and challenges of the future.
New Challenges for Ethics: The Social Impact of Posthumanism, Robots, and Artificial Intelligence
Journal of Healthcare Engineering, 2021
The ethical approach to science and technology has focused on their use and application in extremely diverse fields. Less prominence has been given to the profound changes in our conception of human nature produced by the most recent developments in artificial intelligence and robotics, owing to their capacity to simulate a growing number of human activities traditionally attributed to man as manifestations of the higher spiritual dimension inherent in his nature. Hence, a kind of contrast between nature and artificiality has ensued, in which conformity with nature is presented as a criterion of morality and the artificial is legitimized only as an aid to nature. On the contrary, this essay maintains that artificiality is precisely the specific expression of human nature, one which has in fact made a powerful contribution to the progress of man. However, science and technology do not offer criteria to guide the practical and conceptual use of their own contents simply because...
Behind the mask: machine morality
Journal of Experimental & Theoretical Artificial Intelligence, 2014
Contents:
Joel Parthemore and Blay Whitby - Moral Agency, Moral Responsibility, and Artefacts
Joanna Bryson - Patiency Is Not a Virtue: Suggestions for Co-Constructing an Ethical Framework Including Intelligent Artefacts
John Basl - Machines as Moral Patients We Shouldn't Care About (Yet)
Benjamin Matheson - Manipulation, Moral Responsibility and Machines
Alejandro Rosas - The Holy Will of Ethical Machines
Keith Miller, Marty Wolf and Frances Grodzinsky - Behind the Mask: Machine Morality
Erica Neely - Machines and the Moral Community
Mark Coeckelbergh - Who Cares about Robots?
David J. Gunkel - A Vindication of the Rights of Machines
Steve Torrance - The Centrality of Machine Consciousness to Machine Ethics
Rodger Kibble - Can an Unmanned Drone Be a Moral Agent?
Marc Champagne and Ryan Tonkens - Bridging the Responsibility Gap in Automated Warfare
Johnny Søraker - Is There Continuity Between Man and Machine?
The Machine Question: Critical Perspectives on AI, Robots and Ethics
2012
One of the enduring concerns of moral philosophy is deciding who or what is deserving of ethical consideration. Much recent attention has been devoted to the "animal question": consideration of the moral status of nonhuman animals. In this book, David Gunkel takes up the "machine question": whether and to what extent intelligent and autonomous machines of our own making can be considered to have legitimate moral responsibilities and any legitimate claim to moral consideration. The machine question poses a fundamental challenge to moral thinking, questioning the traditional philosophical conceptualization of technology as a tool or instrument to be used by human agents. Gunkel begins by addressing the question of machine moral agency: whether a machine might be considered a legitimate moral agent that could be held responsible for decisions and actions. He then approaches the machine question from the other side, considering whether a machine might be a moral patient due legitimate moral consideration. Finally, Gunkel considers some recent innovations in moral philosophy and critical theory that complicate the machine question, deconstructing the binary agent–patient opposition itself. Technological advances may prompt us to wonder if the science fiction of computers and robots whose actions affect their human companions (think of HAL in 2001: A Space Odyssey) could become science fact. Gunkel's argument promises to influence future considerations of ethics, ourselves, and the other entities who inhabit this world.
Frontiers in Robotics and AI, 2021
Moral status can be understood along two dimensions: moral agency (capacities to be and do good or bad) and moral patiency (the extent to which entities are objects of moral concern), where the latter especially has implications for how humans accept or reject machine agents into human social spheres. As there is currently limited understanding of how people innately understand and imagine the moral patiency of social robots, this study inductively explores key themes in how robots may be subject to humans' (im)moral action across 12 valenced foundations in the moral matrix: care/harm, fairness/unfairness, loyalty/betrayal, authority/subversion, purity/degradation, liberty/oppression. Findings indicate that people can imagine clear dynamics by which anthropomorphic, zoomorphic, and mechanomorphic robots may benefit and suffer at the hands of humans (e.g., affirmations of personhood, compromising bodily integrity, veneration as gods, corruption by physical or information interventions). Patterns across the matrix are interpreted to suggest that moral patiency may be a function of whether people diminish or uphold the ontological boundary between humans and machines, though even moral upholdings bear notes of utilitarianism.
Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism
Science and Engineering Ethics, 2019
Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory, 'ethical behaviourism', which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position on board, it is argued that the performative threshold that robots need to cross in order to be afforded significant moral status may not be that high, and that they may soon cross it (if they haven't done so already). Finally, the implications of this for our procreative duties to robots are considered, and it is argued that we may need to take seriously a duty of 'procreative beneficence' towards robots.
The possibilities of machine morality
2023
This thesis shows morality to be broader and more diverse than its human instantiation. It uses the idea of machine morality to argue for this position. Specifically, it contrasts the possibilities open to humans with those open to machines to meaningfully engage with the moral domain. This contrast identifies distinctive characteristics of human morality, which are not fundamental to morality itself, but constrain our thinking about morality and its possibilities. It also highlights the inherent potential of machine morality to be radically different from its human counterpart and the implications this has for the moral significance of machines.

My argument is particularly focussed on moral theory, which is the study of the observable and hypothetical conceptual structures of morality. By identifying structures that are recognisably moral in nature but which sit outside the boundaries of human realisation, we have tangible proof that a meaningful distinction exists between human morality and the wider moral domain. This is achieved by showing that certain essentially human limits restrict the conceptual possibilities open to human realisation. The tight coupling between these limits and the existing conceptual structures of human morality also explains why it is unjustifiable to assume that the same structures would be suitable for machines. Machines do not share these limits with us, which leads me to conclude that many conceptual structures are quite distinctive to human morality and that the structures of machine morality would be significantly different.

Four examples illustrate these conclusions concretely. The first, supererogation, is an example of a moral concept that doesn't easily extend to machines. Human limits dictate what it is reasonable to expect from one another and restrict our ability to pursue aspirational moral goals. I show that machine supererogation, if it is at all possible, would require a very different justificatory basis to be coherent. The second, agency, is an example of a concept whose structures extend beyond the bounds of human realisation. The greater flexibility of artificial identity allows machines to experiment with novel forms of inter- and intra-agency. In comparison, human agency structures are limited by their tight coupling with human conceptions of identity. The third, moral aspiration, is a concept with a distinctive function in human morality. Certain aspirational ends are peculiar in that they are obviously unrealisable and even undesirable, yet their pursuit is instrumentally justifiable. This justification depends on cognitive limits that aren't shared by machines, which leads me to conclude that the role of moral aspiration in machine morality, if there is any, would necessarily differ from its human counterpart. The fourth, moral responsibility, is an example of a concept whose existing practices don't translate over to machines. We don't understand machine agency well enough to be able to judge a machine's culpability or effectively blame it. Consequently, I suggest that a responsibility conception prioritising understanding over blame is a more promising avenue for a shared conception suitable for both humans and machines.

This thesis does not speculate about the existence of moral machines. That remains an open question, and one that is largely irrelevant for my conclusions, as the idea alone is enough to advance our thinking. It does this by helping us identify the boundaries of human morality and then to think beyond them by recognising the possibilities of machine morality.
Can robots be responsible moral agents? And why should we care?
Connection Science
Principle: "Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy." This principle highlights the need for humans to accept responsibility for robot behaviour, and in that respect it is commendable. However, it raises further questions about legal and moral responsibility. The issues considered here are (i) the reasons for assuming that humans, and not robots, are responsible agents; (ii) whether it is sufficient to design robots to comply with existing laws and human rights; and (iii) the implications, for robot deployment, of the assumption that robots are not morally responsible.