Machines and the Moral Community

The possibilities of machine morality

2023

This thesis shows morality to be broader and more diverse than its human instantiation. It uses the idea of machine morality to argue for this position. Specifically, it contrasts the possibilities open to humans with those open to machines to meaningfully engage with the moral domain. This contrast identifies distinctive characteristics of human morality, which are not fundamental to morality itself but which constrain our thinking about morality and its possibilities. It also highlights the inherent potential of machine morality to be radically different from its human counterpart, and the implications this has for the moral significance of machines. My argument is particularly focussed on moral theory, the study of the observable and hypothetical conceptual structures of morality. By identifying structures that are recognisably moral in nature but which sit outside the boundaries of human realisation, we have tangible proof that a meaningful distinction exists between human morality and the wider moral domain. This is achieved by showing that certain essentially human limits restrict the conceptual possibilities open to human realisation. The tight coupling between these limits and the existing conceptual structures of human morality also explains why it is unjustifiable to assume that the same structures would be suitable for machines. Machines do not share these limits with us, which leads me to conclude that many conceptual structures are quite distinctive to human morality and that the structures of machine morality would be significantly different.

Four examples illustrate these conclusions concretely. The first, supererogation, is an example of a moral concept that doesn't easily extend to machines. Human limits dictate what it is reasonable to expect from one another and restrict our ability to pursue aspirational moral goals. I show that machine supererogation, if it is possible at all, would require a very different justificatory basis to be coherent. The second, agency, is an example of a concept whose structures extend beyond the bounds of human realisation. The greater flexibility of artificial identity allows machines to experiment with novel forms of inter- and intra-agency. In comparison, human agency structures are limited by their tight coupling with human conceptions of identity. The third, moral aspiration, is a concept with a distinctive function in human morality. Certain aspirational ends are peculiar in that they are obviously unrealisable and even undesirable, yet their pursuit is instrumentally justifiable. This justification depends on cognitive limits that aren't shared by machines, which leads me to conclude that the role of moral aspiration in machine morality, if there is any, would necessarily differ from its human counterpart. The fourth, moral responsibility, is an example of a concept whose existing practices don't translate over to machines. We don't understand machine agency well enough to be able to judge a machine's culpability or effectively blame it. Consequently, I suggest that a responsibility conception prioritising understanding over blame is a more promising avenue for a shared conception suitable for both humans and machines.

This thesis does not speculate about the existence of moral machines. This remains an open question, and one that is largely irrelevant for my conclusions, as the idea alone is enough to advance our thinking. It does this by helping us identify the boundaries of human morality and then think beyond them by recognising the possibilities of machine morality.

Attempts to Attribute Moral Agency to Intelligent Machines Are Misguided

Machine ethics is quickly becoming an important part of artificial intelligence research. We argue that attempts to attribute moral agency to intelligent machines are misguided, whether applied to infrahuman or superhuman AIs. Humanity should not put its future in the hands of machines that do not do exactly what we want, since we will not be able to take power back. In general, a machine should never be in a position to make any non-trivial ethical or moral judgments concerning people unless we are confident, preferably with mathematical certainty, that these judgments are what we truly consider ethical.

Towards Moral Machines: A Discussion with Michael Anderson and Susan Leigh Anderson

Conatus

At the turn of the 21st century, Susan Leigh Anderson and Michael Anderson conceived and introduced the Machine Ethics research program, which aimed to identify the requirements under which autonomous artificial intelligence (AI) systems could demonstrate ethical behavior guided by moral values, and at the same time to show that these values, and ethics in general, are representable and computable. Today, interaction between humans and AI entities is already part of our everyday lives; in the near future it is expected to play a key role in scientific research, medical practice, public administration, education, and other fields of civic life. In view of this, the debate over the ethical behavior of machines is more crucial than ever, and the search for answers, directions, and regulations is imperative at the academic, institutional, and technical levels. Our discussion with the two originators of Machine Ethics highlights the epistemological, meta...

Is morality the last frontier for machines?

New Ideas in Psychology, 2019

This paper examines some ethical and cognitive aspects of machines making moral decisions in difficult situations. We compare situations in which humans have to make tough moral choices with those in which machines make such decisions. We argue that when machines make tough moral choices, it is important to produce justifications for those decisions that are psychologically compelling and acceptable to people.

Bridging Two Realms of Machine Ethics

We address problems in machine ethics using computational techniques. Our research has focused on Computational Logic, particularly Logic Programming, and its suitability for modeling morality in the realm of the individual, namely moral permissibility, its justification, and the dual-process nature of moral judgments. In the collective realm, we have used Evolutionary Game Theory to study the emergence of norms and morality computationally in populations of individuals. These individuals are not, to start with, equipped with much cognitive capability; they simply act from a predetermined set of actions. Our research shows that introducing cognitive capabilities such as intention recognition, commitment, and apology, separately and jointly, reinforces the emergence of cooperation in populations, compared to their absence. Bridging such capabilities between the two realms helps us understand the emergent ethical behavior of agents in groups, and to implement these capabilities not just in simulations but in future robots and their swarms. Evolutionary Anthropology provides further lessons.
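To make the collective-realm approach concrete, here is a minimal, illustrative sketch of the kind of evolutionary game-theoretic experiment described above. It is not the authors' actual model: it pits unconditional cooperators (ALLC), unconditional defectors (ALLD), and tit-for-tat reciprocators (TFT) against an "apologetic" reciprocator in a noisy iterated Prisoner's Dilemma, with imitation dynamics governed by the Fermi rule. All names and parameter values (NOISE, APOLOGY_COST, the payoff matrix, population and sampling sizes) are assumptions chosen for illustration.

```python
"""Toy sketch (not the authors' model): does adding an 'apologetic'
strategy help cooperation emerge in a noisy iterated Prisoner's Dilemma?
All parameter values are illustrative assumptions."""

import math
import random

R, S, T, P = 3.0, 0.0, 5.0, 1.0   # standard PD payoffs
NOISE = 0.05                       # chance an intended move is flipped
APOLOGY_COST = 0.5                 # cost of signalling an unintended defection
ROUNDS = 50                        # rounds per pairwise interaction


def intended(strat, partner_last):
    if strat == 'ALLC':
        return 'C'
    if strat == 'ALLD':
        return 'D'
    return partner_last            # TFT and APOLOGY reciprocate


def flip(move):
    """Execution noise: the intended move is occasionally inverted."""
    if random.random() < NOISE:
        return 'D' if move == 'C' else 'C'
    return move


def payoff(mine, theirs):
    return {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}[(mine, theirs)]


def play(strat_a, strat_b):
    """Average per-round payoffs for two strategies over ROUNDS rounds."""
    pay_a = pay_b = 0.0
    last_a = last_b = 'C'          # both start by cooperating
    for _ in range(ROUNDS):
        int_a, int_b = intended(strat_a, last_b), intended(strat_b, last_a)
        act_a, act_b = flip(int_a), flip(int_b)
        pay_a += payoff(act_a, act_b)
        pay_b += payoff(act_b, act_a)
        # An apologetic agent pays a cost so that its accidental defection
        # is forgiven: the partner remembers it as cooperation.
        if strat_a == 'APOLOGY' and int_a == 'C' and act_a == 'D':
            pay_a -= APOLOGY_COST
            act_a = 'C'
        if strat_b == 'APOLOGY' and int_b == 'C' and act_b == 'D':
            pay_b -= APOLOGY_COST
            act_b = 'C'
        last_a, last_b = act_a, act_b
    return pay_a / ROUNDS, pay_b / ROUNDS


def evolve(strategies, pop_size=100, generations=2000, beta=1.0):
    """Pairwise-comparison (Fermi rule) imitation dynamics."""
    pop = [random.choice(strategies) for _ in range(pop_size)]
    for _ in range(generations):
        a, b = random.sample(range(pop_size), 2)
        fit_a = sum(play(pop[a], pop[x])[0] for x in random.sample(range(pop_size), 10))
        fit_b = sum(play(pop[b], pop[x])[0] for x in random.sample(range(pop_size), 10))
        # a imitates b with probability increasing in their fitness gap
        if random.random() < 1.0 / (1.0 + math.exp(-beta * (fit_b - fit_a))):
            pop[a] = pop[b]
    return {s: pop.count(s) / pop_size for s in strategies}


if __name__ == '__main__':
    print('without apology:', evolve(['ALLC', 'ALLD', 'TFT']))
    print('with apology:   ', evolve(['ALLC', 'ALLD', 'TFT', 'APOLOGY']))
```

Running the script compares final strategy frequencies with and without the apologetic strategy available, which is the shape of the comparison the abstract describes; the authors' reported results come from far richer models with intention recognition and commitment as well.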

Moral Status and Intelligent Robots

The Southern Journal of Philosophy, 2022

The great technological achievements of the recent past in artificial intelligence (AI), robotics, and computer science make it very likely, according to many experts in the field, that we will see the advent of intelligent and autonomous robots that either match or supersede human capabilities in the midterm (within the next 50 years) or long term (within the next 100-300 years). Accordingly, this article has two main goals. First, we discuss some of the problems related to ascribing moral status to intelligent robots, and we examine three philosophical approaches (the Kantian approach, the relational approach, and the indirect duties approach) that are currently used in machine ethics to determine the moral status of intelligent robots. Second, we seek to raise broader awareness among moral philosophers of the important debates in machine ethics that will eventually affect how we conceive of key concepts and approaches in ethics and moral philosophy. The effects of intelligent and autonomous robots on our traditional ethical and moral theories and concepts will be substantial and will force us to revise and reconsider many established understandings. Therefore, it is essential to turn our attention to debates over machine ethics now so that we can be better prepared to respond to the opportunities and challenges of the future.

Consciousness and Ethics: Artificially Conscious Moral Agents

International Journal of Machine Consciousness, 2011

What roles or functions does consciousness fulfill in the making of moral decisions? Will artificial agents capable of making appropriate decisions in morally charged situations require machine consciousness? Should the capacity to make moral decisions be considered an attribute essential for being designated a fully conscious agent? Research on the prospects for developing machines capable of making moral decisions and research on machine consciousness have developed as independent fields of inquiry, yet there is significant overlap. Both fields are likely to progress through the instantiation of systems with artificial general intelligence (AGI). Certainly, special classes of moral decision making will require attributes of consciousness, such as being able to empathize with the pain and suffering of others. But in this article we propose that consciousness also plays a functional role in making most, if not all, moral decisions. Work by the authors of this article with LIDA, a computational and conceptual model of human cognition, helps illustrate how consciousness can be understood to serve a very broad role in the making of all decisions, including moral decisions.

Autonomous Reboot: The Challenges of Artificial Moral Agency and the Ends of Machine Ethics (Part 1)

Ryan Tonkens (2009) has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian-inspired recipe ("rational" and "free") while also satisfying the perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly, and not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach. Beavers pushes for the reinvention of traditional ethics in order to avoid "ethical nihilism" due to the reduction of morality to mechanical causation. Wallach pushes for redoubled efforts toward a comprehensive account of ethics to guide machine ethicists on the issue of artificial moral agency. Two options thus present themselves: reinterpret traditional ethics in a way that affords a comprehensive account of moral agency inclusive of both artificial and natural agents, or give up on the possibility and "muddle through" regardless. This series of papers pursues the first option: it meets Tonkens' challenge and pursues Wallach's ends through Beavers' proposed means, by "landscaping" traditional moral theory into a comprehensive account of moral agency. This first paper establishes the challenge and sets out the tradition in terms of which an adequate solution should be assessed. The next paper in the series responds to the challenge in Kantian terms and shows that a Kantian AMA is not only a possibility for machine ethics research but a necessary one.