Towards Moral Machines: A Discussion with Michael Anderson and Susan Leigh Anderson

Machine Law, Ethics, and Morality in the Age of Artificial Intelligence

Machine Law, Ethics, and Morality in the Age of Artificial Intelligence, 2021

This book is devoted to expert research and analysis of ethics-related inquiry at the level of machine ethics and morality: key players, benefits, problems, policies, and strategies. By gathering some of the leading voices who recognized the complexities and intricacies of human-machine ethics early in this phenomenon, it provides a resourceful compendium for decision-makers and theorists concerned with identifying and adopting human-machine ethics initiatives, leading to the policy adoption and reform needed for human-machine entities, their technologies, and their societal and legal obligations. The book contains theory and practice sections for each major area of human-machine ethics, both those in existence today and those anticipated for the future. Yet human-machine ethics is not a futuristic matter; it is ever-present, here and now, thanks largely to ongoing technological advances in robotics, artificial intelligence, nanotechnology, and synthetic biology, among other allied fields.

AI Ethics and Machine Ethics

Handbook on the Ethics of Artificial Intelligence (Ed. David Gunkel), 97-112, 2024

This book chapter aims to offer readers insight into the key distinctions between these two fields, specifically in terms of their subject matter, viewpoints, and approaches. Following this introduction, the second section delves into AI ethics, while the third section explores machine ethics. The fourth section offers a synopsis of two important intersecting issues – the moral standing of AI systems and AI ethics in facial recognition technology (as exemplified by China’s Social Credit Point System). The final section provides some concluding remarks.

From Machine Ethics to Computational Ethics

Research into the ethics of artificial intelligence is often categorized into two subareas: robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I argue that the term 'machine ethics' is too broad and glosses over issues that the term 'computational ethics' best describes. I show that the subject of inquiry of computational ethics is of great value and indeed an important frontier in developing ethical artificial intelligence systems (AIS). I also show that computational ethics is a distinct, often neglected field in the ethics of AI. In contrast to much of the literature, I argue that the appellation 'machine ethics' does not sufficiently capture the entire project of embedding ethics into AIS, hence the need for computational ethics. This essay is unique for two reasons: first, it offers a philosophical analysis of the subject of computational ethics that is not found in the literature; second, it offers a fine-grained analysis that shows the thematic distinctions among robot ethics, machine ethics, and computational ethics.

The problem of machine ethics in artificial intelligence

AI & SOCIETY, 2017

The advent of the intelligent robot has occupied a significant position in society over the past decades and has given rise to new social issues. As we know, the primary aim of artificial intelligence or robotics research is not only to develop advanced programs to solve our problems but also to reproduce mental qualities in machines. The critical claim of artificial intelligence (AI) advocates is that there is no distinction between mind and machine, and thus they argue that machine ethics is possible, just as human ethics is. Unlike computer ethics, which has traditionally focused on ethical issues surrounding human use of machines, AI or machine ethics is concerned with the behaviour of machines towards human users, and perhaps other machines, and with the ethicality of these interactions. The ultimate goal of machine ethics, according to AI scientists, is to create a machine that itself follows an ideal ethical principle or set of principles; that is to say, it is guided by this principle or these principles in the decisions it makes about possible courses of action it could take. Machine ethics is thus the task of ensuring the ethical behaviour of an artificial agent. Although there are many philosophical issues related to artificial intelligence, our attempt in this paper is to discuss, first, whether ethics is the sort of thing that can be computed. Second, if we ascribe mind to machines, ethical issues regarding machines arise; and if we do not draw a distinction between minds and machines, we are redefining not only the specifically human mind but also society as a whole. Having a mind is, among other things, having the capacity to make voluntary decisions and actions. The notion of mind is central to our ethical thinking because the human mind is self-conscious, a property that machines, as yet, lack.

The Limits of Machine Ethics

Religions

Machine Ethics has established itself as a new discipline that studies how to endow autonomous devices with ethical behavior. This paper provides a general framework for classifying the different approaches currently being explored in the field of machine ethics and introduces considerations that are missing from the current debate. In particular, law-based codes implemented as external filters for action, which we have named filtered decision making, are proposed as the basis for future developments. The emergence of values as guides for action is discussed, and personal language and subjectivity are indicated as necessary conditions for this development. Lastly, utilitarian approaches are studied, and the importance of objective expression as a requisite for their implementation is stressed. Only values expressed by the programmer in a public language, that is, separate from subjective considerations, can be evolved in a learning machine, thereby establishing the limits of present-day machine ethics.
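The "filtered decision making" idea is architectural enough to sketch: candidate actions are generated and ranked by the agent, but an external, law-based filter screens them before execution. The minimal Python sketch below is only an illustration of that pattern; the rules, action attributes, and helper names are hypothetical placeholders, not taken from the paper.

```python
# Minimal sketch of "filtered decision making": an external, rule-based filter
# screens candidate actions before the agent's own preference ordering applies.
# Rules, actions, and attributes are illustrative assumptions, not the paper's model.

from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Action:
    name: str
    utility: float           # the agent's own estimate of desirability
    harms_human: bool         # features inspected by the external filter
    violates_privacy: bool

# Law-based filter: each rule must allow the action for it to be permitted.
RULES: List[Callable[[Action], bool]] = [
    lambda a: not a.harms_human,       # e.g. "never physically harm a person"
    lambda a: not a.violates_privacy,  # e.g. "never disclose personal data"
]

def is_permitted(action: Action) -> bool:
    """An action passes only if every external rule allows it."""
    return all(rule(action) for rule in RULES)

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Filter first, then optimise only over the permitted actions."""
    permitted = [a for a in candidates if is_permitted(a)]
    return max(permitted, key=lambda a: a.utility, default=None)

if __name__ == "__main__":
    options = [
        Action("shove pedestrian aside", 0.9, harms_human=True, violates_privacy=False),
        Action("wait for the path to clear", 0.4, harms_human=False, violates_privacy=False),
        Action("share user location", 0.7, harms_human=False, violates_privacy=True),
    ]
    chosen = choose_action(options)
    print("Chosen:", chosen.name if chosen else "no permitted action")
```

The design point of the pattern is that the filter sits outside the agent's optimisation loop, so the ethical code constrains behaviour regardless of how the utilities were learned or assigned.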

Ethical content in artificial intelligence systems: A demand explained in three critical points

Frontiers in Psychology, 2023

Artificial intelligence (AI) advancements are changing people's lives in ways never imagined before. We argue that, during the first machine age, ethics was put in perspective by seeing technology as an instrument. However, the second machine age is already a reality, and the changes brought by AI are reshaping how people interact and flourish. That said, ethics must also be analyzed as a requirement within the content of the systems. To expose this argument, we bring three critical points (autonomy, the right to explanation, and value alignment) to guide the debate about why ethics must be part of the systems themselves, not just part of the principles that guide the users. In the end, our discussion leads to a reflection on the redefinition of AI's moral agency. Our distinguishing argument is that ethical questioning can be resolved only after giving AI moral agency, even if not at the same level as humans. For future research, we suggest appreciating new ways of seeing ethics and finding a place for machines, using the inputs of the models we have used for centuries but adapting them to the new reality of the coexistence of artificial intelligence and humans.

Dimensions and Limitations of AI Ethics

EthicAI=Labs, 2022

This article addresses the "ethical turn" in the studies of AI which is framed by the discourses of the so-called "empirical turn" (Verbeek). In this context, the main research goal in the field of AI ethics is implementing ethical principles within machines considered as growing increasingly autonomous in their agency. Outlined are the ethical theories and design approaches to achieving this goal as well as some of the technical and conceptual challenges faced by the AI researchers. However, in discussing the question of implementing ethical restrictions in machines to prevent them from harming humankind and better serve its pursuit of happiness, an ethical contradiction arises (Gunkel, 2020: 547). If we want to speak of ethics, shouldn’t we begin to consider artificial system’s rights and not only their obligations? This understanding dissolves the power relationship between humans and machines discussed above. And finally, this article points to a fundamental limitation of AI ethics research in that the existing discourses rather serve as “a tool for policy making” without any ambition of radically questioning the framework (ontological, metaphysical, transcendental or politico-economic), which provides the conditions of innovation and existence of technologies as such.

Bridging Two Realms of Machine Ethics

We address problems in machine ethics that are dealt with using computational techniques. Our research has focused on Computational Logic, particularly Logic Programming, and its appropriateness for modelling morality, namely moral permissibility, its justification, and the dual-process nature of moral judgment, in the realm of the individual. In the collective realm, we have used Evolutionary Game Theory over populations of individuals to study the emergence of norms and morality computationally. These individuals, to start with, are not equipped with much cognitive capability and simply act from a predetermined set of actions. Our research shows that the introduction of cognitive capabilities such as intention recognition, commitment, and apology, separately and jointly, reinforces the emergence of cooperation in populations, compared with their absence. Bridging such capabilities between the two realms helps us understand the emergent ethical behavior of agents in groups and to implement them not just in simulations but in the world of future robots and their swarms. Evolutionary Anthropology provides teachings.
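The collective-realm part of this work lends itself to a small illustration. Below is a minimal Python sketch of evolutionary game theory dynamics in a population playing a one-shot Prisoner's Dilemma, with a simple commitment-proposing strategy alongside plain cooperators and defectors. The payoff values, commitment cost, strategy behaviours, and the pairwise Fermi imitation rule are all assumptions chosen for illustration; they only gesture at, and do not reproduce, the authors' actual models.

```python
# Sketch of evolutionary game theory dynamics: a population plays the
# Prisoner's Dilemma under imitation dynamics, with a toy commitment-proposing
# strategy (COM) added to unconditional cooperators (C) and defectors (D).
# All parameter values and behavioural rules are illustrative assumptions.

import math
import random

# Prisoner's Dilemma payoffs (row player), illustrative values with T > R > P > S.
T, R, P, S = 5.0, 3.0, 1.0, 0.0
EPS = 0.25  # cost of proposing a commitment (assumed)

def payoff(me: str, other: str) -> float:
    """Payoff of `me` against `other` in one interaction.
    C cooperates unconditionally; D defects and refuses commitments;
    COM proposes a commitment, cooperates, and only plays if the proposal is accepted."""
    if me == "COM":
        return -EPS if other == "D" else R - EPS   # refused: no game; accepted: mutual cooperation
    if me == "C":
        return {"C": R, "D": S, "COM": R}[other]
    return {"C": T, "D": P, "COM": 0.0}[other]     # D gets nothing from a refused proposal

def average_payoff(strategy: str, population: list) -> float:
    """Expected payoff against a randomly drawn member (self-pairing ignored for simplicity)."""
    return sum(payoff(strategy, o) for o in population) / len(population)

def simulate(n: int = 100, generations: int = 2000, beta: float = 1.0,
             mutation: float = 0.01, seed: int = 0) -> float:
    """Pairwise-comparison (Fermi) imitation dynamics with rare mutation.
    Returns the final fraction of cooperative strategies (C or COM)."""
    rng = random.Random(seed)
    pop = [rng.choice(["C", "D", "COM"]) for _ in range(n)]
    for _ in range(generations):
        i, j = rng.randrange(n), rng.randrange(n)
        if rng.random() < mutation:
            pop[i] = rng.choice(["C", "D", "COM"])
            continue
        fi = average_payoff(pop[i], pop)
        fj = average_payoff(pop[j], pop)
        # i imitates j with a probability increasing in j's payoff advantage
        if rng.random() < 1.0 / (1.0 + math.exp(-beta * (fj - fi))):
            pop[i] = pop[j]
    return sum(s in ("C", "COM") for s in pop) / n

if __name__ == "__main__":
    print("Cooperative fraction:", simulate())
```

Varying the assumed commitment cost EPS, or removing COM entirely, shows in this toy setting how adding such a capability can shift a population towards or away from cooperation, which is the kind of comparative question the abstract describes studying at much greater depth.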