Toward machines that behave ethically better than humans do (extended abstract)

Abstract With increasing dependence on autonomously operating agents and robots, the need for ethical machine behavior rises. This paper presents a moral reasoner that combines connectionism, utilitarianism, and ethical theory about moral duties. Its moral decision-making matches the analyses of expert ethicists in the health domain. This may be useful in many applications, especially where machines interact with humans in a medical context.
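The abstract does not give implementation details, but the described combination of weighted moral duties with utilitarian-style aggregation can be sketched roughly as below. The duty names, weights, and example scenario are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a duty-weighted moral reasoner in the spirit of the
# abstract: each candidate action gets a utilitarian-style score that
# aggregates weighted moral duties. Duties and weights are hypothetical.

DUTY_WEIGHTS = {"autonomy": 1.0, "beneficence": 1.5, "non_maleficence": 2.0}

def moral_score(action_duties):
    """Aggregate how well an action satisfies each duty (values in [-1, 1])."""
    return sum(DUTY_WEIGHTS[d] * v for d, v in action_duties.items())

def choose_action(actions):
    """Pick the action with the highest aggregate moral score."""
    return max(actions, key=lambda name: moral_score(actions[name]))

# Hypothetical health-care dilemma: accept a patient's refusal of
# treatment, or try once more to persuade them.
actions = {
    "accept_refusal": {"autonomy": 1.0, "beneficence": -0.5, "non_maleficence": -0.5},
    "persuade_again": {"autonomy": -0.5, "beneficence": 1.0, "non_maleficence": 0.5},
}
print(choose_action(actions))  # -> "persuade_again" under these weights
```

With non-maleficence weighted highest, the reasoner favors the persuasion attempt; changing the weights changes the verdict, which is exactly the kind of behavior such a system would need to calibrate against expert ethicists.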

Is morality the last frontier for machines?

New Ideas in Psychology, 2019

This paper examines some ethical and cognitive aspects of machines making moral decisions in difficult situations. We compare situations in which humans must make tough moral choices with those in which machines make such decisions. We argue that in situations where machines make tough moral choices, it is important to produce justifications for those decisions that are psychologically compelling and acceptable to people.

Integrating robot ethics and machine morality: the study and design of moral competence in robots (2016)


Critiquing the Reasons for Making Artificial Moral Agents

Science and Engineering Ethics

Many industry leaders and academics in the field of machine ethics would have us believe that the inevitability of robots coming to play a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents (AMAs). Reasons often given for developing AMAs are the prevention of harm, the necessity of public trust, the prevention of immoral use, the claim that such machines are better moral reasoners than humans, and the hope that building them would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons offered by machine ethicists to justify the development of AMAs. This closer examination is especially needed because of the amount of funding currently being allocated to the development of AMAs (from funders like Elon Musk), coupled with the amount of attention researchers and industry leaders receive in the media for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society and would require answers to a host of pending questions about what counts as an AMA and whether AMAs are morally responsible for their behavior. This paper shifts the burden of proof back to the machine ethicists by demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.

Robots and moral obligations

In: What Social Robots Can and Should Do: Proceedings of Robophilosophy 2016/TRANSOR 2016, p. 290, 2016

Using Roger Crisp's [1] arguments for well-being as the ultimate source of moral reasoning, this paper argues that there are no ultimate, non-derivative reasons to program robots with moral concepts such as moral obligation, morally wrong, or morally right. Although these moral concepts should not be used to program robots, they are not to be abandoned by humans, since there are still reasons to keep using them: as an assessment of the agent, to take a stand, or to motivate and reinforce behaviour. Because robots are completely rational agents, they do not need these additional motivations; a concept of what promotes well-being suffices. How a robot knows which action promotes well-being to the greatest degree is still up for debate, but a combination of top-down and bottom-up approaches seems the best way.
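The abstract leaves the top-down/bottom-up combination open. One common reading, sketched below as an assumption rather than the paper's own proposal, is that top-down rules act as hard filters on permissible actions, while a bottom-up (e.g., learned) estimator ranks the remainder by expected well-being. All names and fields are hypothetical.

```python
# Hedged sketch of a hybrid well-being selector: top-down constraints
# filter candidates, a bottom-up estimator ranks what survives.

def violates_hard_rule(action):
    """Top-down: reject actions that break a fixed, hand-coded constraint."""
    return action.get("harms_person", False)

def expected_wellbeing(action):
    """Bottom-up: stand-in for a learned model scoring well-being impact."""
    return action.get("wellbeing_estimate", 0.0)

def select_action(candidates):
    """Return the permissible action with the highest well-being estimate."""
    permissible = [a for a in candidates if not violates_hard_rule(a)]
    return max(permissible, key=expected_wellbeing, default=None)

options = [
    {"name": "lie_to_patient", "harms_person": True,  "wellbeing_estimate": 0.9},
    {"name": "tell_truth",     "harms_person": False, "wellbeing_estimate": 0.6},
]
print(select_action(options)["name"])  # -> "tell_truth"
```

Note that the robot here never consults a concept of "morally wrong"; the rule filter and the well-being score do all the work, which is consistent with the paper's thesis.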

Autonomous Machines, Moral Judgment, and Acting for the Right Reasons

Ethical Theory and Moral Practice

Modern weapons of war have undergone precipitous technological change over the past generation and the future portends even greater advances. Of particular interest are so-called ‘autonomous weapon systems’ (henceforth, AWS), that will someday purportedly have the ability to make life and death targeting decisions ‘on their own.’ Despite the strong and widespread sentiments against such weapons, however, proffered philosophical arguments against AWS are often found lacking in substance. We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e. it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral imagination, or the ability to have moral experiences with a particular phenomenological character. Robots cannot in principle possess these abilities, so robots cannot in principle replicate human moral judgment. If robots cannot in principle replicate human moral judgment, then it is morally problematic to deploy AWS with that aim in mind. Second, we then argue that even if it is possible for a sufficiently sophisticated robot to make ‘moral decisions’ that are extensionally indistinguishable from (or better than) human moral decisions, these ‘decisions’ could not be made for the right reasons. This means that the ‘moral decisions’ made by AWS are bound to be morally deficient in at least one respect even if they are extensionally indistinguishable from human ones.

A conceptual and computational model of moral decision making in human and artificial agents

Topics in Cognitive Science, 2010

Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry, variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making.
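LIDA itself is a rich cognitive architecture; the toy loop below is not LIDA, only an illustration of the one idea the abstract highlights, namely that a moral decision blends a fast affective appraisal with a slower rational, deliberative one. The weights, field names, and scoring functions are invented for the sketch.

```python
# Toy illustration (not the LIDA architecture) of blending affective and
# rational valuations into a single moral choice. All values hypothetical.

AFFECT_WEIGHT, REASON_WEIGHT = 0.4, 0.6

def affective_valence(option):
    """Fast, feeling-like appraisal, e.g., conditioned by prior outcomes."""
    return option["felt_value"]

def deliberative_value(option):
    """Slower appraisal, e.g., explicit simulation of consequences."""
    return option["reasoned_value"]

def decide(options):
    """Choose the option with the best weighted blend of both appraisals."""
    return max(options,
               key=lambda o: AFFECT_WEIGHT * affective_valence(o)
                           + REASON_WEIGHT * deliberative_value(o))

options = [
    {"name": "comfort_patient", "felt_value": 0.9, "reasoned_value": 0.5},
    {"name": "report_honestly", "felt_value": 0.3, "reasoned_value": 0.9},
]
print(decide(options)["name"])  # -> "report_honestly" under these weights
```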

The problem of machine ethics in artificial intelligence

AI & SOCIETY, 2017

The intelligent robot has come to occupy a significant position in society over the past decades and has given rise to new issues. As we know, the primary aim of artificial intelligence or robotic research is not only to develop advanced programs to solve our problems but also to reproduce mental qualities in machines. The critical claim of artificial intelligence (AI) advocates is that there is no distinction between mind and machines, and thus they argue that there are possibilities for machine ethics, just as for human ethics. Unlike computer ethics, which has traditionally focused on ethical issues surrounding human use of machines, AI or machine ethics is concerned with the behaviour of machines towards human users and perhaps other machines as well, and with the ethicality of these interactions. The ultimate goal of machine ethics, according to AI scientists, is to create a machine that itself follows an ideal ethical principle or set of principles; that is to say, it is guided by this principle or these principles in the decisions it makes about possible courses of action it could take. The task of machine ethics is thus to ensure the ethical behaviour of artificial agents. Although there are many philosophical issues related to artificial intelligence, our attempt in this paper is to discuss, first, whether ethics is the sort of thing that can be computed. Second, if we ascribe mind to machines, this gives rise to ethical issues regarding machines; and if we do not draw a distinction between mind and machines, we are redefining not only the specifically human mind but also society as a whole. Having a mind is, among other things, having the capacity to make voluntary decisions and actions. The notion of mind is central to our ethical thinking because the human mind is self-conscious, a property that machines, as yet, lack.