Attempts to Attribute Moral Agency to Intelligent Machines are Misguided

The Problem of Moral Agency in Artificial Intelligence

IEEE Conference on Norbert Wiener in the 21st Century (21CW), 2021

Humans invented intelligent machinery to enhance their rational decision-making, which is why it has been named 'augmented intelligence'. The use of artificial intelligence (AI) technology is growing rapidly with every passing year, and it is becoming part of our daily lives. We are using this technology not only as a tool to enhance our own rationality but are also elevating it to the role of autonomous ethical agent for our future society. Norbert Wiener envisaged 'Cybernetics' with a view to a brain-machine interface that would augment human beings' biological rationality. Being an autonomous ethical agent presupposes an 'agency' in the moral decision-making procedure. According to contemporary theories of agency, AI robots might be entitled to some minimal rational agency. However, that minimal agency might not be adequate for the performance of a fully autonomous ethical agent in the future. If we plan to deploy them as ethical agents for future society, it will be difficult for us to judge where they actually stand as moral agents. It is well known that any kind of moral agency presupposes consciousness and mental representations, which cannot yet be produced synthetically. We can only anticipate that AI scientists will achieve this milestone before long, which will further help them overcome 'the problem of ethical agency in AI'. Philosophers are currently probing pre-existing ethical theories to build a guidance framework for AI robots and to construct a tangible account of artificial moral agency, although no unanimous solution is available yet. The effort may end in another conflict between biological moral agency and autonomous ethical agency, leaving us in a baffled state. Creating rational and ethical AI machines will be a fundamental research problem for the AI field. This paper aims to investigate 'the problem of moral agency in AI' from a philosophical standpoint and to survey the relevant philosophical discussions in search of a resolution.

AI and moral thinking: how can we live well with machines to enhance our moral agency

AI Ethics, 2020

Humans should never relinquish moral agency to machines, and machines should be 'aligned' with human values; but we also need to consider how broad assumptions about our moral capacities and the capabilities of AI shape how we think about AI and ethics. Certain approaches, such as the idea that we might programme our ethics into machines, may rest upon a tacit assumption of our own moral progress. Here I consider how broad assumptions about morality suggest certain approaches to addressing the ethics of AI. Work in the ethics of AI would benefit from closer attention not just to what our moral judgements should be, but also to how we deliberate and act morally: the process of moral decision-making. We must guard against any erosion of our moral agency and responsibilities. Attention to the differences between humans and machines, alongside attention to the ways in which humans fail ethically, could be useful in spotting specific, if limited, ways in which AI can assist us in advancing our moral agency.

Can artificial intelligences be moral agents?

New Ideas in Psychology, 2019

The paper addresses the question of whether artificial intelligences can be moral agents. We begin by observing that philosophical accounts of moral agency, in particular Kantianism and utilitarianism, are very abstract theoretical constructions: no human being can ever be a Kantian or a utilitarian moral agent. Ironically, it is easier for a machine to approximate this idealised type of agency than it is for Homo sapiens. We then proceed to outline the structure of human moral practices. Against this background, we identify two conditions of moral agency: internal and external. We argue further that existing AI architectures are unable to meet the two conditions. In consequence, machines, at least at the current stage of their development, cannot be considered moral agents.

Responses to a Critique of Artificial Moral Agents

Springer, 2019

The field of machine ethics is concerned with the question of how to embed ethical behaviors, or a means to determine ethical behaviors, into artificial intelligence (AI) systems. The goal is to produce artificial moral agents (AMAs) that are either implicitly ethical (designed to avoid unethical consequences) or explicitly ethical (designed to behave ethically). Van Wynsberghe and Robbins' (2018) paper 'Critiquing the Reasons for Making Artificial Moral Agents' critically addresses the reasons offered by machine ethicists for pursuing AMA research; this paper, co-authored by machine ethicists and commentators, aims to contribute to the machine ethics conversation by responding to that critique.

Whose morality? Which rationality? Challenging artificial intelligence as a remedy for the lack of moral enhancement

Humanities and Social Sciences Communications

Moral implications of algorithm-based decision-making require special attention within the field of machine ethics. Specifically, the research focuses on clarifying why, even if one assumes the existence of well-working ethical intelligent agents in epistemic terms, it does not necessarily follow that they meet the requirements of autonomous moral agents such as human beings. To exemplify some of the difficulties in arguing for implicit and explicit ethical agents in Moor's sense, three first-order normative theories in the field of machine ethics are put to the test: Powers' prospect for a Kantian machine, Anderson and Anderson's reinterpretation of act utilitarianism, and Howard and Muntean's prospect for a moral machine based on a virtue-ethical approach. By comparing and contrasting the three first-order normative theories, and by clarifying the gist of the differences between the processes of calculation and moral estimation, the possibility f...

A Case for Machine Ethics in Modeling Human-Level Intelligent Agents

Kritike, 2018

This paper focuses on the research field of machine ethics and how it relates to a technological singularity: a hypothesized future event in which artificial machines attain greater-than-human-level intelligence. One problem related to the singularity centers on whether human values and norms would survive such an event. To help ensure that they do, a number of artificial intelligence researchers have focused on the development of artificial moral agents: machines capable of moral reasoning, judgment, and decision-making. To date, different frameworks for arriving at such agents have been put forward. However, there is no firm consensus as to which framework is likely to yield a positive result. Given the body of work they have contributed to the study of moral agency, philosophers are well placed to add to the growing literature on artificial moral agency, and in doing so to consider how that concept bears on other important philosophical concepts.

Responsibility assignment won’t solve the moral issues of artificial intelligence

AI and Ethics

Who is responsible for the events and consequences caused by the use of artificially intelligent tools, and is there a gap between what human agents can be responsible for and what is being done using artificial intelligence? Both questions presuppose that the term ‘responsibility’ is a good tool for analysing the moral issues surrounding artificial intelligence. This article calls this presupposition into doubt and shows how reference to responsibility obscures the complexity of moral situations and moral agency, which can be analysed with a more differentiated toolset of moral terminology. It suggests that the impression of responsibility gaps arises only if we gloss over the complexity of the moral situation in which artificially intelligent tools are employed and if, counterfactually, we ascribe to them some kind of pseudo-agential status.

Artificial Moral Agency: Philosophical Assumptions, Methodological Challenges, and Normative Solutions

In review, 2019

The field of "machine ethics" has raised the issue of the moral agency and responsibility of artificial entities, like computers and robots under the heading of "artificial moral agents" (AMA). In this article, we work through philosophical assumptions, conceptual and logical variations, and identify methodological as well as conceptual problems based on an analysis of all sides in this debate. A main conclusion is that many of these problems can be better handled by moving the discussion into a more outright normative ethical territory. Rather than locking the issue to be about AMA, a more fruitfull way forward is to address to what extent both machines and human beings should be in different ways included in different (sometimes shared) practices of ascribing moral agency and responsibility.

Morality in the AI World

Law and Business

AIs’ presence in and influence on human life are growing. AIs are increasingly seen as autonomously acting agents, which creates the challenge of building ethics into their design. This paper defends the thesis that we need to equip AI with an artificial conscience to make it capable of wise judgements. The argument is built in three steps. First, the concept of decision is presented; second, the Asilomar Principles for AI development are analysed. It is then shown that to meet those principles, AI needs the capability of passing moral judgements on right and wrong, of following those judgements, and of passing a meta-judgement on the correctness of a given moral judgement, which is the role of conscience. In classical philosophy, the ability to discover right and wrong and to stick to one's judgement of what the right action is in given circumstances is called practical wisdom. The conclusion is that we should equip AI with artificial wisdom. Some problems stemming from ascribing moral age...