How AI Can Aid Bioethics

Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making

2020

Given AI's growing role in modeling and improving decision-making, how and when to present users with feedback is an urgent topic to address. We empirically examined the effect of feedback from false AI on moral decision-making about donor kidney allocation. We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback about participants' own decision-making perceived to be given by AI, even if the feedback is entirely random. We also discovered different effects between assessments presented as being from human experts and assessments presented as being from AI.

Whose morality? Which rationality? Challenging artificial intelligence as a remedy for the lack of moral enhancement

Humanities and Social Sciences Communications

Moral implications of algorithm-based decision-making require special attention within the field of machine ethics. Specifically, research focuses on clarifying why, even if one assumes the existence of well-working ethical intelligent agents in epistemic terms, it does not necessarily follow that they meet the requirements of autonomous moral agents such as human beings. To exemplify some of the difficulties in arguing for implicit and explicit ethical agents in Moor’s sense, three first-order normative theories in the field of machine ethics are put to the test: Powers’ prospect for a Kantian machine, Anderson and Anderson’s reinterpretation of act utilitarianism, and Howard and Muntean’s prospect for a moral machine based on a virtue-ethical approach. By comparing and contrasting the three first-order normative theories, and by clarifying the gist of the differences between the processes of calculation and moral estimation, the possibility f...

How to use AI ethically for ethical decision-making

The American Journal of Bioethics, 2022

What counts as a good decision depends on the domain. In diagnostic imaging, for instance, a good decision involves diagnosing cancer if and only if the patient has cancer. In clinical ethics, good decision-making is defined in terms of the extent to which the following two goals are met:

1. Accuracy: The decision is the right one, where the “right” decision is the one that best aligns with the relevant justifying values and principles, and their respective weights, as they apply to the case at hand.
2. Transparency: The patients are provided with an explanation of the decision in terms of the relevant values and principles and how they are weighed. In other words, the patients are offered reasons that explain and justify the decision.

For the use of artificial intelligence in clinical ethics to be ethically justified, it should improve the transparency and accuracy of ethical decision-making beyond that which physicians and ethics committees are currently capable of providing.
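A minimal sketch of how these two goals might be operationalised, assuming toy value names, weights, and scores that are not from the paper: accuracy as choosing the option that best aligns with the weighted justifying values, and transparency as returning the reasons alongside the choice.

```python
# Toy model of the two goals above; all value names, weights, and
# scores are illustrative assumptions, not taken from the paper.

def decide(options, weights):
    """Return the option best aligned with the weighted values
    (accuracy) plus per-value reasons for that choice (transparency)."""
    def alignment(scores):
        return sum(weights[v] * s for v, s in scores.items())

    best = max(options, key=lambda name: alignment(options[name]))
    reasons = [f"{v} (weight {weights[v]}): score {s:+.1f}"
               for v, s in options[best].items()]
    return best, reasons

# Hypothetical case, with each option scored from -1 to 1 against
# the values that are supposed to justify the decision.
weights = {"autonomy": 0.5, "beneficence": 0.3, "non-maleficence": 0.2}
options = {
    "continue treatment": {"autonomy": -1.0, "beneficence": 0.4, "non-maleficence": -0.2},
    "withdraw treatment": {"autonomy": 1.0, "beneficence": -0.1, "non-maleficence": 0.3},
}

decision, reasons = decide(options, weights)
print(decision)            # the "accurate" choice under these weights
print("\n".join(reasons))  # the explanation owed to the patient
```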

Artificial Intelligence and Resource Allocation in Health Care: The Process-Outcome Divide in Perspectives on Moral Decision-Making

2020

Pandemics or health emergencies create situations where the demand for clinical resources greatly exceeds the supply, leading health providers to make morally complex resource allocation decisions. To help with these types of decisions, health care providers are increasingly deploying artificial intelligence (AI)-enabled intelligent decision support systems. This paper presents a synopsis of the current debate on these AI-enabled tools and suggests that the existing commentary is outcome-centric, i.e., it presents competing narratives in which AI is described as a cause of problematic or solution-oriented abstract and material outcomes. Human decision-making processes such as empathy, intuition, and the structural and agentic knowledge that go into making moral decisions in clinical settings are largely ignored in this discussion. It is argued here that this process-outcome divide in our understanding of moral decision-making can prevent us from taking the long view on consequences such as c...

AI and moral thinking: how can we live well with machines to enhance our moral agency?

AI Ethics, 2020

Humans should never relinquish moral agency to machines, and machines should be 'aligned' with human values; but we also need to consider how broad assumptions about our moral capacities, and about the capabilities of AI, shape how we think about AI and ethics. Certain approaches, such as the idea that we might programme our ethics into machines, may rest upon a tacit assumption of our own moral progress. Here I consider how broad assumptions about morality suggest certain approaches to the ethics of AI. Work in the ethics of AI would benefit from closer attention not just to what our moral judgements should be, but also to how we deliberate and act morally: the process of moral decision-making. We must guard against any erosion of our moral agency and responsibilities. Attention to the differences between humans and machines, alongside attention to the ways in which humans fail ethically, could be useful in spotting specific, if limited, ways in which AI might assist us in advancing our moral agency.

Moral Thinking and Artificial Intelligence

submitted to Danish Yearbook of Philosophy, 2022

Will artificial intelligence (AI) decision-making support erode the human ability to think critically about morality? The two levels of moral thinking, the critical and the intuitive, devised by the moral philosopher Richard M. Hare, are (re-)introduced. We consider the possibility of AI moral agency and Hare's notion of an archangel as an ideal moral agent. Understanding and human weakness are key concepts in morality. Changing goals is essential to critical thinking, and neither AI nor fanatics change goals. Are deterministic systems morally appraisable at all? Revamping Hare's argument, error theory and moral realism are reconsidered in the light of machine learning algorithms (MLA). Intuitive thinking as a hallmark of human thinking is briefly considered, with some examples of applied decision-making. We then consider the impartiality/impersonality of evaluation support. In conclusion, AI will not leave humans without moral agency, but it may lure human beings into leaving moral decisions to AI.

Artificial Intelligence as a Socratic Assistant for Moral Enhancement

Neuroethics, 2019

The moral enhancement of human beings is a constant theme in the history of humanity. Today, faced with the threats of a new, globalised world, concern over this matter is more pressing. For this reason, the use of biotechnology to make human beings more moral has been considered; however, this approach is dangerous and highly controversial. The purpose of this article is to argue that the use of another new technology, AI, would be preferable for achieving this goal. While several proposals have been made on how to use AI for moral enhancement, we present an alternative that we argue is superior to those developed so far.

Artificial Intelligence and Morality: A Social Responsibility

Journal of Intelligence Studies in Business

Both the world and technology are changing more quickly than ever. As artificial intelligence is deployed more widely, its design and algorithms are being called into question, raising moral and ethical issues. We use artificial intelligence in a variety of industries to improve skill, service, and performance; hence, it has both proponents and opponents. AI derives actions or knowledge from a given collection of data, so there is always a chance that the data will contain some inaccurate information. Since artificial intelligence is created by scientists and engineers, it will always present issues of accountability, responsibility, and system reliability. At the same time, artificial intelligence holds great potential for economic development, societal advancement, and improved human security and safety.

Computational Models of Ethical Reasoning: Challenges, Initial Steps, and Future Directions

How can machines support or, even more significantly, replace humans in performing ethical reasoning? This question greatly interests machine ethics researchers. Imbuing a computer with the ability to reason about ethical problems and dilemmas is as difficult a task as there is for AI scientists and engineers. First, ethical reasoning is based on abstract principles that you can't easily apply in a formal, deductive fashion. So, the favorite tools of logicians and mathematicians, such as first-order logic, aren't applicable. Second, throughout intellectual history, philosophers have proposed many theoretical frameworks, such as Aristotelian virtue theory [1], the ethics of respect for persons [2], act utilitarianism [3], utilitarianism [4], and prima facie duties [5], and no universal agreement exists on which ethical theory or approach is best. Furthermore, any of these theories or approaches could be the focus of inquiry, but all are difficult to make computational without relying on simplifying assumptions and subjective interpretation. Finally, ethical issues touch human beings profoundly and fundamentally. The premises, beliefs, and principles that humans use to make ethical decisions are quite varied, not fully understood, and often inextricably intertwined with religious beliefs.
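To make the point about simplifying assumptions concrete, here is a minimal act-utilitarian sketch (the parties and utility numbers are invented for illustration, not drawn from the article): the computation is trivial, and all the ethically contested work is hidden in the choice of inputs.

```python
# Naive act utilitarianism: pick the act with the highest total net
# utility over everyone affected. The arithmetic is simple; the
# assumption that welfare fits on one numeric scale is where the
# subjective interpretation hides. All numbers are invented.

def best_act(acts):
    """Return the act whose summed utility across parties is highest."""
    return max(acts, key=lambda a: sum(acts[a].values()))

acts = {
    "tell the truth": {"patient": -2.0, "family": 1.0, "clinician": 0.5},
    "withhold news":  {"patient": 1.5, "family": -0.5, "clinician": -1.0},
}

print(best_act(acts))  # "withhold news" (0.0 beats -0.5) on these made-up numbers
```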

Toward machines that behave ethically better than humans do (extended abstract)

Belgian/Netherlands Artificial Intelligence Conference, 2012

With the increasing dependence on autonomously operating agents and robots, the need for ethical machine behavior rises. This paper presents a moral reasoner that combines connectionism, utilitarianism, and ethical theory about moral duties. Its moral decision-making matches the analysis of expert ethicists in the health domain. This may be useful in many applications, especially where machines interact with humans in a medical context. Additionally, when the reasoner is connected to a cognitive model of emotional intelligence and affective decision making, it becomes possible to explore how moral decision-making impacts affective behavior.
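A minimal sketch, under assumed duty weights and case scores, of what such a duty-based reasoner could look like; the duties echo the prima facie duties common in this literature, and the "expert verdicts" below are toy labels for illustration, not the paper's data or trained model.

```python
# Sketch of a duty-weighted moral reasoner in the spirit described
# above: each action receives a satisfaction/violation level per
# prima facie duty, and the reasoner picks the action with the
# highest weighted sum, checked against expert verdicts.
# Duty names echo the machine-ethics literature; the weights and
# case scores below are illustrative assumptions.

WEIGHTS = {"autonomy": 1.6, "beneficence": 1.0, "non_maleficence": 1.3}

def choose(case):
    def score(action):
        return sum(WEIGHTS[d] * v for d, v in case[action].items())
    return max(case, key=score)

# Toy dilemmas (duty levels from -2 to 2) paired with the expert
# verdict the reasoner should reproduce.
cases = [
    # Refusal of life-saving treatment: harm prevention prevails.
    ({"accept refusal": {"autonomy": 1, "beneficence": -2, "non_maleficence": -2},
      "try again":      {"autonomy": -1, "beneficence": 2, "non_maleficence": 2}},
     "try again"),
    # Low-stakes refusal: autonomy prevails.
    ({"accept refusal": {"autonomy": 2, "beneficence": -1, "non_maleficence": 0},
      "try again":      {"autonomy": -2, "beneficence": 1, "non_maleficence": 0}},
     "accept refusal"),
]

for case, expert in cases:
    assert choose(case) == expert  # reasoner agrees with the experts
print("reasoner matches the expert verdicts on both toy cases")
```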