Robots and Respect: A Reply to Sparrow
Related papers
Autonomous Machines, Moral Judgment, and Acting for the Right Reasons
Ethical Theory and Moral Practice
Modern weapons of war have undergone precipitous technological change over the past generation and the future portends even greater advances. Of particular interest are so-called ‘autonomous weapon systems’ (henceforth, AWS), that will someday purportedly have the ability to make life and death targeting decisions ‘on their own.’ Despite the strong and widespread sentiments against such weapons, however, proffered philosophical arguments against AWS are often found lacking in substance. We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e. it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral imagination, or the ability to have moral experiences with a particular phenomenological character. Robots cannot in principle possess these abilities, so robots cannot in principle replicate human moral judgment. If robots cannot in principle replicate human moral judgment, then it is morally problematic to deploy AWS with that aim in mind. Second, we then argue that even if it is possible for a sufficiently sophisticated robot to make ‘moral decisions’ that are extensionally indistinguishable from (or better than) human moral decisions, these ‘decisions’ could not be made for the right reasons. This means that the ‘moral decisions’ made by AWS are bound to be morally deficient in at least one respect even if they are extensionally indistinguishable from human ones.
For many, lethal autonomous weapons, or “killer robots,” are the stuff of nightmares. They have already been the subject of vigorous debate and copious scholarship (Adams 2001, Asaro 2012, Sparrow 2007, Wallach 2008). Some activists are already calling for a moratorium on the development of autonomous weapons, and others are calling for an outright ban. This article considers the strategic value of autonomous weapons and their current legal status. But the focus of this article is on the presumptive moral case in favor of killer robots and the torrent of criticisms that have been unleashed against them in recent years.
Binary Bullets: The Ethics of Cyberwarfare, 2016
Is there an ideal war, a best possible war? Is there a war than which no greater war can be conceived? What would such a war be like, and are there any means of waging war that satisfy this description? I will suggest that cyberwarfare offers the possibility of just such an ideal war. As long as the concept of an ideal war is coherent, as I argue in this essay, we should answer the opening question like this: An ideal war would be a war wherein civilian casualties were minimal or nonexistent and where acts of violence perfectly discriminated between combatants and noncombatants (§1). Cyberwarfare has made this kind of ideal warfare possible for the first time by profoundly improving a state’s ability to direct its force discriminately and to ensure that force is proportional (§2). Since cyberwarfare does not raise any moral concerns serious enough to countervail its clear benefits, we are obligated to prefer cyber means where practical (§3). These benefits of cyberwarfare undermine the moral stringency of the proportionality and probability of success criteria of jus ad bellum (§4).
Averting the moral free-for-all of autonomous weapons
Waging warfare and developing technologies are two distinctively human activities that have long been closely intertwined. Emerging technologies have continually shaped military strategy and policy, from the invention of Greek fire, to gunpowder, artillery, nuclear weapons, and GPS- and laser-guided munitions. The diligent student of military history is also a keen observer of technological change. Once again, a new technology on the horizon promises to drastically alter the texture and norms of combat: lethal autonomous weapons. Remarkable advances in artificial intelligence, machine learning, and computer vision lend an urgent credibility to the suggestion that reliable autonomous weapons are possible.
While robots and automata have traditionally belonged to the realm of fiction, they are rapidly becoming an issue for the disarmament community. On the one hand, some experts believe that robots programmed to adhere to international humanitarian law (IHL) will be able to act more ethically than human beings on the battlefield. On the other hand, several commentators have disputed this claim, contending that the use of robots – or autonomous weapon systems (AWSs) – will lower the threshold for using violent force, and that such machines will be unable to discriminate between soldiers and civilians. Accordingly, this (essentially utilitarian) discussion of the consequences that the deployment of AWSs is likely to have remains locked in a word-against-word argument. Rather than focusing on the direct humanitarian effects of AWSs, people calling for a pre-emptive ban should point to the issue of moral agency and the relationship between AWSs and human beings. Machine Autonomy and the Uncanny is an attempt at separating the question of ‘harm’ from questions pertaining to ‘the harmer’. The use of AWSs poses grave problems for the doctrine of the moral equality of soldiers, for the dignity of all parties involved, and for both legal and moral responsibility.
Self-Defense without a "Self"
The preceding chapter argued that if we created AWS capable of complying with contemporary targeting doctrine, we would be creating strategic actors and not merely force multipliers. This chapter takes a view from a different direction. In particular, this chapter asks: how would the use of AWS challenge a right to self-defense under jus ad bellum? Given that AWS have no "self" to defend, since they are not moral agents and are incapable of being killed or harmed, does their use change or limit our justification to use lethal force? This chapter argues that the ability to use AWS in the stead of human warfighters does challenge justifications to use lethal force on two fronts. First, it prohibits militaries from using lethal force in response to attacks against their robotic warfighters. If there is no lethal threat, one cannot justify using lethal force in response. This radical asymmetry, in turn, affects the way in which collectivities may justify using force on grounds of a right of national self-defense. In other words, the potential to use AWS to fight wars affects our jus ad bellum proportionality calculations, even in the face of attack against them, with a rather perverse result: the possession and the ability to use AWS prohibit their use. If there is no lethal threat, there can be no use of lethal force in response. The argument proceeds in four sections.
The ethical and legal case against autonomy in weapons systems
In order to be counted as autonomous, a weapons system must perform the critical functions of target selection and engagement without any intervention by human operators. Human rights organizations, as well as a growing number of States, have been arguing for banning weapons systems satisfying this condition – usually referred to as autonomous weapons systems (AWS) on this account – and for maintaining meaningful human control (MHC) over any weapons system. This twofold goal has been pursued by leveraging ethical and legal arguments, which spell out a variety of deontological or consequentialist reasons. Roughly speaking, deontological arguments support the conclusion that by deploying AWS one is likely or even bound to violate the moral and legal obligations of special sorts of agents (military commanders and operators) or the moral and legal rights of special sorts of patients (the potential victims of AWS). Consequentialist arguments substantiate the conclusion that prohibiting AWS is expected to protect peace and security, thereby enhancing collective human welfare, more effectively than the incompatible choice of permitting their use. Contrary to a widespread view, this paper argues that deontological and consequentialist reasons can be coherently combined so as to provide mutually reinforcing ethical and legal grounds for banning AWS. To this end, a confluence model is set forth that enables one to resolve potential conflicts between these two approaches by prioritizing deontological arguments over consequentialist ones. Finally, it is maintained that the proposed confluence model significantly bears on the issue of what it is to exercise genuine MHC over existing and future AWS. Indeed, full autonomy is allowed by the confluence model in the case of some anti-materiel defensive AWS; it is to be curbed instead in the case of both lethal AWS and future AWS that may seriously jeopardize peace and stability.