Ryan Jenkins | California Polytechnic State University at San Luis Obispo

Papers by Ryan Jenkins

Big Brother Goes to School: Best Practices for Campus Surveillance Technologies During the COVID-19 Pandemic

Techne, 2020

Few sectors are more affected by COVID-19 than higher education. There is growing recognition that reopening the densely populated communities of higher education will require surveillance technologies, but many of these technologies pose threats to the privacy of the very students, faculty, and staff they are meant to protect. The authors have a history of working with our institution’s governing bodies to provide ethical guidance on the use of technologies, especially those with significant implications for privacy. Here, we draw on that experience to provide guidelines for using surveillance technologies to reopen college campuses safely and responsibly, even under the specter of COVID-19. We aim to generalize our recommendations so that they are sensitive to the practical realities and constraints that universities face.

Artificial Intelligence and Predictive Policing: A Roadmap for Research

Against a backdrop of historic unrest and criticism, the institution of policing is at an inflection point. Policing practices, and the police use of technology, are under heightened scrutiny. One of the most prominent and controversial of these practices centrally involves technology and is often called “predictive policing.” Predictive policing is the use of computer algorithms to forecast when and where crimes will take place — and sometimes even to predict the identities of perpetrators or victims. Criticisms of predictive policing combine worries about artificial intelligence and bias, about power structures and democratic accountability, about the responsibilities of private tech companies selling the software, and about the fundamental relationship between state and citizen. In this report, we present the initial findings from a three-year project to investigate the ethical implications of predictive policing and develop ethically sensitive and empirically informed best practices for both those developing these technologies and the police departments using them.

Autonomous Weapons Systems and the Moral Equality of Combatants

Ethics & Information Technology, 2020

To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. We suggest that at least one major driver of the intuitive moral aversion to lethal AWS is that their use disrespects their human targets by violating the martial contract between human combatants. On our understanding of this doctrine, service personnel cede a right not to be directly targeted with lethal violence to other human agents alone. Artificial agents, of which AWS are one example, cannot understand the value of human life. A human combatant cannot transfer his privileges of targeting enemy combatants to a robot. Therefore, the human duty-holder who deploys AWS breaches the martial contract between human combatants and disrespects the targeted combatants. We consider whether this novel deontological objection to AWS forms the foundation of several other popular yet imperfect deontological objections to AWS.

How to evaluate counter-drone products

This paper examines counter-drone solutions and suggests that a satisfactory solution is one that is successful at detecting, identifying, and, if appropriate, mitigating a wide range of hostile drones with minimal human oversight. This paper then evaluates the landscape of counter-drone technologies to examine their benefits and drawbacks. While no counter-drone solution is a silver bullet, this paper ultimately endorses non-kinetic, low-power, “smart jamming” counter-drone solutions as the strongest option.

Robot Warfare

For many, lethal autonomous weapons, or “killer robots,” are the stuff of nightmares. They have already been the subject of vigorous debate and copious scholarship (Adams 2001, Asaro 2012, Sparrow 2007, Wallach 2008). Some activists are already calling for a moratorium on the development of autonomous weapons, and others are calling for an outright ban. This article considers the strategic value of autonomous weapons and their current legal status. But the focus of this article is on the presumptive moral case in favor of killer robots and the torrent of criticisms that have been unleashed against them in recent years.

Rule-consequentialism and moral relativism

Rule-consequentialism is usually taken to recommend a single ideal code for all moral agents. Relativized forms of rule-consequentialism, which specify different moral rules for different social groups, have been considered before, yet they have not received serious attention until recently. Here I argue that, depending on their theoretical motivations, some rule-consequentialists have very good reasons to be relativists. Namely, rule-consequentialists who find compelling the theory’s coherence with our considered moral intuitions, or who are moved by consequentialist considerations, ought to support a scheme of multiple relativized moral codes.

A Dilemma for Moral Deliberation in AI

Many social trends are conspiring to drive the adoption of greater automation in society. The economic benefits of automation have motivated a dramatic transition to automated manufacturing for several decades. As we project these trends just a few years into the future, it is undeniable that we will see a greater offloading of human decision-making to robots. Many of these decisions are morally salient: for example, decisions about how benefits and burdens are distributed and weighed against each other, whether your autonomous car decides to brake or swerve, or whether to engage an enemy combatant on the battlefield. We suggest that the question of AI consciousness poses a dilemma: whether or not artificially intelligent agents turn out to be conscious, if we want robots to abide by either consequentialist or deontological theories, we will face serious, and perhaps insurmountable, difficulties.

When robots should do the wrong thing

In the first section, we argue that deontological evaluations do not apply to the actions of robots. For this reason, robots should act like consequentialists, even if consequentialism is false. In the second section, we argue that, even though robots should act like consequentialists, it is sometimes wrong to create robots that do. At the end of that section and in the next, we show how specific forms of uncertainty can make it permissible, and sometimes obligatory, to create robots that obey moral views that one thinks are false.

The need for moral algorithms in autonomous vehicles

Several manufacturers have pledged to sell autonomous vehicles (AVs) within the next few years. These are cars that offload much of the task of driving from the human driver to a computer. Rudimentary autonomous features already exist in some cars: these include features that keep cars within their proper lane, avoid collisions by applying brakes, and regulate the distance between the car and the car in front of it (so-called “smart cruise control”). Fully autonomous vehicles, which may be technologically feasible in just a few years, would be capable of taking over completely from the human driver. However, as with the introduction of any technology that holds the promise to impact human life, we should be careful to scrutinize the moral dimensions of these products. We should also examine the assumptions their creators can make in the process of design. Unfortunately, some manufacturers still seem to misunderstand the moral dimensions of autonomous decision-making in vehicles or are dubious that autonomous vehicles raise any difficult ethical problems.

Averting the moral free-for-all of autonomous weapons

Waging warfare and developing technologies are two distinctively human activities that have long been closely intertwined. Emerging technologies have continually shaped military strategy and policy, from the invention of Greek fire, to gunpowder, artillery, nuclear weapons, and GPS- and laser-guided munitions. The diligent student of military history is also a keen observer of technological change. Once again, a new technology on the horizon promises to drastically alter the texture and norms of combat: lethal autonomous weapons. Remarkable advances in artificial intelligence, machine learning, and computer vision lend an urgent credibility to the suggestion that reliable autonomous weapons are possible.

Autonomous Vehicles Ethics & Law: Towards an Overlapping Consensus

Cyberwarfare as Ideal War

Binary Bullets: The Ethics of Cyberwarfare, 2016

Is there an ideal war, a best possible war? Is there a war than which no greater can be conceived? What would such a war be like, and are there any means of waging war that satisfy this description? I will suggest that cyberwarfare offers the possibility of just such an ideal war. As long as the concept of an ideal war is coherent—as I argue in this essay—we should answer the opening question like this: An ideal war would be a war wherein civilian casualties were minimal or nonexistent and where acts of violence perfectly discriminated between combatants and noncombatants (§1). Cyberwarfare has made possible this kind of ideal warfare for the first time by profoundly improving a state’s ability to direct its force discriminately and to ensure that force is proportional (§2). Since cyberwarfare does not raise any moral concerns serious enough to countervail its clear benefits, we are obligated to prefer cyber means where practical (§3). These benefits of cyberwarfare undermine the moral stringency of the proportionality and probability of success criteria of jus ad bellum (§4).

Right Intention and the Ends of War

The jus ad bellum criterion of right intention (CRI) is a central guiding principle of just war theory. In its schematic form, it asserts that a country’s resort to war is just only if that country resorts to war for the right reasons. Though the CRI enjoys widespread endorsement from classical and contemporary just war theorists, there is also widespread confusion, and little in the way of consensus, about how to specify the CRI. It remains ambiguous in both its scope and its stringency. We seek to clear up the conceptual confusion associated with the CRI by evaluating the plausibility of several distinct ways of understanding the criterion. In doing so, we will pose a dilemma for supporters of the criterion, which we believe provides an independent reason for excluding the CRI from the law of armed conflict (LOAC). The dilemma arises from consideration of two understandings of the CRI. On one understanding, the Motive Camp, a resort to war is just only if the motives that explain a state’s resort to war are of the right kind. On a second understanding of the CRI, which we will call the Plan Camp, a state’s resort to war is just only if it plans to adhere to the principles of just war while achieving its just cause. In presenting the first horn of our dilemma for the CRI, we argue that if the Plan Camp is correct, then the CRI is superfluous, because it does not do any work that is not already covered by the probability of success criterion of just war theory. We then develop the second horn of our dilemma for the CRI. We argue that on the most plausible version of the Motive Camp, the CRI is not superfluous, but that it is threatened by an infinite regress. If this regress cannot be resolved, we are left without a plausible interpretation of the CRI, which constitutes a significant and novel reason for leaving the CRI out of LOAC.

Robots and Respect: A Reply to Sparrow

Robert Sparrow (2016) has recently argued that several initially plausible arguments in favor of the deployment of autonomous weapons systems (AWS) in warfare fail, and that their deployment faces a serious moral objection: deploying AWS fails to express the respect for the casualties of war that morality requires. Sparrow's argument against AWS relies on the claim that they are distinct from accepted weapons of war in that they either fail to transmit an attitude of respect or they transmit an attitude of disrespect. We argue that this distinction between AWS and widely accepted weapons is illusory, and so cannot ground a moral difference between AWS and existing methods of waging war. We also suggest that, if deploying conventional soldiers in some situation would be permissible, and if we could expect deploying AWS to cause fewer civilian casualties, then it would be consistent with an intuitive understanding of respect to deploy AWS in this situation.

Autonomous Machines, Moral Judgment, and Acting for the Right Reasons

Ethical Theory and Moral Practice

Modern weapons of war have undergone precipitous technological change over the past generation, and the future portends even greater advances. Of particular interest are so-called ‘autonomous weapon systems’ (henceforth, AWS), which will someday purportedly have the ability to make life and death targeting decisions ‘on their own.’ Despite the strong and widespread sentiments against such weapons, however, proffered philosophical arguments against AWS are often found lacking in substance. We propose that the prevalent moral aversion to AWS is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e. it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral imagination, or the ability to have moral experiences with a particular phenomenological character. Robots cannot in principle possess these abilities, so robots cannot in principle replicate human moral judgment. If robots cannot in principle replicate human moral judgment, then it is morally problematic to deploy AWS with that aim in mind. Second, we argue that even if it is possible for a sufficiently sophisticated robot to make ‘moral decisions’ that are extensionally indistinguishable from (or better than) human moral decisions, these ‘decisions’ could not be made for the right reasons. This means that the ‘moral decisions’ made by AWS are bound to be morally deficient in at least one respect even if they are extensionally indistinguishable from human ones.

Is Stuxnet Physical? Does It Matter?

Journal of Military Ethics

Cyberweapons are software, and software, at least intuitively, is nonphysical. Several authors have noted that this potentially renders problematic the application of normative frameworks like UN Charter Article 2(4) to cyberweapons. If Article 2(4) only proscribes the use of physical force, and if cyberweapons are nonphysical, then cyberweapons fall outside the purview of Article 2(4). This article explores the physicality of software, examining Stuxnet in particular. First, I show that with a few relatively uncontroversial metaphysical claims we can secure the conclusion that Stuxnet is physical. In particular, there exist instances of Stuxnet that are both located in space and causally efficacious, and this is very strong evidence for their being physical. Second, I argue that the question of physicality is actually irrelevant for the moral evaluation of an attack like Stuxnet because of its undeniably physical effects. Finally, I argue that some features of Stuxnet should make us optimistic about the prospects for discrimination and proportionality in cyberwarfare.

You’ve Earned It!: A Criticism of Sher’s Account of Desert in Wages

Social Philosophy Today, 2011

Talks by Ryan Jenkins

What's the perfect driverless car? It depends who you ask

As humans, we are obsessed with the drive for perfection. It's an essential part of the human condition to bemoan our frailty and weakness and, with the next breath, aspire to perfect ourselves. In fact, the drive for perfection is clearest, I think, when it comes to the invention and introduction of new technologies. But what does it mean to be the best? What does it mean to be perfect? Few people think about the nature and value of perfection like philosophers do. And so it's appropriate that, as a philosopher, I think about the ethics of technology in particular. In the last decade, few technologies have captivated the public imagination like driverless cars. But it's unrealistic to think that they'll be able to avoid all crashes — an animal will jump into the street, or a boulder will come falling off a mountain, and the car will have to make a choice about how it steers and brakes before the human driver can. If the car is faced with an inevitable crash, what should it do?

Rule-consequentialism and moral relativism

According to Brad Hooker’s rule-consequentialism, actions are right if they are consistent with an “ideal code” of rules which, if internalized by everyone, would maximize expected wellbeing (§1). Hooker recognizes that a moral code including conditional rules that reference group membership—so that, for example, the rich and poor are under different obligations to donate to charity—would have higher expected consequences than one with universal imperative rules. This leads to a kind of de facto moral relativism in society’s patterns of behavior. I argue that embracing actual moral relativism would do even more to increase expected consequences and hence rule-consequentialists have good reason to be moral relativists (§2). Hooker resists this move, but his arguments are unconvincing (§3). Moreover, his resistance is especially strange given his embrace of diachronic moral relativism, the view that our moral obligations can change over time (§4). Hooker’s position therefore appears untenable (§5).

An in bello rule-consequentialist code of morality

I suggest that rule-consequentialism is especially well-suited to the project of unifying and justifying the rules of war. This is because rule-consequentialism shares a structural similarity with a plausible theory of in bello morality, namely, as a set of near-absolute rules chosen with reference to and justified by some consequentialist goal. For warfare, I suggest that goal is minimizing the horror of war. Accordingly, I sketch a rule-consequentialist in bello code of morality. I also discuss the moral dispositions of a soldier who has successfully internalized this moral code, i.e. the conscience she would have. Finally, I discuss the implications of this view for three contentious topics in the military ethics literature: the doctrine of double effect, supreme emergency, and the problem of noncompliance. We will see that this theory offers a plausible justification of the doctrine of double effect. We will also see that rule-consequentialism already boasts the conceptual resources to bring clarity to the notion of supreme emergency, namely, in the form of an "avoid disasters" clause that triggers in the face of especially catastrophic threats. Lastly, I argue that Walzer's principle of supreme emergency is too restrictive and the typical rule-consequentialist view too permissive when faced with noncompliance. The result of all of this is the beginning of an original, nuanced, and plausible unified view of in bello morality.

‘Knockout Animals’? Knock it off: Vegetarian reasons besides suffering

Arguments for vegetarianism typically appeal to the sheer magnitude of unjustified suffering that factory farming creates. It is natural to suppose that vegetarians would have the rug pulled out from under them should we find a reliable way of circumventing that suffering. An editorial published in The New York Times in early 2010 summarized some interesting new neurological research on lab rats that was able to cancel, it seemed, their ability to feel pain. These animals were called ‘knockout animals’. One apparent application is that we could possibly secure a moral blank check to raise animals in factory farms. In this paper, I outline additional reasons that count in favor of vegetarianism should the vegetarian cause be robbed of its most popular appeal.

Which consequentialism?

You've Earned It!: Is George Sher’s Account of Desert in Wages Defensible?

Desert is a notion ubiquitous in our moral discourse, and its dictates are perhaps most important when dealing with the distribution of material resources. George Sher has provided one account of desert in wages, answering the question: How do workers deserve their wage? Sher relies on the violation of preexisting “independent standards” that dictate how much of a certain good we think people are entitled to in the first place. He argues that the violation of these standards calls for later compensation, either by supplying the agent with an excess of the good they earlier had too little of or by depriving them of a corresponding amount of the good they had too much of. I argue that this formalization of desert is flawed in the abstract, and that it additionally has intuitively unacceptable implications when applied to the concept of wages in particular.

My father and his father and his father: Generational codes in Hooker’s Rule-Consequentialism

Moral progress takes place; moral standards change for the better. Things that were once thought to be morally acceptable are now thought obviously wrong. Making sense of moral progress is not often thought to be a necessary feature of a moral theory. It is not as integral as, say, being internally consistent or sufficiently action-guiding. However, if moral progress posed a problem for a particular theory, and that theory were unable to offer a plausible solution, we might think the theory worse for it.

One theory that, I feel, is uniquely susceptible to this problem is Brad Hooker’s rule-consequentialism. In this paper, I will illustrate how one troubling scenario might arise and what kinds of problems it causes for the theory. I will explore four possible solutions, tentatively endorsing one of them. Though the flaw is not fatal for Hooker’s theory, it is incumbent upon him to provide a plausible solution within his theoretical framework.

Oxfam International

Springer Encyclopedia of Global Justice, 2012

Consumerism

Springer Encyclopedia of Global Justice, 2012

Moral Imperialism

Springer Encyclopedia of Global Justice, 2012

Philosophy of Fascism

This course will explore the philosophical underpinnings of fascism, the 20th century’s unique contribution to political philosophy. What this class (primarily) is: an exploration of the theoretical components of the fascist political philosophy and an analysis of some primary and secondary arguments in its favor. This course is meant to be an objective survey of the philosophical components that combine to create fascism. It is not meant to be either a defense or an attack on these views.

Syllabus: Philosophy of the Internet

This course explores the philosophy of the Internet. We will apply classical philosophical methodologies to the nature and implications of the Internet, examining it from perspectives such as ontology, metaphysics, epistemology, ethics, and politics. We will investigate, among other topics, the nature of cyberspace and digital objects, personal identity online, Internet speech, surveillance, algorithms, and fake news.

Syllabus: Environmental Ethics

This course covers controversies and theories in environmental ethics and environmental justice, two separate but related domains of ethics. Environmental ethics asks the question: How are we allowed to interact with the rest of the cosmos, whether animate or inanimate? Environmental justice asks the question: Given that all of us need certain things from the cosmos to live, how should its finite resources be divided among us?

Syllabus: Technologies and Ethics of War

War is a uniquely human endeavor which has defined the course of humanity since prehistory and, despite the better angels of our nature, remains a grim fixture of the international stage. Yet most of us believe, despite the abject horrors that war always brings with it, that there is such a thing as a morally justified war. When is war just? What are we allowed to do in warfare?

Syllabus: Ethics, Science and Technology

This course explores the intersections of ethics, science, and technology. We will examine such questions as: How do we define technology? Is technology value-neutral or does it have values “built into” it? How does technology change human life for the better (and for the worse)? How should we balance competing values such as safety, efficiency, and freedom? Does technology ameliorate or exacerbate injustice and social inequality? How does technology encourage certain ways of viewing the world, and how does it corrupt our notions of “objectivity”? Is it a problem that STEM fields lack diversity, and, if so, what should we do to address that? Is technology going to eat all of our jobs, and what if it does?

Syllabus: Philosophy of Technology

This course explores the philosophy of technology. We will examine such questions as: How do we define technology? Is technology value-neutral or does it have values “built into” it? Does technology evolve on its own, or does its progression reflect the priorities of some select interest groups? How does technology change human life for the better (and for the worse)? How does technology encourage certain ways of viewing and understanding the world?

Syllabus: War and Morality

Syllabus: Emerging Technologies, New Problems

Syllabus: Existentialism and the Meaning of Life

Syllabus: Contemporary Moral Issues (Applied Ethics)

Syllabus: Argumentative Writing, "Reading, Writing, and Reasoning"

Syllabus: Introduction to Ethics, "What Are Your Reasons?"

Dissertation Abstract: "On Good People: A New Defense of Rule-Consequentialism"
