Crashing into the Unknown: An Examination of Crash-Optimization Algorithms Through the Two Lanes of Ethics and Law
Related papers
Sue My Car Not Me: Products Liability and Accidents Involving Autonomous Vehicles
Autonomous vehicles will revolutionize society in the near future. Computers, however, are not perfect, and accidents will occur while the vehicle is in autonomous mode. This Article answers the question of who should be liable when an accident is caused in autonomous mode. This Article addresses the liability of autonomous vehicles by examining products liability through the use of four scenarios: the Distracted Driver; the Diminished Capabilities Driver; the Disabled Driver; and the Attentive Driver. Based on those scenarios, this Article suggests that the autonomous technology manufacturer should be liable for accidents caused in autonomous mode because the autonomous vehicle probably caused the accident. Liability should shift back to the "driver" depending on the nature of the driver and the ability of that person to prevent the accident. Thus, this Article argues that an autonomous vehicle manufacturer should be liable for accidents caused in autonomous mode for the Disabled Driver and partially for the Diminished Capabilities Driver and the Distracted Driver. This Article argues the Attentive Driver should be liable for most accidents caused in autonomous vehicles. Currently, products liability does not allocate the financial responsibility for an accident to the party that is responsible for it, and this Article suggests that courts and legislatures need to address tort liability for accidents caused in autonomous mode to ensure that the responsible party bears the cost of those accidents.
Three Types of Structural Discrimination Introduced by Autonomous Vehicles
The advent of autonomous vehicles has been hailed by commentators as an improvement in traffic safety, promising to reduce the overall number of road accidents as the technology matures. Some advocates even hint at a moral imperative to structure incentives and smooth over barriers in order to induce widespread use of autonomous vehicles and so actualize this potential. This orientation towards safety concerns, however, foregrounds the debate on crash-optimization and imports trolley-problem thought experiments into the question of autonomous vehicles.
2017
Autonomous vehicle technology puts significant stress on the issue of safety. Although the use of driverless cars raises considerable expectations of a general improvement in safety, new safety challenges stem from the changing context. On the one hand, the paper addresses regulatory issues raised by the impact of technological change, particularly standardization problems. On the other hand, it investigates liability questions, which may be today's main legal obstacle to the wide adoption of autonomous cars, especially as autonomous cars might upend existing approaches to vehicular liability. The aim of this paper is to scrutinize the basic problems in both fields. We provide what, given the current state of the art, appear to be reasonable recommendations from the perspective of technological regulation and law, in order to deal with the main problems that might hamper the development of autonomous transport technology.
Driving Into the Unknown: Examining the Crossroads of Criminal Law and Autonomous Vehicles
This Article examines the application of criminal law to autonomous vehicles. The Article begins by providing a background of autonomous vehicles. The Article then examines the general purposes of punishment and traffic laws. The Article next analyzes criminal responsibility for the operator with regard to rules-of-the-road violations, driving under the influence, reckless driving, and vehicular manslaughter. The Article then examines location-specific crimes, physical interference with a vehicle, and hacking. This Article argues that the law should be amended because the current application of criminal and traffic laws to autonomous vehicles will make programming the vehicles challenging and enforcing the laws difficult.
THE CHALLENGES OF AUTONOMOUS MOTOR VEHICLES FOR QUEENSLAND ROAD AND CRIMINAL LAWS
This article examines the challenges of autonomous motor vehicles for Queensland road and criminal laws. Autonomous vehicles are motor vehicles in which driver decision making has been augmented or replaced by intelligent systems. Proponents of autonomous vehicles argue that they will virtually eliminate road accidents, boost productivity and provide significant environmental benefits. The key issue is that autonomous vehicles challenge the notion of human responsibility which lies at the core of Queensland's road and criminal laws. The road rules are predicated on a driver in control of the vehicle, the intoxication regime is concerned with the person in charge of the vehicle, and the dangerous driving offences require a person who operates a vehicle. Notwithstanding this challenge, much of Queensland's law is adaptable to autonomous vehicles. However, there are some identifiable anomalies that require reform.
I INTRODUCTION
This article examines the challenges of autonomous vehicles for Queensland road and criminal laws. The challenge of autonomous vehicles is that, by replacing or augmenting driver decision making, they render inadequate existing laws that assume driver responsibility. Nevertheless, it is argued that much of Queensland's road and criminal laws are adaptable to autonomous vehicles. However, there are some clear anomalies that will need to be addressed to effectively maximise the safety, economic and environmental benefits of autonomous vehicles. Furthermore, more detailed reforms will be needed as autonomous vehicles become more widespread. The argument proceeds in three sections. The first section locates autonomous vehicles within a trajectory of innovation that has steadily reduced driver involvement with primary vehicle controls. The technical details of emerging autonomous vehicles are canvassed and the safety, economic and environmental drivers for adoption are outlined. This section concludes by identifying different 'levels' of automation for autonomous vehicles, which form the basis for the second section. It also argues why a focus on Queensland law is appropriate and desirable. The second section audits the challenges of the different levels of automation for Queensland's road and criminal laws, focusing on the substantive provisions of the rules of the road, the intoxication regime and the dangerous driving offences.
Why Ethics Matters for Autonomous Cars
If motor vehicles are to be truly autonomous and able to operate responsibly on our roads, they will need to replicate – or do better than – the human decision-making process. But some decisions are more than just a mechanical application of traffic laws and plotting a safe path. They seem to require a sense of ethics, and this is a notoriously difficult capability to reduce into algorithms for a computer to follow. This chapter will explain why ethics matters for autonomous road vehicles, looking at the most urgent area of their programming.
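To make concrete why reducing ethics to an algorithm is difficult, the following minimal sketch (not taken from the chapter; the outcome categories, weights, and maneuver names are hypothetical assumptions for illustration) shows one naive way a crash-optimization routine could score candidate maneuvers. The point is that any such scoring forces explicit numeric trade-offs between property damage and injury, which is exactly the ethical judgment the chapter identifies as hard to encode.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the outcome categories and weights below are
# assumptions made for this sketch, not values proposed by any paper listed here.

@dataclass
class Maneuver:
    name: str
    expected_property_damage: float   # expected repair cost in dollars
    expected_minor_injuries: float    # expected number of minor injuries
    expected_severe_injuries: float   # expected number of severe injuries

# Choosing these weights *is* the ethical decision the chapter describes:
# any numeric trade-off between property and persons encodes a moral judgment.
WEIGHTS = {
    "property": 1.0,           # dollars counted at face value
    "minor_injury": 50_000.0,  # hypothetical cost assigned to a minor injury
    "severe_injury": 5_000_000.0,
}

def expected_cost(m: Maneuver) -> float:
    """Score a candidate maneuver by its weighted expected harm."""
    return (WEIGHTS["property"] * m.expected_property_damage
            + WEIGHTS["minor_injury"] * m.expected_minor_injuries
            + WEIGHTS["severe_injury"] * m.expected_severe_injuries)

def choose_maneuver(candidates: list[Maneuver]) -> Maneuver:
    """Pick the maneuver with the lowest weighted expected harm."""
    return min(candidates, key=expected_cost)

if __name__ == "__main__":
    options = [
        Maneuver("brake in lane", 20_000, 0.3, 0.01),
        Maneuver("swerve onto shoulder", 5_000, 0.1, 0.05),
    ]
    best = choose_maneuver(options)
    print(f"Selected: {best.name} (expected cost {expected_cost(best):,.0f})")
```

Even this toy version exposes the design choice at issue: the weights are not engineering parameters but contested moral claims, and changing them changes which maneuver the vehicle "prefers."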
Idaho Law Journal (to be published), 2017
The development of driving support and cruise assist systems in the automotive industry has been astonishing, accelerating dramatically in the last ten years: since the first DARPA Urban Challenge, field tests have multiplied in the US – in California alone, there are currently 39 companies testing self-driving cars – and the once remote prospect of "driverless" vehicles coming to market in the near future might not be so far from reality. A broad range of scientific studies suggests the implementation of fully automated driving systems may come soon. Highly Automated Vehicles (HAVs) are likely to profoundly transform our social habits and to revolutionize our way of interacting with the surrounding environment; in addition, legal scholars have already outlined how automated vehicles create a multi-level regulatory challenge, capable of impacting different areas of the law. One of the areas where research is much needed is tort liability: in addressing the regulation of accidents caused by automated cars, jurists must assess whether tort liability rules – as they are currently shaped – are suited to govern the "car minus driver" complexity, while simultaneously holding on to their theoretical basis. If the current framework proves inadequate and irreparably "out of tune" with the new circulation dynamics, the only alternative will be to amend or renew it. In light of these considerations, our aim is to present a hypothetical system for liability arising from road accidents caused by driverless cars. This model should be interpreted as a theoretical guideline, to be adapted and articulated in accordance with the specific attributes of each legal system. Consistently with this premise, the article is outlined as follows: in Part I we set out (and argue in favour of) some assumptions on which the analysis rests. The main postulates that we embrace are that: a) we will ultimately reach a degree of technology that is capable of entirely substituting the human driver on the road; b) a fully automated driving system will be able to manage the "behaviour" of the vehicle more safely than its "organic" counterpart; and c) the most promising strategy in addressing HAV regulation is to focus primarily on investigating the risks involved in the circulation of "totally" automated cars – where the human driver has no role – rather than addressing already existing (or forthcoming) intermediate support technologies. In Part II, we present the main options available to lawmakers in allocating liability for road accidents caused by HAVs. Four leading "players" have traditionally been considered – in the academic debate as well as in the regulatory proposals enacted by governmental and independent bodies – "potentially responsible" in case of road accidents involving HAVs: the driver of the car; its owner; the government (or, broadly speaking, the general public); and the manufacturer of the vehicle. After analysing each potential figure, we conclude that the manufacturer is the most appropriate party to be held liable in the case of road accidents involving driverless cars.
Part III of the article investigates, on the basis of the background established in Part II, the solutions most widely proposed for regulating a hypothetical liability system for manufacturers: on one hand, we consider the role that product liability rules can play, devoting our attention both to EU and to US regulation; on the other hand, we evaluate the impact of different strict liability options. As for the latter, we proceed to a specific investigation of the hypothetical system proposed by Kenneth Abraham and Robert Rabin in their article "Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era" (2017). In Part IV, after laying the groundwork through the analysis of previous solutions for regulating tort liability in accidents caused by Highly Automated Vehicles, and after underlining how none of them seems entirely satisfactory, we present our proposal for allocating risks in the driverless car world. We illustrate, in particular, how a "two-step" system – operating through a negligence assessment and a reward fund – represents an optimal solution for mediating among the conflicting needs in the regulation of driverless vehicles. In Part V, finally, we draw some conclusions on the basis of the various aspects addressed in our analysis, and present some alternatives we considered (and excluded) in developing our system.
Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis
A number of companies including Google and BMW are currently working on the development of autonomous cars. But if fully autonomous cars are going to drive on our roads, it must be decided who is to be held responsible in case of accidents. This involves not only legal questions, but also moral ones. The first question discussed is whether we should try to design the tort liability for car manufacturers in a way that will help along the development and improvement of autonomous vehicles. In particular, Patrick Lin’s concern that any security gain derived from the introduction of autonomous cars would constitute a trade-off in human lives will be addressed. The second question is whether it would be morally permissible to impose liability on the user based on a duty to pay attention to the road and traffic and to intervene when necessary to avoid accidents. Doubts about the moral legitimacy of such a scheme are based on the notion that it is a form of defamation if a person is held to blame for causing the death of another by his inattention if he never had a real chance to intervene. Therefore, the legitimacy of such an approach would depend on the user having an actual chance to do so. The last option discussed in this paper is a system in which a person using an autonomous vehicle has no duty (and possibly no way) of interfering, but is still held (financially, not criminally) responsible for possible accidents. Two ways of doing so are discussed, but only one is judged morally feasible.
Autonomous Vehicle Technology: A Guide for Policymakers