Are Self-Driving Cars Secure? Evasion Attacks Against Deep Neural Networks for Steering Angle Prediction
Related papers
Adversarial Driving: Attacking End-to-End Autonomous Driving
2021
As research in deep neural networks has advanced, deep convolutional networks have become feasible for automated driving tasks. In particular, there is an emerging trend of employing end-to-end neural network models for driving automation. However, previous research has shown that deep neural network classifiers are vulnerable to adversarial attacks, while for regression tasks the effect of adversarial attacks is not as well understood. In this paper, we devise two white-box targeted attacks against end-to-end autonomous driving systems. The driving systems use a regression model that takes an image as input and outputs a steering angle. Our attacks manipulate the behavior of the autonomous driving system by perturbing the input image. Both attacks can be run in real time on CPUs without employing GPUs. The efficiency of the attacks is illustrated using experiments conducted in the Udacity simulator. Demo video: https://youtu.be/I0i8uN2oOP0y.
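As a rough illustration of the general idea (a minimal sketch, not the specific attacks devised in the paper), the snippet below applies a single FGSM-style gradient step to a hypothetical PyTorch image-to-steering regression model, pushing its output toward an attacker-chosen angle; `model`, `target_angle`, and `epsilon` are assumed placeholders.

```python
# Minimal sketch of a white-box targeted attack on an image-to-steering
# regression model (a single FGSM-style step). `model`, `target_angle`, and
# `epsilon` are hypothetical placeholders, not the paper's exact attacks.
import torch
import torch.nn.functional as F

def targeted_steering_attack(model, image, target_angle, epsilon=0.01):
    """Perturb `image` so the predicted steering angle moves toward `target_angle`.

    model:        callable mapping a (1, C, H, W) float tensor in [0, 1] to a steering angle
    image:        input camera frame as a (1, C, H, W) tensor
    target_angle: attacker-chosen steering value (e.g., a hard right turn)
    epsilon:      L-infinity perturbation budget
    """
    image = image.clone().detach().requires_grad_(True)
    pred = model(image)
    # Loss measures distance to the attacker's target; minimizing it steers the output.
    loss = F.mse_loss(pred, torch.full_like(pred, target_angle))
    loss.backward()
    # Step against the gradient of the targeted loss, then clamp to a valid image range.
    adv = image - epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```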
Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles
IEEE Internet of Things Journal, 2021
In recent years, many deep learning models have been adopted in autonomous driving. At the same time, these models introduce new vulnerabilities that may compromise the safety of autonomous vehicles. Specifically, recent studies have demonstrated that adversarial attacks can cause a significant decline in detection precision of deep learning-based 3D object detection models. Although driving safety is the ultimate concern for autonomous driving, there is no comprehensive study on the linkage between the performance of deep learning models and the driving safety of autonomous vehicles under adversarial attacks. In this paper, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles rather than the detection precision of deep learning models. In particular, we consider two state-of-the-art models in vision-based 3D object detection, Stereo R-CNN and DSGN. To evaluate driving safety, we propose an end-to-end evaluation framework with a set of driving safety performance metrics. By analyzing the results of our extensive evaluation experiments, we find that (1) the attack's impact on the driving safety of autonomous vehicles and the attack's impact on the precision of 3D object detectors are decoupled, and (2) the DSGN model demonstrates stronger robustness to adversarial attacks than the Stereo R-CNN model. In addition, we further investigate the causes behind the two findings with an ablation study. The findings of this paper provide a new perspective to evaluate adversarial attacks and guide the selection of deep learning models in autonomous driving.
Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models
2019 IEEE International Conference on Embedded Software and Systems (ICESS), 2019
Recent advances in machine learning, especially techniques such as deep neural networks, are promoting a range of high-stakes applications, including autonomous driving, which often relies on deep learning for perception. While deep learning for perception has been shown to be vulnerable to a host of subtle adversarial manipulations of images, end-to-end demonstrations of successful attacks, which manipulate the physical environment and result in physical consequences, are scarce. Moreover, attacks typically involve carefully constructed adversarial examples at the level of pixels. We demonstrate the first end-to-end attacks on autonomous driving in simulation, using simple physically realizable attacks: the painting of black lines on the road. These attacks target deep neural network models for end-to-end autonomous driving control. A systematic investigation shows that such attacks are surprisingly easy to engineer, and we describe scenarios (e.g., right turns) in which they are highly effective, and others that are less vulnerable (e.g., driving straight). Further, we use network deconvolution to demonstrate that the attacks succeed by inducing activation patterns similar to entirely different scenarios used in training.
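To make the "painted black lines" idea concrete, here is an illustrative sketch (assumed, not the paper's code) that blackens a thin strip on a simulated camera frame and measures how much a hypothetical steering model's output shifts; the line geometry is arbitrary, whereas the paper searches such patterns systematically.

```python
# Illustrative sketch (not the paper's code): blacken a thin strip on a simulated
# camera frame and measure how much a hypothetical steering model's output shifts.
# The line endpoints and width are arbitrary; the paper explores such patterns
# systematically across driving scenarios.
import numpy as np

def paint_black_line(frame, x0, y0, x1, y1, width=4):
    """Return a copy of `frame` (H, W, 3 uint8) with a black line from (x0, y0) to (x1, y1)."""
    out = frame.copy()
    n = max(abs(x1 - x0), abs(y1 - y0)) + 1
    xs = np.linspace(x0, x1, n).astype(int)
    ys = np.linspace(y0, y1, n).astype(int)
    half = width // 2
    for x, y in zip(xs, ys):
        out[max(y - half, 0):y + half + 1, max(x - half, 0):x + half + 1] = 0
    return out

def steering_deviation(model, frame, adv_frame):
    """Absolute change in the predicted steering angle caused by the painted line."""
    return abs(float(model(adv_frame)) - float(model(frame)))
```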
Adversarial Attacks and Defense Technologies on Autonomous Vehicles: A Review
Applied Computer Systems
In recent years, various domains have been influenced by the rapid growth of machine learning. Autonomous driving is an area that has developed tremendously in parallel with the advancement of machine learning. Autonomous vehicles use various machine learning components, such as traffic light recognition, traffic sign recognition, speed limiting, and pathfinding. Most of these components rely on computer vision technologies with deep learning, such as object detection, semantic segmentation, and image classification. However, these machine learning models are vulnerable to targeted perturbations of input tensors, known as adversarial attacks, which limit the performance of the applications. Therefore, implementing defense models against adversarial attacks has become an increasingly critical research area. The paper aims to summarise the latest adversarial attacks and defense models introduced in the field of autonomous driving with machine learning technologies up until mid-2021.
Attacks on Machine Learning: Adversarial Examples in Connected and Autonomous Vehicles
2019 IEEE International Symposium on Technologies for Homeland Security (HST), 2019
Connected and autonomous vehicles (CAVs, a.k.a. driverless cars) offset human response in the transportation infrastructure, enhancing traffic efficiency, travel leisure, and road safety. Behind the wheels of these mobile robots lies machine learning (ML), which automates mundane driving tasks and makes decisions from situational awareness. Attacking ML, the brain of driverless cars, can cause catastrophes. This paper proposes a novel approach to attacking CAVs by fooling their ML models. Using adversarial examples in CAVs, the work demonstrates how adversarial machine learning can generate attacks that are hardly detectable by current ML classifiers for CAV misbehavior detection. First, adversarial datasets are generated by a traditional attack engine, which CAV misbehavior detection ML models can easily detect. Building the attack ML model then takes two phases: training and testing. Using supervised learning, Phase I trains the model on time-series data converted from the adversarial datasets. Phase II tests the model, which leads to the next round of model improvement. The initial round deploys K-Nearest Neighbor (KNN) and Random Forest (RF) algorithms, respectively. The next round, guided by deep learning (DL) models, uses a Logistic Regression (LG) neural network and a Long Short-Term Memory (LSTM) recurrent neural network. The results, presented as precision-recall (PR) and receiver operating characteristic (ROC) curves, validate the effectiveness of the proposed adversarial ML models. This work reveals the vulnerability in ML. At the same time, it shows the promise of protecting critical infrastructure by studying the opponent's strategies. Future work includes retraining the adversarial ML models with real-world datasets from pilot CAV sites.
On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Trajectory prediction is a critical component for autonomous vehicles (AVs) to perform safe planning and navigation. However, few studies have analyzed the adversarial robustness of trajectory prediction or investigated whether the worst-case prediction can still lead to safe planning. To bridge this gap, we study the adversarial robustness of trajectory prediction models by proposing a new adversarial attack that perturbs normal vehicle trajectories to maximize the prediction error. Our experiments on three models and three datasets show that the adversarial prediction increases the prediction error by more than 150%. Our case studies show that if an adversary drives a vehicle close to the target AV following the adversarial trajectory, the AV may make an inaccurate prediction and even make unsafe driving decisions. We also explore possible mitigation techniques via data augmentation and trajectory smoothing.
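A sketch of the core optimization behind such an attack, under simplifying assumptions: projected gradient ascent on the prediction error with respect to a bounded perturbation of the adversary's observed trajectory. The `predictor` interface and the L-infinity bound below are hypothetical, and the physical-plausibility constraints used in the paper are omitted.

```python
# Sketch of the core idea under simplifying assumptions: projected gradient ascent
# on the prediction error with respect to a bounded perturbation of the adversary's
# observed trajectory. `predictor` is a hypothetical differentiable model; the
# physical-plausibility constraints used in the paper are omitted.
import torch

def attack_history(predictor, history, future_gt, bound=0.5, steps=20, lr=0.05):
    """history: (T_obs, 2) observed x/y positions; future_gt: (T_pred, 2) ground-truth future."""
    delta = torch.zeros_like(history, requires_grad=True)
    for _ in range(steps):
        pred = predictor(history + delta)                         # (T_pred, 2)
        err = torch.linalg.norm(pred - future_gt, dim=-1).mean()  # average displacement error
        err.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()   # ascend the prediction error
            delta.clamp_(-bound, bound)       # keep the perturbation small
            delta.grad.zero_()
    return (history + delta).detach()
```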
IEEE Communications Surveys & Tutorials, 2020
Connected and autonomous vehicles (CAVs) will form the backbone of future next-generation intelligent transportation systems (ITS) providing travel comfort, road safety, along with a number of value-added services. Such a transformation-which will be fuelled by concomitant advances in technologies for machine learning (ML) and wireless communications-will enable a future vehicular ecosystem that is better featured and more efficient. However, there are lurking security problems related to the use of ML in such a critical setting where an incorrect ML decision may not only be a nuisance but can lead to loss of precious lives. In this paper, we present an in-depth overview of the various challenges associated with the application of ML in vehicular networks. In addition, we formulate the ML pipeline of CAVs and present various potential security issues associated with the adoption of ML methods. In particular, we focus on the perspective of adversarial ML attacks on CAVs and outline a solution to defend against adversarial attacks in multiple settings.
Reconstruction-Based Adversarial Attack Detection in Vision-Based Autonomous Driving Systems
Machine Learning and Knowledge Extraction
The perception system is a safety-critical component that directly impacts the overall safety of autonomous driving systems (ADSs). It is imperative to ensure the robustness of the deep-learning models used in the perception system. However, studies have shown that these models are highly vulnerable to adversarial perturbation of the input data. Existing works have mainly focused on studying the impact of these adversarial attacks on classification rather than regression models. Therefore, this paper first introduces two generalized methods for perturbation-based attacks: (1) we use naturally occurring noises to create perturbations in the input data, and (2) we introduce modified Square, HopSkipJump, and decision-based/boundary attacks against the regression models used in ADSs. Then, we propose a deep-autoencoder-based adversarial attack detector. In addition to offline evaluation metrics (e.g., F1 score and precision), we introduce an online evaluation framework to evaluate t...
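A minimal sketch of the reconstruction-error principle behind such detectors, assuming a PyTorch convolutional autoencoder trained on clean frames; the architecture and thresholding below are placeholders rather than the paper's exact detector.

```python
# Minimal sketch of the reconstruction-error principle: a convolutional autoencoder
# trained on clean driving frames flags inputs it reconstructs poorly. The
# architecture and threshold below are placeholders, not the paper's exact detector.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def is_adversarial(autoencoder, frame, threshold):
    """Flag `frame` (1, 3, H, W, values in [0, 1]) if its reconstruction error is high.

    `threshold` would be calibrated offline, e.g. as a high percentile of the
    reconstruction error on clean validation frames.
    """
    with torch.no_grad():
        err = torch.mean((autoencoder(frame) - frame) ** 2).item()
    return err > threshold
```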
Adversarial Sensor Attack on LiDAR-based Perception in Autonomous Driving
Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, 2019
In Autonomous Vehicles (AVs), one fundamental pillar is perception, which leverages sensors like cameras and LiDARs (Light Detection and Ranging) to understand the driving environment. Due to its direct impact on road safety, multiple prior efforts have been made to study the security of perception systems. In contrast to prior work that concentrates on camera-based perception, in this work we perform the first security study of LiDAR-based perception in AV settings, which is highly important but unexplored. We consider LiDAR spoofing attacks as the threat model and set the attack goal as spoofing obstacles close to the front of a victim AV. We find that blindly applying LiDAR spoofing is insufficient to achieve this goal due to the machine learning-based object detection process. Thus, we then explore the possibility of strategically controlling the spoofed attack to fool the machine learning model. We formulate this task as an optimization problem and design modeling methods for the input perturbation function and the objective function. We also identify the inherent limitations of directly solving the problem using optimization and design an algorithm that combines optimization and global sampling, which improves the attack success rates to around 75%. As a case study to understand the attack impact at the AV driving decision level, we construct and evaluate two attack scenarios that may damage road safety and mobility. We also discuss defense directions at the AV system, sensor, and machine learning model levels.
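The combination of optimization and global sampling can be sketched, very loosely, as a coarse random search over spoofed-point poses followed by local refinement against a detection-confidence objective; the `fake_obstacle_confidence` function and pose ranges below are purely hypothetical stand-ins for the paper's modeled objective and constraints.

```python
# Very loose conceptual sketch of combining global sampling with local optimization:
# coarsely sample candidate poses for an injected point cluster, keep the best under
# a detection-confidence objective, then refine locally. `fake_obstacle_confidence`
# and the pose ranges are hypothetical stand-ins for the paper's modeled objective.
import numpy as np

def optimize_spoof(fake_obstacle_confidence, n_samples=200, n_refine=50, sigma=0.1, seed=0):
    """Search over an (x, y, yaw) pose for the spoofed point cluster in front of the victim AV."""
    rng = np.random.default_rng(seed)
    # Global sampling stage: coarse random search over plausible near-front poses.
    candidates = rng.uniform([-2.0, 4.0, -np.pi], [2.0, 10.0, np.pi], size=(n_samples, 3))
    scores = np.array([fake_obstacle_confidence(c) for c in candidates])
    best, best_score = candidates[scores.argmax()], scores.max()
    # Local refinement stage: small Gaussian perturbations around the best pose so far.
    for _ in range(n_refine):
        cand = best + rng.normal(0.0, sigma, size=3)
        score = fake_obstacle_confidence(cand)
        if score > best_score:
            best, best_score = cand, score
    return best
```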