On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles
Related papers
ARC: Adversarially Robust Control Policies for Autonomous Vehicles
2021 IEEE International Intelligent Transportation Systems Conference (ITSC)
Deep neural networks have demonstrated their capability to learn control policies for a variety of tasks. However, these neural network-based policies have been shown to be susceptible to exploitation by adversarial agents. Therefore, there is a need to develop techniques to learn control policies that are robust against adversaries. We introduce Adversarially Robust Control (ARC), which trains the protagonist policy and the adversarial policy end-to-end on the same loss. The aim of the protagonist is to maximise this loss, whilst the adversary is attempting to minimise it. We demonstrate the proposed ARC training in a highway driving scenario, where the protagonist controls the follower vehicle whilst the adversary controls the lead vehicle. By training the protagonist against an ensemble of adversaries, it learns a significantly more robust control policy, which generalises to a variety of adversarial strategies. The approach is shown to reduce the number of collisions against new adversaries by up to 90.25%, compared to the original policy. Moreover, by utilising an auxiliary distillation loss, we show that the fine-tuned control policy suffers no drop in performance across its original training distribution.
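A minimal sketch of the joint min-max training loop the abstract describes, assuming two small PyTorch policies (follower as protagonist, lead vehicle as adversary) trained on a single differentiable surrogate loss; the network shapes, observations, and objective below are illustrative placeholders, not the paper's actual setup. The ensemble of adversaries and the auxiliary distillation loss mentioned in the abstract are omitted.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the follower (protagonist) and lead-vehicle (adversary)
# policies; shapes, observations, and objective are placeholders.
protagonist = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))

opt_p = torch.optim.Adam(protagonist.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)

def shared_loss(obs):
    """Differentiable surrogate of the single loss both policies are trained on."""
    a_follow = protagonist(obs)   # follower acceleration command
    a_lead = adversary(obs)       # lead-vehicle acceleration command
    # Placeholder coupling of both actions; stands in for the rollout objective.
    return ((a_follow - a_lead) ** 2).mean()

for step in range(1000):
    obs = torch.randn(64, 4)      # fake batch of relative-state observations

    # Protagonist: gradient ascent on the shared loss (it tries to maximise it).
    loss = shared_loss(obs)
    opt_p.zero_grad()
    (-loss).backward()
    opt_p.step()

    # Adversary: gradient descent on the same loss (it tries to minimise it).
    loss = shared_loss(obs)
    opt_a.zero_grad()
    loss.backward()
    opt_a.step()
```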
Are Self-Driving Cars Secure? Evasion Attacks Against Deep Neural Networks for Steering Angle Prediction
2019 IEEE Security and Privacy Workshops (SPW), 2019
Deep Neural Networks (DNNs) have tremendous potential in advancing the vision for self-driving cars. However, the security of DNN models in this context leads to major safety implications and needs to be better understood. We consider the case study of steering angle prediction from camera images, using the dataset from the 2014 Udacity challenge. We demonstrate for the first time adversarial testing-time attacks for this application in both classification and regression settings. We show that minor modifications to the camera image (an L2 distance of 0.82 for one of the considered models) result in misclassification of an image into any class of the attacker's choice. Furthermore, our regression attack results in a significant increase in Mean Square Error (MSE), by a factor of 69 in the worst case.
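A minimal FGSM-style sketch of the kind of test-time regression attack the abstract describes: a single signed-gradient step that pushes the predicted steering angle away from the ground truth, increasing the squared error. The PyTorch model, image size, and epsilon below are placeholder assumptions, not the models or settings used in the paper.

```python
import torch
import torch.nn as nn

def fgsm_regression_attack(model, image, true_angle, epsilon=0.01):
    """One-step L-infinity attack that pushes the predicted steering angle
    away from the ground truth, i.e. increases the squared error."""
    image = image.clone().detach().requires_grad_(True)
    pred = model(image)
    loss = nn.functional.mse_loss(pred, true_angle)
    loss.backward()
    # Ascend the loss and keep pixel values in a valid range.
    adv = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
    return adv.detach()

# Illustrative usage with a dummy CNN regressor (not the models in the paper).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 30 * 30, 1))
image = torch.rand(1, 3, 64, 64)
true_angle = torch.tensor([[0.0]])
adv_image = fgsm_regression_attack(model, image, true_angle)
```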
Vehicle trajectory prediction works, but not everywhere
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Vehicle trajectory prediction is nowadays a fundamental pillar of self-driving cars. Both the industry and research communities have acknowledged the need for such a pillar by running public benchmarks. While state-of-the-art methods are impressive, i.e., they have no off-road prediction, their generalization to cities outside of the benchmark is unknown. In this work, we show that those methods do not generalize to new scenes. We present a novel method that automatically generates realistic scenes that cause state-of-the-art models to go off-road. We frame the problem through the lens of adversarial scene generation. We promote a simple yet effective generative model based on atomic scene generation functions along with physical constraints. Our experiments show that more than 60% of the existing scenes from the current benchmarks can be modified in a way that makes prediction methods fail (predicting off-road). We further show that (i) the generated scenes are realistic since they do exist in the real world, and (ii) they can be used to improve the robustness of existing models by 30-40%. Code is available at https://s-attack.github.io/.
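A schematic sketch, not the authors' method (their code is linked above), of the kind of adversarial scene search the abstract outlines: apply an atomic, physically constrained transform to the lane geometry and keep any plausible variant that makes the predicted trajectory leave the road. The bend transform, curvature constraint, corridor width, and dummy predictor below are all illustrative assumptions.

```python
import numpy as np

def bend_scene(centerline, alpha):
    """Atomic scene transform: smoothly bend the lane centerline.
    `alpha` controls the lateral offset added along the arc length."""
    s = np.linspace(0.0, 1.0, len(centerline))
    bent = centerline.copy()
    bent[:, 1] += alpha * s ** 2          # quadratic lateral offset
    return bent

def physically_plausible(centerline, max_curvature=0.2):
    """Crude physical constraint: reject geometries that bend too sharply."""
    d1 = np.gradient(centerline, axis=0)
    d2 = np.gradient(d1, axis=0)
    curvature = np.abs(d2[:, 1]) / (1e-6 + np.linalg.norm(d1, axis=1) ** 2)
    return curvature.max() < max_curvature

def off_road(prediction, centerline, lane_half_width=2.0):
    """Flag predictions that leave the drivable corridor around the lane."""
    lateral_error = np.abs(prediction[:, 1] - centerline[: len(prediction), 1])
    return bool((lateral_error > lane_half_width).any())

def adversarial_scene_search(predictor, history, centerline, alphas):
    """Grid-search the transform parameter for a plausible scene that
    makes the predictor go off-road."""
    for alpha in alphas:
        scene = bend_scene(centerline, alpha)
        if not physically_plausible(scene):
            continue
        pred = predictor(history, scene)   # (T, 2) future positions
        if off_road(pred, scene):
            return scene, alpha
    return None, None

# Illustrative usage with a dummy predictor that ignores the scene change.
centerline = np.stack([np.linspace(0, 50, 100), np.zeros(100)], axis=1)
dummy_predictor = lambda history, scene: np.stack(
    [np.linspace(0, 50, 100), np.zeros(100)], axis=1)
scene, alpha = adversarial_scene_search(
    dummy_predictor, history=None, centerline=centerline,
    alphas=np.linspace(0.5, 10.0, 20))
```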
Adversarial Driving: Attacking End-to-End Autonomous Driving
2021
As research in deep neural networks has advanced, deep convolutional networks have become feasible for automated driving tasks. In particular, there is an emerging trend of employing end-to-end neural network models for the automation of driving tasks. However, previous research has shown that deep neural network classifiers are vulnerable to adversarial attacks. For regression tasks, however, the effect of adversarial attacks is not as well understood. In this paper, we devise two white-box targeted attacks against end-to-end autonomous driving systems. The driving systems use a regression model that takes an image as input and outputs a steering angle. Our attacks manipulate the behavior of the autonomous driving system by perturbing the input image. Both attacks can be initiated in real-time on CPUs without employing GPUs. The efficiency of the attacks is illustrated using experiments conducted in the Udacity simulator. Demo video: https://youtu.be/I0i8uN2oOP0y.
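A small iterative sketch of a white-box targeted attack of the kind the abstract describes: repeated signed-gradient steps inside an L-infinity ball that pull the predicted steering angle toward an attacker-chosen value. The dummy regressor, image size, target angle, and step sizes are placeholder assumptions rather than the paper's systems or attack parameters.

```python
import torch
import torch.nn as nn

def targeted_steering_attack(model, image, target_angle,
                             epsilon=0.03, step=0.005, iters=10):
    """Iterative L-infinity attack that drives the predicted steering angle
    toward an attacker-chosen target (e.g. a hard left turn)."""
    orig = image.clone().detach()
    adv = orig.clone()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = (model(adv) - target_angle).pow(2).mean()
        grad, = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv - step * grad.sign()                      # move toward the target
            adv = orig + (adv - orig).clamp(-epsilon, epsilon)  # stay in the L-inf ball
            adv = adv.clamp(0.0, 1.0)                           # keep valid pixel values
    return adv.detach()

# Illustrative usage with a dummy regressor (not the systems attacked in the paper).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))
image = torch.rand(1, 3, 64, 64)
adv = targeted_steering_attack(model, image, target_angle=torch.tensor([[-1.0]]))
```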
Attacks on Machine Learning: Adversarial Examples in Connected and Autonomous Vehicles
2019 IEEE International Symposium on Technologies for Homeland Security (HST), 2019
Connected and autonomous vehicles (CAV, a.k.a. driverless cars) offset human response for transportation infrastructure, enhancing traffic efficiency, travel leisure, and road safety. Behind the wheels of these mobile robots lies machine learning (ML) to automate mundane driving tasks and make decisions from situational awareness. Attacking ML, the brain of driverless cars, can cause catastrophes. This paper proposes a novel approach to attack CAV by fooling its ML model. Using adversarial examples in CAVs, the work demonstrates how adversarial machine learning can generate attacks hardly detectable by current ML classifiers for CAV misbehavior detection. First, adversarial datasets are generated by a traditional attack engine, which CAV misbehavior detection ML models can easily detect. Building the attack ML model takes two phases: training and testing. Using supervised learning, Phase I trains the model on the time-series data converted from the adversarial datasets. Phase II tests the model, which leads to the next round of model improvement. The initial round deploys K-Nearest Neighbor (KNN) and Random Forest (RF) algorithms, respectively. The next round, guided by deep learning (DL) models, uses a Logistic Regression (LG) neural network and a Long Short-Term Memory (LSTM) recurrent neural network. The results, in precision-recall (PR) and receiver operating characteristic (ROC) curves, validate the effectiveness of the proposed adversarial ML models. This work reveals the vulnerability in ML. At the same time, it shows the promise of protecting critical infrastructure by studying the opponent's strategies. Future work includes retraining the adversarial ML models with real-world datasets from pilot CAV sites.
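A toy sketch of the first training/testing round the abstract mentions, with K-Nearest Neighbor and Random Forest classifiers over windowed time-series features; the synthetic features and labels below stand in for the paper's adversarial CAV misbehavior datasets, and the later deep learning round (LG/LSTM) is omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for windowed time-series features:
# rows = windows, columns = e.g. mean speed, heading change, position jumps.
rng = np.random.default_rng(0)
X_normal = rng.normal(0.0, 1.0, size=(500, 6))
X_attack = rng.normal(1.5, 1.0, size=(500, 6))   # adversarial/misbehaving windows
X = np.vstack([X_normal, X_attack])
y = np.array([0] * 500 + [1] * 500)

# Phase I: train; Phase II: test, feeding results into the next round.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("RF", RandomForestClassifier(n_estimators=100, random_state=0))]:
    clf.fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]
    print(name, "ROC-AUC:", round(roc_auc_score(y_te, scores), 3))
```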
Evaluating Adversarial Attacks on Driving Safety in Vision-Based Autonomous Vehicles
IEEE Internet of Things Journal, 2021
In recent years, many deep learning models have been adopted in autonomous driving. At the same time, these models introduce new vulnerabilities that may compromise the safety of autonomous vehicles. Specifically, recent studies have demonstrated that adversarial attacks can cause a significant decline in detection precision of deep learning-based 3D object detection models. Although driving safety is the ultimate concern for autonomous driving, there is no comprehensive study on the linkage between the performance of deep learning models and the driving safety of autonomous vehicles under adversarial attacks. In this paper, we investigate the impact of two primary types of adversarial attacks, perturbation attacks and patch attacks, on the driving safety of vision-based autonomous vehicles rather than the detection precision of deep learning models. In particular, we consider two state-of-the-art models in vision-based 3D object detection, Stereo R-CNN and DSGN. To evaluate driving safety, we propose an end-to-end evaluation framework with a set of driving safety performance metrics. By analyzing the results of our extensive evaluation experiments, we find that (1) the attack's impact on the driving safety of autonomous vehicles and the attack's impact on the precision of 3D object detectors are decoupled, and (2) the DSGN model demonstrates stronger robustness to adversarial attacks than the Stereo R-CNN model. In addition, we further investigate the causes behind the two findings with an ablation study. The findings of this paper provide a new perspective to evaluate adversarial attacks and guide the selection of deep learning models in autonomous driving.
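A schematic sketch of the decoupling the abstract reports: an evaluation that tracks a model-level metric (detection precision) separately from a safety-level metric (collision rate) across scenarios. The ScenarioResult record, the two metrics, and the numbers are hypothetical stand-ins for the paper's end-to-end framework and results.

```python
from dataclasses import dataclass

@dataclass
class ScenarioResult:
    true_positives: int        # correct detections from the 3D detector
    predicted_positives: int   # all detections the detector produced
    collided: bool             # did the resulting trajectory hit anything?

def detection_precision(results):
    tp = sum(r.true_positives for r in results)
    pred = sum(r.predicted_positives for r in results)
    return tp / max(pred, 1)

def collision_rate(results):
    return sum(r.collided for r in results) / max(len(results), 1)

# Hypothetical numbers only: precision drops sharply under attack while the
# collision rate barely moves, i.e. the two metrics can be decoupled.
clean = [ScenarioResult(9, 10, False) for _ in range(50)]
attacked = ([ScenarioResult(4, 10, False) for _ in range(45)] +
            [ScenarioResult(4, 10, True) for _ in range(5)])

print("precision clean/attacked:",
      detection_precision(clean), detection_precision(attacked))
print("collision rate clean/attacked:",
      collision_rate(clean), collision_rate(attacked))
```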
2018 IEEE Intelligent Vehicles Symposium (IV), 2018
To improve efficiency and reduce failures in autonomous vehicles, research has focused on developing robust and safe learning methods that take into account disturbances in the environment. Existing literature in robust reinforcement learning poses the learning problem as a two-player game between the autonomous system and disturbances. This paper examines two different algorithms to solve the game, Robust Adversarial Reinforcement Learning and Neural Fictitious Self Play, and compares their performance on an autonomous driving scenario. We extend the game formulation to a semi-competitive setting and demonstrate that the resulting adversary better captures meaningful disturbances that lead to better overall performance. The resulting robust policy exhibits improved driving efficiency while effectively reducing collision rates compared to baseline control policies produced by traditional reinforcement learning methods.
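A bare-bones sketch of the alternating, semi-competitive two-player training such formulations share: one player is frozen while the other improves, and the adversary's reward is not purely the negation of the protagonist's. The toy car-following dynamics, reward shaping, and random-search "learner" below are placeholder assumptions, not RARL or NFSP as evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta_p, theta_a, steps=50):
    """Toy car-following rollout: protagonist (follower) and adversary (lead)
    are scalar linear policies acting on the gap; returns both players' returns."""
    gap, ret_p, ret_a = 10.0, 0.0, 0.0
    for _ in range(steps):
        a_p = float(np.clip(theta_p * gap, -3.0, 3.0))   # follower acceleration
        a_a = float(np.clip(theta_a * gap, -3.0, 3.0))   # lead-vehicle acceleration
        gap += 0.1 * (a_a - a_p)
        crash = gap < 1.0
        ret_p += -10.0 if crash else 0.1
        # Semi-competitive adversary: rewarded for crashes but penalised for
        # extreme (implausible) behaviour, rather than purely zero-sum.
        ret_a += (10.0 if crash else 0.0) - 0.05 * a_a ** 2
        if crash:
            break
    return ret_p, ret_a

def improve(theta, objective, sigma=0.1, trials=16):
    """Placeholder learner: random-search hill climbing on a scalar parameter."""
    candidates = theta + sigma * rng.standard_normal(trials)
    scores = [objective(c) for c in candidates]
    best = candidates[int(np.argmax(scores))]
    return best if max(scores) > objective(theta) else theta

theta_p, theta_a = 0.0, 0.0
for _ in range(20):
    # Alternating updates: train one player while the other is frozen.
    theta_p = improve(theta_p, lambda t: rollout(t, theta_a)[0])
    theta_a = improve(theta_a, lambda t: rollout(theta_p, t)[1])
```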
Adversarial Attacks and Defense Technologies on Autonomous Vehicles: A Review
Applied Computer Systems
In recent years, various domains have been influenced by the rapid growth of machine learning. Autonomous driving is an area that has developed tremendously in parallel with the advancement of machine learning. In autonomous vehicles, various machine learning components are used, such as traffic light recognition, traffic sign recognition, speed limiting, and pathfinding. For most of these components, computer vision technologies with deep learning, such as object detection, semantic segmentation, and image classification, are used. However, these machine learning models are vulnerable to targeted tensor perturbations called adversarial attacks, which limit the performance of the applications. Therefore, implementing defense models against adversarial attacks has become an increasingly critical research area. The paper aims at summarising the latest adversarial attacks and defense models introduced in the field of autonomous driving with machine learning technologies up until mid-2021.
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and the Way Forward
IEEE Communications Surveys & Tutorials, 2020
Connected and autonomous vehicles (CAVs) will form the backbone of future next-generation intelligent transportation systems (ITS) providing travel comfort, road safety, along with a number of value-added services. Such a transformation, which will be fuelled by concomitant advances in technologies for machine learning (ML) and wireless communications, will enable a future vehicular ecosystem that is better featured and more efficient. However, there are lurking security problems related to the use of ML in such a critical setting where an incorrect ML decision may not only be a nuisance but can lead to loss of precious lives. In this paper, we present an in-depth overview of the various challenges associated with the application of ML in vehicular networks. In addition, we formulate the ML pipeline of CAVs and present various potential security issues associated with the adoption of ML methods. In particular, we focus on the perspective of adversarial ML attacks on CAVs and outline a solution to defend against adversarial attacks in multiple settings.