A Survey on Data-Driven Scenario Generation for Automated Vehicle Testing
Related papers
Traffic Scenarios for Automated Vehicle Testing: A Review of Description Languages and Systems
Machines, 2021
Testing and validation of the functionalities and safety of automated vehicles has shifted from a distance-based to a scenario-based method over the past decade. A number of domain-specific languages and systems have been developed to support scenario-based testing. The aim of this paper is to review and compare the features and characteristics of the major scenario description languages and systems (SDLSs). Each is designed for different purposes and with different goals; therefore, each has its strengths and weaknesses. Their characteristics are highlighted with a nontrivial example traffic scenario that we designed. We also discuss some directions for further development and research on these SDLSs.
SceML: a graphical modeling framework for scenario-based testing of autonomous vehicles
2020
Ensuring the functional correctness and safety of autonomous vehicles is a major challenge for the automotive industry. However, exhaustive physical test drives are not feasible, as billions of driven kilometers would be required to obtain reliable results. Scenario-based testing is an approach to tackle this problem and reduce necessary test drives by replacing driven kilometers with simulations of relevant or interesting scenarios. These scenarios can be generated or extracted from recorded data with machine learning algorithms or created by experts. In this paper, we propose a novel graphical scenario modeling language. The graphical framework allows experts to create new scenarios or review ones designed by other experts or generated by machine learning algorithms. The scenario description is modeled as a graph and based on behavior trees. It supports different abstraction levels of scenario description during software and test development. Additionally, the graph-based structur...
Repurposing Microscopic Driver Modeling for Scenario Generation
When Elaine Herzberg was killed while crossing a road with her bicycle in Tempe, Arizona, confidence in autonomous vehicle technology was at an all-time high. In its accident report, the National Transportation Safety Board (NTSB) found that the vehicle, operating in autonomous mode, had failed to identify Herzberg until 1.2 seconds before impact, and that the system design did not include consideration for jaywalking pedestrians [1]. The accident created an uproar among the public and made AV makers rethink testing beta software on actual roads. Fast forward to 2021: big tech companies and car manufacturers are announcing deployments of fully autonomous driving systems, yet there is no government regulation or other regulatory body to verify and validate the safety of these technologies. Since a market launch without safety assurance would not be acceptable to society or lawmakers, considerable time and resources have been invested in AV safety assessment in recent years.

There are several motivations behind the introduction of AVs, including road safety, driving comfort, energy efficiency, and broader mobility access [2]. According to a National Highway Traffic Safety Administration (NHTSA) report, 94% of serious vehicle crashes in the US are caused by human factors [3]. AVs promise to significantly reduce this public health crisis by removing many of the mistakes human drivers recurrently make. However, many challenges remain, including the performance of the AV's perception system, safety validation, legal and ethical issues, and human-machine interaction. Between September 2014 and January 2017, 11 suppliers and manufacturers reported 26 crashes while testing self-driving technology on public roads in California [4]. Before AVs can be deployed at scale, traffic safety issues related to automation need to be adequately addressed to avoid unacceptable situations like the one that happened in Arizona.
The Society of Automotive Engineers (SAE) defines six levels of driving automation, ranging from level 0 (fully manual) to level 5 (fully autonomous) [5]. "Driving automation" refers to both Advanced Driver Assistance Systems (ADAS) and Automated Driving Systems (ADS). ADAS encompasses features such as cruise control and adaptive cruise control; these systems support human drivers and enhance safety. Next-generation ADAS (e.g., traffic jam chauffeur, automated valet parking) and ADS may ultimately be able to operate a vehicle without human intervention. SAE classifies ADAS features as levels 0-2 and ADS as levels 3-5. Verified testing methods accepted by all stakeholders are already in place for ADAS. The safety validation of next-generation ADAS and ADS (SAE levels 3-5) in complex environments calls for new approaches because of the larger Operational Design Domain (ODD) and the reduced scope for human intervention. Because of the high degree of realism required for the measurement, conducting tests on real roads with other traffic participants is the highest-fidelity form of testing. However, testing an AV's response in unsafe situations in the real world is not feasible without placing other road users, such as human-driven vehicles, bicyclists, and pedestrians, in danger. Moreover, such tests take a considerable amount of time to reach a statistically sound conclusion regarding safety, and their repeatability is limited. These factors make depending solely on real-world testing infeasible. Virtual testing can complement real-world testing: it has previously been used to verify and validate SAE levels 0-2, and companies like Waymo, Zoox, and Aurora are currently using virtual environments to train and test vehicle automation at SAE levels 4-5. Waymo uses its virtual environment CarCraft to simulate 25,000 AVs every day, driving 8 million miles in the virtual world [6].
Scenario-based testing is a promising method in which individual traffic situations are tested using virtual simulation. These tests are repeatable, safe, and can be run in parallel, reducing the time required for testing. Domain experts usually handcraft these scenarios to explore the underlying system's vulnerabilities. However, handcrafted critical scenarios suffer from limited variation: they account only for known-safe and known-unsafe situations and do not cover unknown-unsafe situations. State-of-the-art scenario-based validation focuses mainly on variation of the static part of the scenarios, namely the environment, infrastructure, and road network. These variations were sufficient for previous-generation driver-assistance features (e.g., automatic braking, lane following) because of their limited ODD and the scope for human intervention during critical situations. But newer technologies, up to fully autonomous driving, require the vehicle to perceive its surroundings, including dynamic traffic participants (other human-driven vehicles, pedestrians, cyclists), and to make decisions appropriately with limited to no human intervention. To ensure the safety and appropriateness of these decisions, highly and fully automated driving systems need to be tested against a wide variety of combinations of these dynamic elements.
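The combinatorial growth of dynamic-element variations can be illustrated with a minimal sketch. The parameter names and value ranges below are hypothetical, not drawn from any particular standard or paper; the point is only that even a four-parameter logical scenario (a pedestrian crossing in front of the ego vehicle) already yields dozens of concrete scenarios, and that random sampling becomes necessary when the full cross-product is too large to simulate:

```python
import itertools
import random

# Hypothetical parameter ranges for the dynamic part of a logical scenario:
# a pedestrian crossing the road ahead of the ego vehicle.
LOGICAL_SCENARIO = {
    "ego_speed_mps": [8.0, 12.0, 16.0],       # ego approach speed
    "pedestrian_speed_mps": [1.0, 1.5, 2.0],  # walking to jogging pace
    "crossing_offset_m": [10.0, 20.0, 30.0],  # crossing point ahead of ego
    "occlusion": [False, True],               # parked vehicle blocks the view
}

def enumerate_concrete_scenarios(logical):
    """Exhaustively combine parameter values into concrete scenarios."""
    keys = list(logical)
    for values in itertools.product(*(logical[k] for k in keys)):
        yield dict(zip(keys, values))

def sample_concrete_scenarios(logical, n, seed=0):
    """Randomly sample n concrete scenarios when the full
    cross-product is too large to simulate exhaustively."""
    rng = random.Random(seed)
    keys = list(logical)
    return [{k: rng.choice(logical[k]) for k in keys} for _ in range(n)]

if __name__ == "__main__":
    all_scenarios = list(enumerate_concrete_scenarios(LOGICAL_SCENARIO))
    print(len(all_scenarios))  # 3 * 3 * 3 * 2 = 54 combinations
```

Real scenario-generation pipelines replace the uniform random choice with guided search (e.g., criticality metrics or optimization), but the explosion of the concrete-scenario space shown here is exactly why the dynamic elements cannot all be covered by handcrafted cases.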
Applied Sciences
Safety validation of Autonomous Vehicles (AVs) requires simulation, and automotive manufacturers need to generate the scenarios used during this simulation-based validation process. Several approaches have been proposed to manage scenario generation. However, none has proposed a method to measure the potential hazardousness of scenarios with regard to the performance limitations of the AV; in other words, there is no metric to guide the search for potentially critical scenarios within the infinite space of scenarios. Designers, however, have knowledge of the functional limitations of AV components depending on the situations encountered. The more sensitive the AV is to a situation, the more critical safety experts consider it to be. In this paper, we present a new method to help estimate the sensitivity of an AV to logical situations and events before their use for the generation of concrete scenarios submitted to simulators. We propose a characterization of the inputs used fo...