Sensors Know When to Interrupt You in the Car
Related papers
Driver Modeling for Detection & Assessment of Distraction: Examples from the UTDrive testbed
2017
Vehicle technologies have advanced significantly over the past twenty years, especially with respect to novel in-vehicle systems for route navigation, information access, infotainment, and connected-vehicle advancements for vehicle-to-vehicle and vehicle-to-infrastructure communication. While there is great interest in migrating to fully automated, self-driving vehicles, factors such as technology performance, cost barriers, public safety, insurance issues, legal implications, and government regulations all suggest that multi-functional vehicles are more likely: vehicles that allow smooth transitions from complete human control through semi-supervised/assisted driving to full automation. In this regard, next-generation vehicles will need to be more active in assessing driver awareness, vehicle capabilities, and traffic/environmental settings, and how these come together to determine a collaborative, safe, and effective driver-vehicle engagement for veh...
Leveraging Smartphone Sensors to Detect Distracted Driving Activities
IEEE Transactions on Intelligent Transportation Systems, 2018
In this paper, we explore the feasibility of leveraging the accelerometer and gyroscope sensors in modern smartphones to detect instances of distracting driving activities (e.g., calling, texting and reading while driving). To do so, we conducted an experiment with 16 subjects on a realistic driving simulator. As discussed later, the simulator is equipped with a realistic steering wheel, acceleration/braking pedals, and a wide screen to visualize background vehicular traffic. It is also programmed to simulate multiple environmental conditions like daytime, nighttime, fog and rain/snow. Subjects were instructed to drive the simulator while performing a randomized sequence of activities that included texting, calling and reading from a phone, during which the accelerometer and gyroscope in the phone logged sensory data. By extracting features from this sensory data, we then implemented a machine learning technique based on Random Forests to detect distracted driving. Our technique achieves high precision, recall and F-measure across all environmental conditions we tested. We believe that our contributions in this paper can have a significant impact on enhancing road safety.
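As a rough illustration of that pipeline, the sketch below windows a six-channel accelerometer/gyroscope log into simple per-axis statistics and trains a Random Forest, reporting precision, recall and F1. The window size, feature set and synthetic data are assumptions for illustration, not the authors' exact setup.

```python
# Minimal sketch: windowed accelerometer/gyroscope features fed to a
# Random Forest, in the spirit of the paper. Feature choices and data
# shapes are assumptions, not the authors' exact pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split

def window_features(signal, win=100):
    """Split an (n_samples, 6) accel+gyro log into windows and compute
    simple per-axis statistics (mean, std, min, max) per window."""
    n = len(signal) // win
    feats = []
    for i in range(n):
        w = signal[i * win:(i + 1) * win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
    return np.array(feats)

# Synthetic stand-in for logged sensor data: 6 channels (ax, ay, az, gx, gy, gz).
rng = np.random.default_rng(0)
raw = rng.normal(size=(20000, 6))
X = window_features(raw)
y = rng.integers(0, 2, size=len(X))  # 1 = distracted window (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
p, r, f, _ = precision_recall_fscore_support(y_te, clf.predict(X_te), average="binary")
print(f"precision={p:.2f} recall={r:.2f} f1={f:.2f}")
```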
IEEE Access
It is only a matter of time until autonomous vehicles become ubiquitous; however, human driving supervision will remain a necessity for decades. To assess the driver's ability to take control over the vehicle in critical scenarios, driver distractions can be monitored using wearable sensors or sensors that are embedded in the vehicle, such as video cameras. Which types of driving distraction can be sensed with which sensors is an open research question that this study attempts to answer. This study compared data from physiological sensors (palm electrodermal activity (pEDA), heart rate and breathing rate) and visual sensors (eye tracking, pupil diameter, nasal EDA (nEDA), emotional activation and facial action units (AUs)) for the detection of four types of distractions. The dataset was collected in a previous driving simulation study. The statistical tests showed that the most informative feature/modality for detecting driver distraction depends on the type of distraction, with emotional activation and AUs being the most promising. The experimental comparison of seven classical machine learning (ML) and seven end-to-end deep learning (DL) methods, which were evaluated on a separate test set of 10 subjects, showed that when classifying windows into distracted or not distracted, the highest F1-score of 79% was realized by the extreme gradient boosting (XGB) classifier using 60-second windows of AUs as input. When classifying complete driving sessions, XGB's F1-score was 94%. The best-performing DL model was a spectro-temporal ResNet, which realized an F1-score of 75% when classifying segments and an F1-score of 87% when classifying complete driving sessions. Finally, this study identified and discussed problems, such as label jitter, scenario overfitting and unsatisfactory generalization performance, that may adversely affect related ML approaches.
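The best-performing configuration above (XGBoost over 60-second windows of facial action units) could look roughly like the following sketch. The AU count, sampling rate and placeholder labels are assumptions; it only shows the shape of the window-level classification step.

```python
# Sketch of the best-performing setup reported above: an XGBoost
# classifier over 60-second windows of facial action-unit (AU)
# intensities. Window layout and AU count are assumptions.
import numpy as np
from xgboost import XGBClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_windows, n_aus = 500, 17                 # 17 AUs sampled at 1 Hz (assumed)
X = rng.random((n_windows, n_aus * 60))    # one row = a flattened 60 s AU window
y = rng.integers(0, 2, size=n_windows)     # 1 = distracted (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
clf = XGBClassifier(n_estimators=300, max_depth=4)
clf.fit(X_tr, y_tr)
print("window-level F1:", f1_score(y_te, clf.predict(X_te)))
```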
Measuring the impact of cognitive distractions on driving performance using time series analysis
17th International IEEE Conference on Intelligent Transportation Systems (ITSC), 2014
Using current sensing technology, a wealth of data on driving sessions is potentially available through a combination of vehicle sensors and drivers' physiology sensors (heart rate, breathing rate, skin temperature, etc.). Our hypothesis is that it should be possible to exploit the combination of time series produced by such multiple sensors during a driving session, in order to (i) learn models of normal driving behaviour, and (ii) use such models to detect important and potentially dangerous deviations from the norm in real time, and thus enable the generation of appropriate alerts. Crucially, we believe that such models and interventions should and can be personalised and tailor-made for each individual driver. As an initial step towards this goal, in this paper we present techniques for assessing the impact of cognitive distraction on drivers, based on simple time series analysis. We have tested our method on a rich dataset of driving sessions, carried out in a professional simulator, involving a panel of volunteer drivers. Each session included a different type of cognitive distraction, and resulted in multiple time series from a variety of on-board sensors as well as sensors worn by the driver. Crucially, each driver also recorded an initial session with no distractions. In our model, this initial session provides the baseline time series that make it possible to quantitatively assess driver performance under distraction conditions.
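A minimal sketch of the baseline idea follows: learn per-channel statistics from the distraction-free session, then flag windows of a later session that deviate by more than a z-score threshold. Channel choices, window size and threshold are assumptions.

```python
# Minimal sketch of the baseline approach described above: learn per-channel
# statistics from a distraction-free session, then flag windows of a
# later session that deviate strongly. Thresholds are assumptions.
import numpy as np

def baseline_stats(session, win=50):
    """Per-channel mean/std of window means over the distraction-free session."""
    w = session[: len(session) // win * win].reshape(-1, win, session.shape[1])
    m = w.mean(axis=1)          # window means, shape (n_windows, n_channels)
    return m.mean(axis=0), m.std(axis=0)

def flag_deviations(session, mu, sigma, win=50, z_thresh=3.0):
    """Return indices of windows whose mean deviates > z_thresh sigmas."""
    w = session[: len(session) // win * win].reshape(-1, win, session.shape[1])
    z = np.abs((w.mean(axis=1) - mu) / sigma)
    return np.where((z > z_thresh).any(axis=1))[0]

rng = np.random.default_rng(2)
baseline = rng.normal(0, 1, size=(5000, 3))   # e.g. heart rate, breathing, skin temp
mu, sigma = baseline_stats(baseline)
drive = rng.normal(0, 1, size=(5000, 3))
drive[2000:2500] += 4.0                       # injected "distraction" anomaly
print("flagged windows:", flag_deviations(drive, mu, sigma))
```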
Predicting human interruptibility with sensors
2005
A person seeking another person's attention is normally able to quickly assess how interruptible the other person currently is. Such assessments allow behavior that we consider natural, socially appropriate, or simply polite. This is in sharp contrast to current computer and communication systems, which are largely unaware of the social situations surrounding their usage and the impact that their actions have on these situations.
Using Machine Learning to Predict the Driving Context whilst Driving
This paper discusses how the driving context (driving events and distraction level) can be determined using a mobile phone equipped with several sensors. The majority of existing in-car communication systems (ICCS) available today are built in and do not use the driving context. This creates two issues: firstly, the use of an ICCS is limited to specific cars, and secondly, the driver's safety remains an issue, as the driving context is not taken into account. This paper discusses two experiments in which data were collected and models were trained and tested to predict driving events and distraction level. A mobile, context-aware application was built using the MIMIC (Multimodal Interface for Mobile Info-communication with Context) Framework. The Inference Engine uses information from several sources, namely mobile sensors, GPS and weather information, to infer both the driving event and the distraction level. The results obtained showed that the driving events and the distraction level can be accurately predicted. The driving events were predicted using the IB1 (nearest-neighbour) technique with an accuracy of 92.25%. In the second experiment, the distraction level was predicted with 95.16% accuracy using the KStar instance-based technique. A decision-tree analysis showed that some variables were more important than others in predicting the driving context. These variables included the speed and direction, as well as acceleration, magnetic field and orientation.
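IB1 is Weka's 1-nearest-neighbour learner, so a close scikit-learn analogue is KNeighborsClassifier with k=1. The sketch below assumes one row per driving-event window with the features named in the abstract; the data and class count are placeholders.

```python
# Sketch of the IB1 idea: 1-nearest-neighbour classification of driving
# events. Feature names mirror those listed in the abstract; values here
# are synthetic placeholders, not the paper's dataset.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Columns: speed, direction, acceleration, magnetic field, orientation (assumed).
X = rng.random((600, 5))
y = rng.integers(0, 4, size=600)   # e.g. 4 driving-event classes (placeholder)

ib1 = KNeighborsClassifier(n_neighbors=1)
scores = cross_val_score(ib1, X, y, cv=5)
print("5-fold accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```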
Self-Interruptions of Non-Driving Related Tasks in Automated Vehicles: Mobile vs Head-Up Display
Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020
Automated driving raises new human factors challenges. There is a paradox that allows drivers to perform non-driving related tasks (NDRTs) while still benefiting from a driver who regularly attends to the driving task. Systems that aim to better manage a driver's attention, encouraging task switching and interleaving, may help address this paradox. However, a better understanding of how drivers self-interrupt while engaging in NDRTs is required to inform such systems. This paper presents a counterbalanced within-subject simulator study with N=42 participants experiencing automated driving in a familiar driving environment. Participants chose a TV show to watch on a head-up display (HUD) and on a mobile display during two 15-minute drives on the same route. Eye and head tracking data revealed more self-interruptions in the HUD condition, suggesting higher situation awareness. Our results may benefit the design of future attention management systems by informing the visual and temporal integration of the driving and non-driving related tasks.
Real-Time Detection System of Driver Distraction Using Machine Learning
IEEE Transactions on Intelligent Transportation Systems, 2013
There is accumulating evidence that driver distraction is a leading cause of vehicle crashes and incidents. In particular, it has become an important and growing safety concern with the increasing use of so-called In-Vehicle Information Systems (IVIS) and Partially Autonomous Driving Assistance Systems (PADAS). Detecting the driver's status is therefore of paramount importance in order to adapt IVIS and PADAS accordingly, thus avoiding or mitigating their possible negative effects. The purpose of this paper is to illustrate a method for the non-intrusive and real-time detection of visual distraction, based on vehicle dynamics data and without using eye-tracker data as input to the classifiers. Specifically, we present and compare different models based on well-known machine learning methods. Data for training the models were collected using a static driving simulator, with real human subjects performing a specific secondary task (SURT) while driving. Different training methods, model characteristics and feature selection criteria were compared. Based on our results, the SVM outperformed all the other ML methods, providing the highest classification rate for most of the subjects. Potential applications of this research include the design of adaptive IVIS and of "smarter" PADAS.
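A hedged sketch of such a classifier follows: an RBF-kernel SVM over standardized vehicle-dynamics features, with no eye-tracking input. The particular feature set (steering and lane-keeping statistics) is an assumption, not the paper's exact one.

```python
# Sketch of the SVM approach described above: classify visual distraction
# from vehicle-dynamics features alone (no eye tracker). The feature set
# and placeholder data are assumptions for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(4)
# One row per time window: steering-angle std, steering reversal rate,
# lane-position std, mean speed, speed std (placeholder values).
X = rng.random((800, 5))
y = rng.integers(0, 2, size=800)   # 1 = visually distracted (SURT active)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=4)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
svm.fit(X_tr, y_tr)
print(classification_report(y_te, svm.predict(X_te)))
```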
Machine learning based classifier model for autonomous distracted driver detection and prevention
Recent research and surveys provide evidence that driver distraction is a major cause of vehicle crashes around the world. In-vehicle information systems (IVIS) have raised driver safety concerns, and detecting a distracted driver is therefore of paramount importance. This paper shows a method of real-time distraction detection that initiates safety measures. The system uses a webcam and a Raspberry Pi (a low-cost, small-form-factor computing device), along with deep learning and convolutional neural networks. Drivers are classified into multiple categories of distraction, such as texting, drinking and operating the IVIS. The webcam feeds the classifier with real-time images of the driver of a particular vehicle. The system also includes a buzzer alarm that sounds once distraction is detected.
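A minimal sketch of this kind of classifier, assuming a small Keras CNN over downscaled webcam frames and a hypothetical four-class label set; the buzzer wiring (e.g. a GPIO pin on the Raspberry Pi) is omitted.

```python
# Minimal sketch of a CNN distraction classifier of the kind described
# above. Input size, class list and training data are assumptions; on a
# Raspberry Pi, the trained model would run inference on webcam frames.
import numpy as np
from tensorflow.keras import layers, models

CLASSES = ["safe", "texting", "drinking", "operating_ivis"]  # assumed labels

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),           # downscaled webcam frame
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data standing in for labelled driver images.
X = np.random.random((32, 64, 64, 3)).astype("float32")
y = np.random.randint(0, len(CLASSES), size=32)
model.fit(X, y, epochs=1, verbose=0)

frame = np.random.random((1, 64, 64, 3)).astype("float32")
pred = CLASSES[int(model.predict(frame, verbose=0).argmax())]
if pred != "safe":
    print("distraction detected:", pred)  # here the buzzer would be triggered
```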