Deep Learning-Based Drivers Emotion Classification System in Time Series Data for Remote Applications
Related papers
Driver’s Facial Expression Recognition in Real-Time for Safe Driving
Sensors
In recent years, researchers of deep neural network (DNN)-based facial expression recognition (FER) have reported results showing that these approaches overcome the limitations of conventional machine learning-based FER approaches. However, as DNN-based FER approaches require an excessive amount of memory and incur high processing costs, their application in various fields is very limited and depends on the hardware specifications. In this paper, we propose a fast FER algorithm for monitoring a driver’s emotions that is capable of operating on the low-specification devices installed in vehicles. For this purpose, a hierarchical weighted random forest (WRF) classifier, trained on the similarity of sample data to improve its accuracy, is employed. In the first step, facial landmarks are detected from input images and geometric features are extracted, considering the spatial positions between landmarks. These feature vectors are then fed into the proposed hierarchical WRF classifier.
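The abstract describes geometric features computed from the spatial positions of detected facial landmarks and a hierarchical weighted random forest classifier. The hierarchical, similarity-weighted training is not reproduced here; the sketch below only illustrates the general idea, using pairwise landmark distances as features and scikit-learn's standard RandomForestClassifier, with synthetic landmarks and emotion labels standing in for real data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def geometric_features(landmarks):
    """Pairwise Euclidean distances between facial landmarks, normalized by the
    inter-ocular distance (indices 36 and 45 assume the 68-point convention)."""
    landmarks = np.asarray(landmarks, dtype=float)            # shape (68, 2)
    scale = np.linalg.norm(landmarks[36] - landmarks[45]) + 1e-8
    d = np.linalg.norm(landmarks[:, None, :] - landmarks[None, :, :], axis=-1)
    iu = np.triu_indices(len(landmarks), k=1)                 # upper triangle only
    return d[iu] / scale                                      # 1-D feature vector

# Synthetic stand-ins for detected landmarks and emotion labels (hypothetical).
rng = np.random.default_rng(0)
landmark_arrays = rng.uniform(0, 200, size=(40, 68, 2))       # 40 "faces"
emotion_labels = rng.integers(0, 3, size=40)                  # 3 dummy emotion classes

X = np.stack([geometric_features(lm) for lm in landmark_arrays])
y = emotion_labels

clf = RandomForestClassifier(n_estimators=200, random_state=0)  # not the paper's WRF
clf.fit(X, y)
print(clf.predict(X[:1]))
```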
Monitoring Driver’s Vigilance Level Using Real-Time Facial Expression and Deep Learning Techniques
Road accidents caused by human error are among the main causes of death in the world. Specifically, drowsiness and unconsciousness while driving are responsible for many fatal accidents on highways. Accuracy and performance are key metrics of the many techniques researched for the detection of driver drowsiness. To improve these metrics, a new method based on image processing and deep learning is proposed in this paper. The proposed method relies on facial-region detection using the Haar-cascade method and a convolutional neural network for drowsiness probability estimation. Evaluation of the proposed method on the UTA-RLDD dataset with stratified 5-fold cross-validation showed a high accuracy of 96.8% at a speed of 10 frames per second, which is higher than results previously reported in the literature. For further investigation, a custom dataset including 10 participants in different light conditions was collected. The results of all experiments showe...
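A minimal sketch of the two stages named in the abstract: Haar-cascade face localization followed by a CNN that outputs a drowsiness probability. The input size (64x64 grayscale) and the layer configuration are assumptions, since the abstract does not specify the architecture; the stratified 5-fold evaluation on UTA-RLDD is likewise left out.

```python
import cv2
import numpy as np
from tensorflow.keras import layers, models

# Haar-cascade face localization (the cascade file ships with OpenCV).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_face(frame, size=64):
    """Return the largest detected face as a normalized grayscale patch, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    patch = cv2.resize(gray[y:y + h, x:x + w], (size, size))
    return patch.astype("float32") / 255.0

# Small CNN mapping a face patch to a drowsiness probability (assumed layout).
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # P(drowsy)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```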
IEEE Access, 2020
Human drivers have different driving styles, experiences, and emotions due to unique driving characteristics, exhibiting their own driving behaviors and habits. Various research efforts have approached the problem of detecting abnormal human driver behavior by capturing and analyzing the driver's face and vehicle dynamics via image and video processing, but traditional methods are not capable of capturing the complex temporal features of driving behaviors. With the advent of deep learning algorithms, a significant amount of research has also been conducted to predict and analyze driver behavior or action-related information using neural network algorithms. In this paper, we first classify and discuss Human Driver Inattentive Driving Behavior (HIDB) in two major categories: Driver Distraction (DD) and Driver Fatigue or Drowsiness (DFD). We then discuss the causes and effects of another risky human driving behavior, Aggressive Driving Behavior (ADB), a broad group of dangerous and aggressive driving styles that lead to severe accidents. The abnormal driving behaviors DD, DFD, and ADB are affected by various factors, including driving experience or inexperience, age, gender, and illness. The study of how these factors may degrade the driving skills and performance of a human driver is out of the scope of this paper. After describing the background of deep learning and its algorithms, we present an in-depth investigation of the most recent deep learning-based systems, algorithms, and techniques for the detection of distraction, fatigue/drowsiness, and aggressiveness of a human driver. We aim to achieve a comprehensive understanding of inattentive and aggressive driving behavior (HIADB) detection by presenting a detailed comparative analysis of all the recent techniques, and we highlight the fundamental requirements. Finally, we present and discuss significant open research challenges as future directions. Index Terms: deep learning, human inattentive driving behavior, connected vehicles, road accident avoidance, abnormal behavior detection, distraction or aggressiveness detection, fatigue or drowsiness detection.
Development Of Facial Stress Level Detection System For Driving Safety Using Deep Learning
Science and Technology Publishing, 2022
In this research work, a deep learning approach using a YOLO convolutional neural network (YCNN) algorithm was used to determine the facial stress level of drivers for their overall safety and that of others. A camera placed on the dashboard continuously tracks the driver's face in real time, and the model extracts basic features that help determine whether the driver is drowsy or distracted. An alarm is triggered to alert the driver when his/her face is out of the camera's view. The eye aspect ratio is used to determine when the driver is gradually falling asleep or when the eyes are closed. 10,000 images of drivers were obtained and split into training, testing, and validation sets in a 60:20:20 ratio. The results obtained after testing indicate 94% accuracy for the model. The model has wide application in the areas of human-computer communication, facial expression recognition, driver fatigue determination, and autonomous or self-driving vehicles.
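The eye aspect ratio (EAR) mentioned in the abstract is commonly computed from six eye landmarks and drops towards zero as the eye closes; the threshold value below is an assumption, not a figure from the paper.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) for six eye landmarks ordered
    p1..p6 around the eye (corner, top, top, corner, bottom, bottom)."""
    eye = np.asarray(eye, dtype=float)
    a = np.linalg.norm(eye[1] - eye[5])
    b = np.linalg.norm(eye[2] - eye[4])
    c = np.linalg.norm(eye[0] - eye[3])
    return (a + b) / (2.0 * c)

# Made-up landmark coordinates for an open and a nearly closed eye.
open_eye   = [(0, 0), (2, -2), (4, -2), (6, 0), (4, 2), (2, 2)]
closed_eye = [(0, 0), (2, -0.3), (4, -0.3), (6, 0), (4, 0.3), (2, 0.3)]
EAR_THRESHOLD = 0.25   # assumed cut-off; tuned per camera setup in practice
for eye in (open_eye, closed_eye):
    ear = eye_aspect_ratio(eye)
    print(f"EAR = {ear:.2f} -> {'closed' if ear < EAR_THRESHOLD else 'open'}")
```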
FACIAL EMOTION RECOGNITION FOR EFFICIENT DRIVING
2017
Facial expression analysis is relevant in emerging fields such as interactive games and autonomous driving, and is mainly based on machine learning. The aim of this project is to analyze the facial expressions of drivers, which helps determine their stress level; such information can be used to alert drivers when they are stressed and in a state unsafe for driving. The methodology involves face detection using the Open Source Computer Vision Library (OpenCV), feature extraction that detects the locations of the eyes and mouth, and finally classification of facial expressions, implemented with a shallow learning model that uses SVM (Support Vector Machines) via Python libraries. The analysis results in detecting stressed situations of the driver and limits the vehicle's speed while sounding an alarm to calm the driver down. The analysis also monitors the driver's attention while driving and prompts him/her to focus on driving in case of any distraction.
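A sketch of the classification and alerting step, under the assumption that the eye/mouth features have already been extracted; the feature vectors, labels, and the 0.5 decision threshold are hypothetical, and only the SVM choice comes from the abstract.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical geometric features (e.g. eye openness, mouth width/height)
# extracted from detected eye and mouth regions; labels 0 = calm, 1 = stressed.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(100, 8))
y_train = rng.integers(0, 2, size=100)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_train, y_train)

def react(features):
    """Trigger an alert when the classifier judges the driver stressed."""
    p_stressed = clf.predict_proba([features])[0, 1]
    if p_stressed > 0.5:                      # assumed decision threshold
        print(f"Stress detected (p={p_stressed:.2f}): sound alarm, limit speed.")
    else:
        print(f"Driver appears calm (p={p_stressed:.2f}).")

react(X_train[0])
```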
Drive Safe: An Intelligent System for Monitoring Stress and Pain from Drivers’ Facial Expressions
Stress and abnormal pain experienced by drivers while driving are among the major causes of road accidents. Most existing systems focus on monitoring driver drowsiness and fatigue. In this paper, an effective intelligent system for monitoring drivers’ stress and pain from facial expressions is proposed. A novel method of detecting stress as well as pain from facial expressions is proposed by combining the CK dataset and a pain dataset. Initially, AAM (Active Appearance Model) features are tracked from the face; using these features, the Euclidean distances between the normal face and the emotional face are calculated and normalized. From the normalized values, the facial expression is detected via trained models. The experimental results show that the developed system works very well on simulated data. The proposed system will soon be implemented on a mobile platform and will be proposed for Android automobiles.
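The abstract's core computation, Euclidean distances between the tracked points of a neutral face and the current (emotional) face, normalized before classification, might look roughly like the sketch below; the AAM tracking itself and the exact normalization scheme are not specified, so the input shapes and the min-max style scaling are assumptions.

```python
import numpy as np

def expression_signature(neutral, current):
    """Per-landmark Euclidean distances between a neutral-face shape and the
    current shape, scaled to [0, 1]. `neutral` and `current` are (N, 2) arrays
    of tracked point coordinates (AAM tracking itself is not shown here)."""
    neutral = np.asarray(neutral, dtype=float)
    current = np.asarray(current, dtype=float)
    d = np.linalg.norm(current - neutral, axis=1)
    return d / (d.max() + 1e-8)   # min-max style normalization (assumed)

# Made-up shapes: the "emotional" face raises the two mouth-corner points.
neutral = np.array([[10., 10.], [20., 10.], [12., 25.], [18., 25.]])
smiling = np.array([[10., 10.], [20., 10.], [11., 22.], [19., 22.]])
print(expression_signature(neutral, smiling))   # [0, 0, 1, 1]
```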
Using Machine Learning to Determine the Motorist Somnolence
Journal of Informatics Electrical and Electronics Engineering (JIEEE), A 2 Z Journals, 2023
Traffic accidents pose an increasing threat to society, and researchers are dedicated to preventing accidents and reducing fatalities, as highlighted by the World Health Organization. One significant cause of accidents is drowsy driving, which often leads to severe injuries and loss of life. The objective of this research is to create a fatigue detection system that can effectively minimize accidents associated with exhaustion. The system utilizes facial recognition technology to identify drowsy drivers by analyzing eye patterns through video processing. When the level of fatigue surpasses a predetermined threshold, the system alerts the driver and adjusts the vehicle's acceleration accordingly. The use of OpenCV libraries, such as the Haar-cascade classifier, along with a Raspberry Pi facilitates seamless integration of the system. This dissertation evaluates advancements in computational engineering for the development of a fatigue detection system to mitigate accidents caused by drowsiness. It offers valuable insights and recommendations to enhance comprehension and optimize the system's effectiveness, ultimately leading to safer road travel.
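The threshold logic described here ("when the level of fatigue surpasses a predetermined threshold, the system alerts the driver") could be as simple as counting consecutive closed-eye frames; the frame limit below and the printed actions are assumptions, and the eye-state detector, Raspberry Pi wiring, and acceleration control are not shown.

```python
CLOSED_FRAMES_LIMIT = 15   # assumed: roughly 1.5 s at 10 fps before alerting

class FatigueMonitor:
    """Counts consecutive frames in which the eyes are judged closed and
    signals an alert once the count exceeds a threshold."""
    def __init__(self, limit=CLOSED_FRAMES_LIMIT):
        self.limit = limit
        self.closed_streak = 0

    def update(self, eyes_closed: bool) -> bool:
        self.closed_streak = self.closed_streak + 1 if eyes_closed else 0
        return self.closed_streak >= self.limit

monitor = FatigueMonitor()
# Simulated per-frame detections (True = eyes closed in that frame).
stream = [False] * 5 + [True] * 20
for frame_idx, closed in enumerate(stream):
    if monitor.update(closed):
        print(f"Frame {frame_idx}: fatigue threshold exceeded -> alert driver, reduce acceleration")
        break
```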
Early Identification and Detection of Driver Drowsiness by Hybrid Machine Learning
IEEE Access, 2021
Drunkenness or exhaustion is a leading cause of car accidents, with severe implications for road safety. More fatal accidents could be avoided if fatigued drivers were warned ahead of time. Several drowsiness detection technologies that monitor for signs of inattention while driving and notify the driver can be adopted. Sensors in self-driving cars must detect whether a driver is sleepy, angry, or experiencing extreme changes in emotion. These sensors must constantly monitor the driver's facial expressions and detect facial landmarks in order to extract the driver's expression state and determine whether they are driving safely. As soon as the system detects such changes, it takes control of the vehicle, immediately slows it down, and alerts the driver by sounding an alarm to make them aware of the situation. The proposed system will be integrated with the vehicle's electronics, tracking the vehicle's statistics and providing more accurate results. In this paper, we have implemented real-time image segmentation and drowsiness detection using machine learning methodologies. In the proposed work, an emotion detection method based on Support Vector Machines (SVM) has been implemented using facial expressions. The algorithm was tested under variable luminance conditions and outperformed current research in terms of accuracy, achieving 83.25% in detecting facial expression changes.
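The abstract notes testing under variable luminance conditions but does not say how illumination is handled; one common preprocessing step, shown here purely as an assumption and not as the paper's stated pipeline, is CLAHE contrast normalization of the luminance channel before landmark extraction and SVM classification.

```python
import cv2
import numpy as np

def normalize_luminance(frame_bgr):
    """Apply CLAHE to the luminance channel so downstream landmark and
    SVM-based expression analysis sees more uniform contrast. The use of
    CLAHE here is an assumption; the paper only states that variable
    luminance conditions were tested."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    ycrcb[:, :, 0] = clahe.apply(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

# Synthetic dark frame just to exercise the function.
dark = (np.random.default_rng(2).integers(0, 60, size=(120, 160, 3))
        .astype(np.uint8))
bright = normalize_luminance(dark)
print(dark.mean(), bright.mean())   # the equalized frame has stretched contrast
```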
IRJET- Drowsy Driving Detection via Hybrid Deep Learning
IRJET, 2020
The automotive industry plays a vital role in a country's economic and industrial development. Its rapid growth has brought great convenience to day-to-day life as well as an increase in the number of traffic accidents. Every year many people around the world lose their lives in fatal road accidents, and drowsy driving is one of the primary causes of road accidents and deaths. Fatigued and drowsy driving are major reasons for fatal accidents. The capability of the transportation system to detect the alertness of drivers is essential to ensure road safety. Most traditional methods to detect drowsiness are based on behavioural aspects; some are intrusive and may distract drivers, while others require expensive sensors. A hybrid deep learning approach is used to detect driver drowsiness. The real-time application of drowsiness detection is based on facial expression recognition in video sequences. Facial and eye behaviours such as yawning and eye-blink patterns are analyzed from live facial video focused on the driver's face. Since facial expression recognition is challenging, deep temporal convolutional neural networks are employed on video segments. Both spatial and temporal convolutional neural network features are extracted and integrated into a deep belief network. The system records the videos and detects the driver's face in every frame using deep learning techniques. Facial landmarks on the observed segments are located, the eye aspect ratios are computed, and drowsiness is detected based on an adaptive thresholding ratio.
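A hedged sketch of the spatial-plus-temporal feature idea: a per-frame CNN applied through TimeDistributed, followed by a 1-D temporal convolution over the video segment. The abstract integrates such features into a deep belief network, which is not reproduced here (a plain dense head is used instead), and the segment length and frame size are assumptions.

```python
from tensorflow.keras import layers, models

FRAMES, H, W = 16, 64, 64   # assumed segment length and frame size

# Spatial feature extractor applied to every frame of the segment.
spatial = models.Sequential([
    layers.Input(shape=(H, W, 1)),
    layers.Conv2D(16, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"), layers.GlobalAveragePooling2D(),
])

# Temporal model over the sequence of per-frame feature vectors.
inputs = layers.Input(shape=(FRAMES, H, W, 1))
x = layers.TimeDistributed(spatial)(inputs)          # (FRAMES, 32) features
x = layers.Conv1D(32, 3, activation="relu")(x)       # temporal convolution
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # P(drowsy) for the segment

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```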
The Multimodal Driver Monitoring Database: A Naturalistic Corpus to Study Driver Attention
IEEE Transactions on Intelligent Transportation Systems, 2021
A smart vehicle should be able to monitor the actions and behaviors of the human driver to provide critical warnings or intervene when necessary. Recent advancements in deep learning and computer vision have shown great promise in monitoring human behavior and activities. While these algorithms work well in a controlled environment, naturalistic driving conditions add new challenges such as illumination variations, occlusions, and extreme head poses. A vast amount of in-domain data is required to train models that provide high performance in predicting driving-related tasks to effectively monitor driver actions and behaviors. Toward building the required infrastructure, this paper presents the multimodal driver monitoring (MDM) dataset, which was collected with 59 subjects who were recorded performing various tasks. We use the Fi-Cap device, which continuously tracks the head movement of the driver using fiducial markers, providing frame-based annotations to train head-pose algorithms in naturalistic driving conditions. We ask the driver to look at predetermined gaze locations to obtain an accurate correlation between the driver's facial image and visual attention. We also collect data when the driver performs common secondary activities such as navigating with a smartphone and operating the in-car infotainment system. All of the driver's activities are recorded with high-definition RGB cameras and a time-of-flight depth camera. We also record the controller area network bus (CAN-Bus), extracting important information. These high-quality recordings serve as an ideal resource to train various efficient algorithms for monitoring the driver, providing further advancements in the field of in-vehicle safety systems.