Activity-Aware Fall Detection and Recognition Based on Wearable Sensors
Related papers
An Efficient Data Imputation Technique for Human Activity Recognition
ArXiv, 2020
Applications of human activity recognition are expanding rapidly, from health monitoring systems to virtual reality. The automatic recognition of daily-life activities has therefore become significant for numerous applications. In recent years, many datasets have been proposed to train machine learning models for efficient monitoring and recognition of human daily living activities. However, the performance of machine learning models in activity recognition is crucially affected when a dataset contains incomplete activities, i.e., missing samples in the data captures. Therefore, in this work, we propose a methodology for extrapolating the missing samples of a dataset to better recognize human daily living activities. The proposed method efficiently pre-processes the data captures and utilizes the k-Nearest Neighbors (KNN) imputation technique to extrapolate the missing samples. The proposed methodology elegantly extrapolate...
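The KNN imputation step described above can be sketched with scikit-learn's off-the-shelf imputer. This is a minimal illustration on a toy accelerometer capture, not the paper's implementation; the data values and neighbor count are assumptions for the example.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Toy accelerometer capture: rows are time steps, columns are (ax, ay, az).
# NaN marks a missing sample, as in an incomplete activity recording.
capture = np.array([
    [0.1, 9.8, 0.0],
    [0.2, np.nan, 0.1],
    [0.2, 9.7, np.nan],
    [0.3, 9.6, 0.2],
], dtype=float)

# KNN imputation: each missing value is filled using the k nearest rows
# (by nan-aware Euclidean distance); k=2 is an arbitrary choice here.
imputer = KNNImputer(n_neighbors=2)
filled = imputer.fit_transform(capture)
```

After `fit_transform`, `filled` has the same shape as `capture` with every NaN replaced by an estimate from neighboring time steps.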
Improving Human Activity Monitoring by Imputation of Missing Sensory Data: Experimental Study
Future Internet
The automatic recognition of human activities with sensors available in off-the-shelf mobile devices has been the subject of different research studies in recent years. It may be useful for monitoring elderly people to detect warning situations, monitoring the activity of athletes, and other possibilities. However, the acquisition of data from the different sensors may fail for various reasons, and human activities are recognized with better accuracy when the datasets are complete. This paper focused on two stages of a system for the recognition of human activities: data imputation and data classification. Regarding data imputation, a methodology for extrapolating the missing samples of a dataset to better recognize the human activities was proposed. The K-Nearest Neighbors (KNN) imputation technique was used to extrapolate the missing samples in dataset captures. Regarding data classification, the accuracy of the previously implemented method, i....
A Method for Sensor-Based Activity Recognition in Missing Data Scenario
Sensors
Sensor-based human activity recognition has various applications in healthcare, elderly smart homes, sports, etc. There are numerous works in this field that recognize various human activities from sensor data. However, those works assume clean data with almost no missing samples, which is a genuine concern for real-life healthcare centers. Therefore, to address this problem, we explored sensor-based activity recognition when some partial data were lost in a random pattern. In this paper, we propose a novel method to improve activity recognition while having missing data, without any data recovery. For the missing data pattern, we considered data to be missing in a random pattern, which is a realistic missing pattern for sensor data collection. Initially, we created different percentages of random missing data only in the test data, while the training was performed on good-quality data. In our proposed approach, we explicitly induce differe...
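The evaluation protocol described above, blanking out a chosen percentage of entries at random in the test split only, can be sketched as follows. This is an illustrative helper under assumed names (`induce_missing`, the 20 % fraction), not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def induce_missing(X, fraction):
    """Blank out a random fraction of entries (set to NaN), mimicking
    sensor dropout in a random pattern, as applied to test data only."""
    X = X.astype(float).copy()
    mask = rng.random(X.shape) < fraction
    X[mask] = np.nan
    return X

# Training data stays clean; only the test features are degraded.
X_test = np.ones((100, 6))
X_test_missing = induce_missing(X_test, fraction=0.2)
observed_fraction = np.isnan(X_test_missing).mean()
```

Sweeping `fraction` over several values then yields the accuracy-vs-missingness curves such a study reports.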
Physical Activity Recognition Based on Machine Learning
Proceedings of the 29th Minisymposium, 2022
The following paper presents a comparison study of various machine learning techniques in the recognition of activities of daily living (ADL), with special attention given to movements during human falling and the distinction among various types of falls. The motivation for the development of a physical activity recognition algorithm includes keeping track of users' activities in real time, and possible diagnostics of unwanted and unexpected movements and/or events. The activities recorded and processed in this study include various types of daily activities, such as walking, running, etc., while fall activities include falling forward, falling backward, and falling left and right (front fall, back fall and side fall). The algorithm was trained on two publicly available datasets containing signals from an accelerometer, a magnetometer and a gyroscope.
A Novel Approach for Machine Learning-Based Identification of Human Activities
IRJET, 2023
Human activity recognition (HAR) is a rapidly growing field of research that uses machine learning to automatically identify and classify human activities from sensor data. This data can be collected from a variety of sources, such as wearable sensors, smartphones, and video cameras. HAR has a wide range of potential applications, including healthcare, sports, and security. In this paper, we present a comprehensive overview of the state-of-the-art in HAR using machine learning based on datasets. We discuss the various feature extraction techniques that can be applied, and the different machine learning algorithms that can be used for model training. We also present a survey of the recent literature on HAR using machine learning, and we discuss the challenges and opportunities that lie ahead in this field. Our findings suggest that HAR using machine learning based on datasets is a promising approach for a variety of applications. However, there are still a number of challenges that need to be addressed in order to improve the accuracy and robustness of HAR systems. These challenges include the need for more accurate and efficient feature extraction techniques, the development of more powerful machine learning algorithms, and the creation of larger and more diverse datasets. We believe that this paper provides a valuable contribution to the field of HAR using machine learning. It provides a comprehensive overview of the state-of-the-art, and it identifies the challenges and opportunities that lie ahead. We hope that this paper will help to accelerate the development of more accurate and reliable HAR systems that can be used to improve the lives of people in a variety of ways.
UniMiB SHAR: a new dataset for human activity recognition using acceleration data from smartphones
arXiv (Cornell University), 2016
Smartphones, smartwatches, fitness trackers, and ad-hoc wearable devices are being increasingly used to monitor human activities. Data acquired by the hosted sensors are usually processed by machine-learning-based algorithms to classify human activities. The success of those algorithms mostly depends on the availability of training (labeled) data that, if made publicly available, would allow researchers to make objective comparisons between techniques. Nowadays, publicly available datasets are few, often contain samples from subjects with too similar characteristics, and very often lack specific information, so it is not possible to select subsets of samples according to specific criteria. In this article, we present a new dataset of acceleration samples acquired with an Android smartphone designed for human activity recognition and fall detection. The dataset includes 11,771 samples of both human activities and falls performed by 30 subjects of ages ranging from 18 to 60 years. Samples are divided into 17 fine-grained classes grouped into two coarse-grained classes: one containing samples of 9 types of activities of daily living (ADL) and the other containing samples of 8 types of falls. The dataset has been stored to include all the information useful to select samples according to different criteria, such as the type of ADL performed, the age, the gender, and so on. Finally, the dataset has been benchmarked with four different classifiers and with two different feature vectors. We evaluated four different classification tasks: fall vs no fall, 9 activities, 8 falls, 17 activities and falls. For each classification task we performed a subject-dependent (5-fold cross validation) and a subject-independent (leave-subject-out) evaluation. The major findings of the evaluation are the following: i) it is more difficult to distinguish between types of falls than types of activities; ii) subject-dependent evaluation outperforms the subject-independent one.
The database will allow researchers to work toward more robust features and classification schemes that can deal with different types of falls and with the effect of data personalization.
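The subject-independent (leave-subject-out) protocol mentioned above is what makes the evaluation honest about personalization effects: every fold holds out all samples from one subject. A minimal sketch with scikit-learn's `LeaveOneGroupOut`, on synthetic stand-in features (the data, classifier, and subject counts are assumptions for the example, not the paper's setup):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
# Synthetic stand-in for windowed feature vectors:
# 3 subjects, 2 classes (fall vs no fall), 40 windows per subject.
X = rng.normal(size=(120, 8))
y = np.tile(np.repeat([0, 1], 20), 3)
subjects = np.repeat([0, 1, 2], 40)

# Subject-independent protocol: each fold holds out one subject entirely,
# so the classifier is never tested on a person it was trained on.
logo = LeaveOneGroupOut()
scores = cross_val_score(KNeighborsClassifier(3), X, y,
                         groups=subjects, cv=logo)
```

Replacing `cv=logo` with `cv=5` gives the subject-dependent counterpart, which typically scores higher because samples from the same person appear in both splits.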
Wearable Sensor Data Based Human Activity Recognition using Machine Learning: A new approach
2019
Recent years have witnessed the rapid development of human activity recognition (HAR) based on wearable sensor data. One can find many practical applications in this area, especially in the field of health care. Many machine learning algorithms such as Decision Trees, Support Vector Machine, Naive Bayes, K-Nearest Neighbor, and Multilayer Perceptron are successfully used in HAR. Although these methods are fast and easy to implement, they still have some limitations due to poor performance in a number of situations. In this paper, we propose a novel method based on ensemble learning to boost the performance of these machine learning methods for HAR.
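The general idea of ensembling the base learners listed above can be sketched with a majority-vote combiner. This is a generic scikit-learn `VotingClassifier` illustration on synthetic data, not the ensemble the paper proposes; the member choice and hyperparameters are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for windowed wearable-sensor features.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hard-voting ensemble over three of the base learners named in the abstract:
# each model votes for a class, and the majority label wins.
ensemble = VotingClassifier([
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
    ("knn", KNeighborsClassifier(5)),
], voting="hard")
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
```

Because the members make different kinds of errors, the vote often corrects individual misclassifications that any single base learner would commit.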
Physical Human Activity Recognition Using Wearable Sensors
Sensors, 2015
This paper presents a review of different classification techniques used to recognize human activities from wearable inertial sensor data. Three inertial sensor units were used in this study and were worn by healthy subjects at key points of upper/lower body limbs (chest, right thigh and left ankle). Three main steps describe the activity recognition process: sensors' placement, data pre-processing and data classification. Four supervised classification techniques namely, k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), Gaussian Mixture Models (GMM), and Random Forest (RF) as well as three unsupervised classification techniques namely, k-Means, Gaussian mixture models (GMM) and Hidden Markov Model (HMM), are compared in terms of correct classification rate, F-measure, recall, precision, and specificity. Raw data and extracted features are used separately as inputs of each classifier. The feature selection is performed using a wrapper approach based on the RF algorithm. Based on our experiments, the results obtained show that the k-NN classifier provides the best performance compared to other supervised classification algorithms, whereas the HMM classifier is the one that gives the best results among unsupervised classification algorithms. This comparison highlights which approach gives better performance in both supervised and unsupervised contexts. It should be noted that the obtained results are limited to the context of this study, which concerns the classification of the main daily living human activities using three wearable accelerometers placed at the chest, right shank and left ankle of the subject.
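The "raw data vs extracted features" comparison above rests on a standard preprocessing step: segmenting each inertial signal into fixed-length windows and computing time-domain statistics per window. A minimal sketch of that step (window length, step size, and feature set are assumptions for illustration, not the study's parameters):

```python
import numpy as np

def window_features(signal, win=50, step=25):
    """Slide a fixed-length window over a 1-D inertial-sensor axis and
    compute simple time-domain features (mean, std, min, max) per window."""
    feats = []
    for start in range(0, len(signal) - win + 1, step):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.min(), w.max()])
    return np.array(feats)

# Synthetic walking-like oscillation on one accelerometer axis.
t = np.linspace(0, 10, 500)
ax = np.sin(2 * np.pi * 1.5 * t)
F = window_features(ax)
```

Each row of `F` is one feature vector; these rows (rather than raw samples) are what the supervised and unsupervised classifiers in such a comparison consume.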
Efficient Activity Recognition and Fall Detection Using Accelerometers
Communications in Computer and Information Science, 2013
Ambient assisted living (AAL) systems need to understand the user's situation, which makes activity recognition an important component. Falls are one of the most critical problems of the elderly, so AAL systems often incorporate fall detection. We present an activity recognition (AR) and fall detection (FD) system aiming to provide robust real-time performance. It uses two wearable accelerometers, since this is probably the most mature technology for such a purpose. For the AR, we developed an architecture that combines rules to recognize postures, which ensures that the behavior of the system is predictable and robust, and classifiers trained with machine learning algorithms, which provide maximum accuracy in the cases that cannot be handled by the rules. For the FD, rules are used that take into account high accelerations associated with falls and the recognized horizontal orientation (e.g., falling is often followed by lying). The system was tested on a dataset containing a wide range of activities, two different types of falls and two events easily mistaken for falls. The F-measure of the AR was 99 %, even though it was never tested on the same persons it was trained on. The F-measure of the FD was 78 % due to the difficulty of the events to be recognized and the need for real-time performance, which made it impossible to rely on the recognition of long lying after a fall.
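The fall-detection rule described above (a high-acceleration impact followed by evidence of lying) can be caricatured as a threshold check on the acceleration magnitude. This is a deliberately simplified sketch under assumed thresholds and an assumed "post-impact stillness" proxy for lying, not the paper's rule set, which also uses recognized orientation.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def detect_fall(acc_mag, impact_thresh=2.5 * G, still_std=0.3):
    """Flag a fall when an impact spike (magnitude above impact_thresh)
    is followed by a near-still segment close to 1 g, a crude stand-in
    for the 'lying after a fall' cue used by rule-based detectors."""
    impact = np.where(acc_mag > impact_thresh)[0]
    if impact.size == 0:
        return False
    after = acc_mag[impact[0] + 1:]
    return bool(after.size > 10
                and after.std() < still_std
                and abs(after.mean() - G) < 1.0)

# Simulated traces: an impact spike followed by stillness vs quiet standing.
fall_trace = np.concatenate([np.full(50, G), [3.2 * G], np.full(50, G)])
walk_trace = np.full(100, G)
```

The 78 % F-measure reported above reflects exactly the weakness of such simple cues: fall-like events (e.g., sitting down hard) trip the impact threshold, while slow falls may never cross it.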