Deep learning and model personalization in sensor-based human activity recognition
Related papers
2020
In recent years there has been growing interest in techniques able to automatically recognize activities performed by people, a field known as human activity recognition (HAR). HAR can be crucial in monitoring people's wellbeing, with special regard to the elderly and to people affected by degenerative conditions. One of the main challenges is the population diversity problem: the natural differences between users' activity patterns mean that executions of the same activity performed by different people differ. Previous experiments have shown that personalization based on similarity between subjects and signals can increase the accuracy of human activity recognition models obtained with traditional machine learning techniques. In this article, we investigate whether personalization applied to deep learning techniques can lead to more accurate models than those obtained both by applying personalization to ...
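To illustrate the kind of similarity-based personalization the abstract describes, the following is a minimal sketch, assuming per-window feature matrices, activity labels and subject identifiers; the array names and the k-nearest-subject selection are illustrative choices, not the authors' exact method:

```python
# Minimal sketch (assumed data layout): personalize a HAR model by selecting
# the training subjects whose feature statistics are closest to the target
# user's, then training only on those subjects.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def subject_profile(X):
    """Summarize a subject's windows as the mean feature vector."""
    return X.mean(axis=0)

def personalize(features, labels, subject_ids, target_X, k=5):
    # Distance between each training subject's profile and the target user's.
    target = subject_profile(target_X)
    dists = {s: np.linalg.norm(subject_profile(features[subject_ids == s]) - target)
             for s in np.unique(subject_ids)}
    nearest = sorted(dists, key=dists.get)[:k]
    mask = np.isin(subject_ids, nearest)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(features[mask], labels[mask])
    return clf
```

Training only on the most similar subjects is one simple way to bias a model toward the target user's movement patterns.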
International Journal of Machine Learning and Computing
Human activity recognition (HAR) has been a popular field of research in recent times. Many approaches have been proposed in the literature with the aim of recognizing and analyzing human activity. Classical machine learning approaches rely on hand-crafted feature extraction followed by classification; of late, however, deep learning approaches have shown greater recognition accuracy and performance. With the wide popularity of mobile phones and the sensors already installed on them, such as accelerometers, gyroscopes and cameras, activity recognition from the data accumulating on mobile phones has become a significant area of HAR research. In this paper, we investigate HAR based on data collected through the accelerometer sensor of mobile devices. We employ different machine learning (ML) classifiers and deep learning (DL) models across different benchmark datasets. The experimental results provide a comparative analysis of the accuracy, performance and cost of the ML algorithms and of DL models based on recurrent neural network (RNN) and convolutional neural network (CNN) architectures for activity recognition.
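A minimal sketch of the classical-ML side of such a comparison, assuming pre-segmented tri-axial accelerometer windows; `windows` and `labels` are hypothetical arrays, and the feature set and classifiers are illustrative rather than the exact ones used in the paper:

```python
# Hand-crafted statistical features per accelerometer window, evaluated with
# several scikit-learn classifiers under 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def extract_features(windows):
    # windows: (n_windows, window_len, 3) raw accelerometer segments
    feats = [windows.mean(axis=1), windows.std(axis=1),
             windows.min(axis=1), windows.max(axis=1)]
    return np.concatenate(feats, axis=1)

X = extract_features(windows)
for name, clf in [("RF", RandomForestClassifier(n_estimators=100)),
                  ("SVM", SVC()),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, labels, cv=5)
    print(name, scores.mean())
```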
Real Time Human Activity Recognition: Smartphone Sensor based-using Deep Learning
2021
Human activity recognition is an emerging field of study with many innovations and applications. With digitalization, mobile development and advances in technology, smartphones have become an integral part of our lives, and living without them is nearly impossible. With this advancement comes the responsibility of providing efficient and sustainable resources; our project aims to implement the idea of "technology at your fingertips". As the number of elderly people is predicted to rise over the years, "aging in place" (living at home regardless of age and other factors) is becoming an important topic in the area of ambient assisted living (AAL). We therefore propose a human activity recognition system based on data collected from smartphone motion sensors for daily physical activity monitoring. The proposed approach implies developing a pred...
Human Activity Recognition using Deep Learning Models on Smartphones and Smartwatches Sensor Data
2021
In recent years, human activity recognition has garnered considerable attention in both industrial and academic research because of the wide deployment of sensors, such as accelerometers and gyroscopes, in products such as smartphones and smartwatches. Activity recognition is currently applied in various fields where valuable information about an individual's functional ability and lifestyle is needed. In this study, we used the popular WISDM dataset for activity recognition. Using multivariate analysis of covariance (MANCOVA), we established a statistically significant difference (p < 0.05) between the data generated from the sensors embedded in smartphones and smartwatches, showing that smartphones and smartwatches do not capture data in the same way because of the locations where they are worn. We deployed several neural network architectures to classify 15 different hand- and non-hand-oriented activities. These models include long short-term memory (LSTM), Bi-directio...
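As a simplified illustration of the statistical comparison described, the sketch below applies a MANOVA from statsmodels (standing in for the MANCOVA used in the study) to per-window sensor features grouped by device; the DataFrame `df` and its column names are hypothetical:

```python
# Test whether per-window feature distributions differ between smartphone and
# smartwatch recordings. `df` has one row per window, numeric feature columns
# and a 'device' column with values "phone" or "watch".
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

manova = MANOVA.from_formula(
    "mean_x + mean_y + mean_z + std_x + std_y + std_z ~ device", data=df
)
print(manova.mv_test())  # Wilks' lambda, Pillai's trace, etc., with p-values
```

A significant test statistic here would support the claim that the two device types do not capture data in the same way.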
Deep Architectures for Human Activity Recognition using Sensors
3C Tecnología. Glosas de innovación aplicadas a la pyme, 2019
Human activity recognition (HAR) has become a prominent research field in recent years due to applications such as physical fitness monitoring, assisted living, elderly care, biometric authentication and many more. The ubiquitous nature of sensors makes them a good choice for activity recognition. The latest smart gadgets are equipped with most common wearable sensors, e.g. accelerometer, gyroscope, GPS, compass, camera and microphone. These sensors measure various aspects of an object and are easy to use at low cost. The use of sensors in HAR opens new avenues for machine learning (ML) researchers to accurately recognize human activities. Deep learning (DL) is becoming popular among HAR researchers due to its outstanding performance over conventional ML techniques. In this paper, we review recent research studies on deep models for sensor-based human activity recognition. The aim of this article is to identify recent trends and challenges in HAR.
Deep learning approaches for human activity recognition using wearable technology
Medicinski podmladak
The need for long-term monitoring of individuals in their natural environment has driven the development of a variety of wearable healthcare sensors for a wide range of applications: medical monitoring in clinical or home environments, physical activity assessment of athletes and recreational users, baby monitoring in maternity hospitals and homes, etc. Neural networks (NN) are a data-driven type of modelling: they learn from experience, without knowledge of a model of the phenomenon, given only the desired "output" data for the training "input" data. The most promising machine learning concept involving NN is the deep learning (DL) approach. The focus of this review is on DL approaches for physiological activity recognition and human movement analysis using wearable technologies. This review shows that deep learning techniques are useful tools for health condition prediction and for the overall monitoring of data streamed by wearable systems. Despite considerable progress and a wide field of applications, there are still limitations and room for improvement in DL approaches for wearable healthcare systems, which may lead to more robust and reliable technology for personalized healthcare.
A robust human activity recognition system using smartphone sensors and deep learning
Future Generation Computer Systems, 2018
In the last few decades, human activity recognition has attracted considerable attention from a wide range of pattern recognition and human-computer interaction researchers due to prominent applications such as smart home health care. For instance, activity recognition systems can be adopted in a smart home health care system to improve the rehabilitation process of patients. There are various ways of using different sensors for human activity recognition in a smartly controlled environment; among them, physical activity recognition through wearable sensors provides valuable information about an individual's degree of functional ability and lifestyle. In this paper, we present a smartphone inertial sensor-based approach for human activity recognition. Efficient features, including mean, median and autoregressive coefficients, are first extracted from the raw data. The features are further processed by kernel principal component analysis (KPCA) and linear discriminant analysis (LDA) to make them more robust. Finally, a Deep Belief Network (DBN) is trained on the features for activity recognition. The proposed approach was compared with traditional recognition approaches such as a multiclass Support Vector Machine (SVM) and an Artificial Neural Network (ANN), which it outperformed.
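A hedged sketch of the described pipeline (features, then KPCA, then LDA, then a classifier), assuming pre-computed feature arrays `X_train`, `y_train`, `X_test`, `y_test`; scikit-learn ships no Deep Belief Network, so an MLP stands in for the DBN stage here:

```python
# Feature post-processing with kernel PCA and LDA, followed by a neural
# classifier (MLP as a stand-in for the paper's DBN).
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier

pipeline = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=50, kernel="rbf"),
    LinearDiscriminantAnalysis(n_components=5),  # at most n_classes - 1 components
    MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500),
)
pipeline.fit(X_train, y_train)
print(pipeline.score(X_test, y_test))
```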
Human activity recognition with smartphone sensors using deep learning neural networks
Human activities are inherently translation invariant and hierarchical. Human activity recognition (HAR), a field that has garnered a lot of attention in recent years due to its high demand in various application domains, makes use of time-series sensor data to infer activities. In this paper, a deep convolutional neural network (convnet) is proposed to perform efficient and effective HAR using smartphone sensors by exploiting the inherent characteristics of activities and 1D time-series signals, at the same time providing a way to automatically and data-adaptively extract robust features from raw data. Experiments show that convnets indeed derive relevant and more complex features with every additional layer, although the difference in feature complexity decreases with each layer. Exploiting a wider time span of temporal local correlation (kernel sizes of 1 × 9 to 1 × 14) and a low pooling size (1 × 2 to 1 × 3) is shown to be beneficial. Convnets also achieved almost perfect classification of moving activities, especially very similar ones which were previously perceived to be very difficult to classify. Lastly, convnets outperform other state-of-the-art data mining techniques in HAR on the benchmark dataset collected from 30 volunteer subjects, achieving an overall performance of 94.79% on the test set with raw sensor data, and 95.75% when the temporal fast Fourier transform of the HAR data set is added.
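The following is a small sketch consistent with the reported architecture choices, i.e. 1D convolutions with kernel sizes in the 1 × 9 to 1 × 14 range and pooling sizes of 1 × 2 to 1 × 3; the layer count, filter counts and the 128 × 9 input shape (UCI-HAR-style windows) are assumptions rather than the paper's exact configuration:

```python
# 1D convnet over raw smartphone sensor windows.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 9)),            # 128-sample windows, 9 sensor channels
    layers.Conv1D(64, kernel_size=9, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(128, kernel_size=9, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(6, activation="softmax"),    # 6 activity classes, integer-encoded labels
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```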
Deep Learning for Sensor-based Human Activity Recognition
ACM Computing Surveys, 2022
The vast proliferation of sensor devices and the Internet of Things enables applications of sensor-based activity recognition. However, substantial challenges can influence the performance of a recognition system in practical scenarios. Recently, as deep learning has demonstrated its effectiveness in many areas, many deep methods have been investigated to address the challenges in activity recognition. In this study, we present a survey of state-of-the-art deep learning methods for sensor-based human activity recognition. We first introduce the multi-modality of the sensory data and provide information on public datasets that can be used for evaluation in different challenge tasks. We then propose a new taxonomy that structures the deep methods by challenge. Challenges and challenge-related deep methods are summarized and analyzed to form an overview of current research progress. At the end of this work, we discuss the open issues and provide some in...