Comparison of Feature Learning Methods for Human Activity Recognition Using Wearable Sensors
Related papers
Performance Analysis of Deep Learning based Human Activity Recognition Methods
Journal of Applied Science & Process Engineering, 2022
Human Activity Recognition (HAR) is one of the most important branches of human-centered research. Along with the development of artificial intelligence, deep learning techniques have achieved remarkable success in computer vision. In recent years, interest has grown in HAR systems applied to healthcare, security surveillance, and human motion-based activities. A HAR system is essentially built around a wearable device equipped with a set of sensors (accelerometers, gyroscopes, magnetometers, heart-rate sensors, etc.). Different methods are being applied to improve the accuracy and performance of HAR systems. In this paper, we implement an Artificial Neural Network (ANN) and a Convolutional Neural Network (CNN) combined with Long Short-Term Memory (LSTM), with varying numbers of layers, and compare their accuracy on the HAR task. Comparing the accuracy of the different methods, we observe that our proposed model with two CNN layers and one LSTM layer performs best.
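As a rough illustration of the best-performing configuration described above (two convolutional layers followed by one LSTM layer), the following Keras sketch shows one plausible way to wire such a model. The window length, channel count, filter sizes, and class count are illustrative assumptions, not the paper's exact hyperparameters.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed shapes: 128-sample windows, 6 sensor channels
# (3-axis accelerometer + 3-axis gyroscope), 6 activity classes.
WINDOW, CHANNELS, N_CLASSES = 128, 6, 6

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # CNN layer 1
    layers.Conv1D(64, kernel_size=5, activation="relu"),  # CNN layer 2
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(100),                                     # single LSTM layer
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```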
Human Activity Recognition using Wearable Sensors by Deep Convolutional Neural Networks
Human physical activity recognition based on wearable sensors has applications relevant to daily life, such as healthcare. Achieving high recognition accuracy at low computational cost is an important issue in ubiquitous computing. Rather than extracting handcrafted features from time-series sensor signals, we assemble signal sequences from accelerometers and gyroscopes into a novel activity image, which enables a Deep Convolutional Neural Network (DCNN) to automatically learn the optimal features from the activity image for the activity recognition task. Our proposed approach is evaluated on three public datasets and outperforms the state of the art in both recognition accuracy and computational cost.
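A minimal sketch of the general idea, not the paper's exact construction: time-aligned accelerometer and gyroscope axes are stacked into a 2-D array that a 2-D CNN can consume as an image.

```python
import numpy as np

def make_activity_image(acc, gyro):
    """Stack time-aligned accelerometer and gyroscope axes into a
    2-D 'image' of shape (n_channels, window_length).

    acc, gyro: arrays of shape (window_length, 3).
    This is simplified channel stacking; the paper's method also
    rearranges rows so that signal pairs become adjacent before the
    2-D DCNN is applied.
    """
    image = np.concatenate([acc.T, gyro.T], axis=0)  # (6, window_length)
    return image[..., np.newaxis]                    # channel dim for a 2-D CNN

acc = np.random.randn(128, 3).astype("float32")
gyro = np.random.randn(128, 3).astype("float32")
print(make_activity_image(acc, gyro).shape)  # (6, 128, 1)
```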
Human Activity Recognition using Deep Learning Models on Smartphones and Smartwatches Sensor Data
2021
In recent years, human activity recognition has garnered considerable attention in both industrial and academic research because of the wide deployment of sensors, such as accelerometers and gyroscopes, in products such as smartphones and smartwatches. Activity recognition is currently applied in various fields where valuable information about an individual’s functional ability and lifestyle is needed. In this study, we used the popular WISDM dataset for activity recognition. Using multivariate analysis of covariance (MANCOVA), we established a statistically significant difference (p < 0.05) between the data generated from the sensors embedded in smartphones and smartwatches, showing that smartphones and smartwatches do not capture data in the same way because of where they are worn. We deployed several neural network architectures to classify 15 different hand- and non-hand-oriented activities. These models include Long Short-Term Memory (LSTM) and Bi-directional LSTM (BiLSTM) networks…
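As a hedged sketch of the kind of recurrent classifier the study describes, a bidirectional LSTM over windowed sensor data might look as follows. The 15-class output matches the activities mentioned above; the window size and layer width are assumptions.

```python
from tensorflow.keras import layers, models

# Assumed shapes: 200-sample windows, 3 accelerometer axes, 15 activities.
model = models.Sequential([
    layers.Input(shape=(200, 3)),
    layers.Bidirectional(layers.LSTM(64)),  # bi-directional LSTM over the window
    layers.Dense(15, activation="softmax"), # 15 hand/non-hand oriented activities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```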
Deep Learning for Sensor-based Human Activity Recognition
ACM Computing Surveys, 2022
The vast proliferation of sensor devices and the Internet of Things enables applications of sensor-based activity recognition. However, substantial challenges can influence the performance of recognition systems in practical scenarios. Recently, as deep learning has demonstrated its effectiveness in many areas, numerous deep methods have been investigated to address the challenges in activity recognition. In this study, we present a survey of state-of-the-art deep learning methods for sensor-based human activity recognition. We first introduce the multi-modality of the sensory data and provide information on public datasets that can be used for evaluation in different challenge tasks. We then propose a new taxonomy that structures the deep methods by the challenges they address. Challenges and challenge-related deep methods are summarized and analyzed to form an overview of current research progress. At the end of this work, we discuss the open issues and provide some insights…
A Deep Learning Method for Complex Human Activity Recognition Using Virtual Wearable Sensors
Spatial Data and Intelligence, 2021
Sensor-based human activity recognition (HAR) is now a research hotspot in multiple application areas. With the rise of smart wearable devices equipped with inertial measurement units (IMUs), researchers have begun to utilize IMU data for HAR. By employing machine learning algorithms, early IMU-based research for HAR achieved accurate classification results on traditional classical HAR datasets containing only simple and repetitive daily activities. However, these datasets rarely display the rich diversity of information found in real scenes. In this paper, we propose a novel deep learning method for complex HAR in real scenes. Specifically, in the off-line training stage, the AMASS dataset, containing abundant human poses and virtual IMU data, is innovatively adopted to enhance variety and diversity. Moreover, a deep convolutional neural network with an unsupervised penalty is proposed to automatically extract features from AMASS and improve robustness. In the on-line testing stage, leveraging the advantages of transfer learning, we obtain the final result by fine-tuning part of the neural network (optimizing the parameters of the fully-connected layers) using real IMU data. The experimental results show that the proposed method converges in only a few iterations and achieves an accuracy of 91.15% on a real IMU dataset, demonstrating its efficiency and effectiveness.
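The transfer-learning step described above (freeze the convolutional feature extractor trained on virtual IMU data, then fine-tune only the fully-connected layers on real IMU data) might be sketched in Keras as follows. The network `build_har_cnn` is a hypothetical stand-in for the model pre-trained on AMASS, not the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_har_cnn(window=128, channels=6, n_classes=8):
    """Hypothetical CNN stand-in for the network pre-trained on
    virtual (AMASS-derived) IMU data."""
    return models.Sequential([
        layers.Input(shape=(window, channels)),
        layers.Conv1D(64, 5, activation="relu", name="conv1"),
        layers.Conv1D(64, 5, activation="relu", name="conv2"),
        layers.GlobalAveragePooling1D(name="pool"),
        layers.Dense(128, activation="relu", name="fc1"),
        layers.Dense(n_classes, activation="softmax", name="fc2"),
    ])

pretrained = build_har_cnn()
# ... assume pretrained.fit(...) has already run on the virtual IMU data ...

# On-line stage: freeze the convolutional layers and fine-tune only the
# fully-connected layers on the (smaller) real IMU dataset.
for layer in pretrained.layers:
    layer.trainable = layer.name.startswith("fc")
pretrained.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
```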
Deep learning approaches for human activity recognition using wearable technology
Medicinski podmladak
The need for long-term monitoring of individuals in their natural environment has driven the development of a variety of wearable healthcare sensors for a wide range of applications: medical monitoring in clinical or home environments, physical activity assessment of athletes and recreational users, baby monitoring in maternity hospitals and homes, etc. Neural networks (NNs) are a data-driven type of modelling: they learn from experience, without knowledge of a model of the phenomenon, given only the desired "output" data for the training "input" data. The most promising machine learning concept involving NNs is the deep learning (DL) approach. The focus of this review is on DL approaches for physiological activity recognition and human movement analysis using wearable technologies. The review shows that deep learning techniques are useful tools for predicting health conditions and for overall monitoring of data streamed by wearable systems. Despite considerable progress and a wide field of applications, there are still limitations and room for improvement in DL approaches for wearable healthcare systems, which may lead to more robust and reliable technology for personalized healthcare.
Basic Activity Recognition from Wearable Sensors Using a Lightweight Deep Neural Network
Journal of ICT Standardization
The field of human activity recognition has undergone great development, making its presence felt in sectors such as healthcare and supervision. Identifying the fundamental behaviours that occur regularly in everyday life can be extremely useful for systems that aid the elderly, and it opens the door to detecting more complicated activities in a smart-home environment. Recently, deep learning techniques have allowed features to be extracted from sensor readings automatically, in a hierarchical way, through non-linear transformations. In this study, we propose a deep learning model that works on raw data without any pre-processing. Several human activities can be recognized by our stacked LSTM network. We demonstrate that our outcomes are comparable to or better than those obtained with traditional feature-engineering approaches. Furthermore, our model is lightweight and can be deployed on edge devices. Based on our experiments…
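A minimal sketch of a stacked LSTM of the kind described, consuming raw windows with no hand-crafted features. The depth and widths are assumptions, chosen small to reflect the stated goal of running on edge devices.

```python
from tensorflow.keras import layers, models

# Assumed shapes: raw 128-sample windows of 3-axis accelerometer
# data, no pre-processing, 6 activity classes.
model = models.Sequential([
    layers.Input(shape=(128, 3)),
    layers.LSTM(32, return_sequences=True),  # first LSTM layer passes sequences on
    layers.LSTM(32),                         # second (stacked) LSTM layer
    layers.Dense(6, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```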
IEEE Sensors Journal, 2021
Deep neural networks are an effective choice for automatically recognizing human actions from data collected by various wearable sensors. These networks automate feature extraction, relying entirely on the data. However, noise in the time-series data and complex inter-modal relationships among the sensors complicate this process. In this paper, we propose a novel multi-stage training approach that increases diversity in the feature-extraction process, so that actions are recognized accurately by combining varieties of features extracted from diverse perspectives. Initially, instead of a single type of transformation, numerous transformations are applied to the time-series data to obtain variegated representations of the features encoded in the raw data. An efficient deep CNN architecture is proposed that can be individually trained to extract features from each transformed space. These CNN feature extractors are then merged into an optimal architecture, finely tuned to optimize the diversified features through a combined training stage or multiple sequential training stages. This approach offers the opportunity to explore the features encoded in raw sensor data using multifarious observation windows, with ample scope for efficient feature selection before final convergence. Extensive experiments on three publicly available datasets show consistently outstanding performance, with average five-fold cross-validation accuracies of 99.29% on the UCI HAR dataset, 99.02% on the USC HAR dataset, and 97.21% on the SKODA dataset, outperforming other state-of-the-art approaches.
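The merging stage might be sketched as follows: two CNN branches, each fed a differently transformed view of the same window, are concatenated before a shared classifier. The specific transformations, branch depths, and multi-stage training schedule are the paper's contribution and are only loosely approximated here.

```python
from tensorflow.keras import layers, models

WINDOW, CHANNELS, N_CLASSES = 128, 6, 6

def cnn_branch(name):
    """One CNN feature extractor; each branch can be trained on its
    own transformed view before the merged training stage."""
    inp = layers.Input(shape=(WINDOW, CHANNELS))
    x = layers.Conv1D(64, 5, activation="relu")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(64, 5, activation="relu")(x)
    x = layers.GlobalAveragePooling1D()(x)
    return models.Model(inp, x, name=name)

raw_view = layers.Input(shape=(WINDOW, CHANNELS), name="raw")
alt_view = layers.Input(shape=(WINDOW, CHANNELS), name="transformed")

# Concatenate the diversified features from both extractors.
features = layers.concatenate([
    cnn_branch("branch_raw")(raw_view),
    cnn_branch("branch_transformed")(alt_view),
])
out = layers.Dense(N_CLASSES, activation="softmax")(features)
merged = models.Model([raw_view, alt_view], out)
merged.compile(optimizer="adam",
               loss="sparse_categorical_crossentropy",
               metrics=["accuracy"])
```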
2016 IEEE 13th International Conference on Wearable and Implantable Body Sensor Networks (BSN), 2016
Human activity recognition provides valuable contextual information for wellbeing, healthcare, and sport applications. Over the past decades, many machine learning approaches have been proposed to identify activities from inertial sensor data for specific applications. Most methods, however, are designed for offline processing rather than processing on the sensor node. In this paper, a human activity recognition technique based on a deep learning methodology is designed to enable accurate, real-time classification on low-power wearable devices. To obtain invariance against changes in sensor orientation, sensor placement, and sensor acquisition rate, we design a feature-generation process applied to the spectral domain of the inertial data. Specifically, the proposed method uses sums of temporal convolutions of the transformed input. The accuracy of the proposed approach is evaluated against current state-of-the-art methods on both laboratory and real-world activity datasets. A systematic analysis of the feature-generation parameters and a comparison of activity-recognition computation times on mobile devices and sensor nodes are also presented.
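A rough sketch of the spectral-feature idea: transform each inertial window into the frequency domain, then apply temporal (1-D) convolutions over the spectrum. The FFT-magnitude transform and the layer sizes here are assumptions; the paper's exact feature-generation process differs in detail.

```python
import numpy as np
from tensorflow.keras import layers, models

def to_spectrum(windows):
    """Per-channel magnitude spectrum of each window.
    windows: (n, 128, 3) -> (n, 65, 3).
    The magnitude spectrum is invariant to time shifts in the window."""
    return np.abs(np.fft.rfft(windows, axis=1)).astype("float32")

# Temporal convolutions applied over the spectral axis of the input.
model = models.Sequential([
    layers.Input(shape=(65, 3)),            # rfft of 128-sample, 3-axis windows
    layers.Conv1D(16, 9, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(6, activation="softmax"),
])

windows = np.random.randn(4, 128, 3).astype("float32")
print(model(to_spectrum(windows)).shape)  # (4, 6)
```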
A Hybrid Deep Neural Network for Human Activity Recognition based on IoT Sensors
International Journal of Advanced Computer Science and Applications, 2021
Internet of Things (IoT) sensors have received a lot of interest in recent years due to rising application demands in domains such as ubiquitous and context-aware computing, activity surveillance, ambient assisted living, and, more specifically, human activity recognition. Recent developments in deep learning make it possible to extract high-level features automatically, eliminating the reliance of traditional machine learning techniques on hand-crafted features. In this paper, we introduce a network that can identify a variety of everyday human actions carried out in a smart-home environment, using the raw signals generated by IoT motion sensors. We base our architecture on a combination of convolutional neural network (CNN) and gated recurrent unit (GRU) layers: the CNN is first deployed to extract local, scale-invariant features, and the GRU layers then extract sequential temporal dependencies. We tested our model, called CNGRU, on three public datasets. It achieves accuracy better than or comparable to existing state-of-the-art models.
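A hedged sketch of a CNN-GRU pipeline of the kind described (convolutions for local feature extraction, then GRU layers for temporal dependencies); the shapes and layer sizes are assumptions, not the authors' exact CNGRU configuration.

```python
from tensorflow.keras import layers, models

# Assumed shapes: 128-step windows of readings from 10 smart-home
# motion sensors, 8 activity classes.
model = models.Sequential([
    layers.Input(shape=(128, 10)),
    layers.Conv1D(64, 5, activation="relu"),  # local feature extraction
    layers.MaxPooling1D(2),
    layers.GRU(64, return_sequences=True),    # temporal dependencies
    layers.GRU(64),
    layers.Dense(8, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```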