Human activity recognition with smartphone sensors using deep learning neural networks

A Deep Learning Framework for Human Activity Recognition Using Smartphone Data

iJARCEE, 2021

Smartphones are among the most useful devices in our daily lives, and with advancing technology they become progressively better at meeting users' needs and expectations. To make these devices more useful and powerful, designers add new modules and components to the hardware. Sensors play a major role in making smartphones more practical and aware of their environment, so most smartphones now ship with various embedded sensors, which makes it possible to collect vast amounts of data about the user's daily life and activities. The goal of Human Activity Recognition is to identify the activities performed by an individual from information about that person and their surrounding environment. A great deal of research is being conducted in Human Activity Recognition, in which human behaviour is interpreted from features derived from movement, location, physiological signals and environmental data. The proposed system presents a deep learning framework for human activity recognition using smartphone data. The dataset was downloaded from kaggle.com and contains accelerometer and gyroscope data obtained from a Samsung Galaxy S2 smartphone, covering several activities performed by an individual. The proposed system uses the Keras framework with Theano as the backend to build the model. After successful training, the proposed method reached an accuracy of 99.06% at the 120th epoch.
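The abstract names Keras (with a Theano backend) and windowed accelerometer/gyroscope data but does not give the architecture, so the following is only a minimal sketch: window length, channel count, class count and layer sizes are assumptions, and the TensorFlow backend is used here instead of Theano for convenience.

```python
# Minimal sketch of a Keras classifier for windowed accelerometer/gyroscope data.
# All sizes below are illustrative assumptions, not the paper's actual settings.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, CHANNELS, N_CLASSES = 128, 6, 6  # assumed: 3-axis accel + 3-axis gyro, 6 activities

model = keras.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data standing in for the Kaggle accelerometer/gyroscope windows.
X = np.random.randn(1000, WINDOW, CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=1000)
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2)
```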

Smartphone Sensor-Based Activity Recognition by Using Machine Learning and Deep Learning Algorithms

International Journal of Machine Learning and Computing, 2018

Smartphones are widely used today, and their sensors make it possible to detect changes in the user's environment; in this paper we propose a method to identify human activities with reasonably high accuracy from smartphone sensor data. The raw sensor data are first collected from two categories of human activity: motion-based, e.g., walking and running; and phone movement-based, e.g., left-right, up-down, clockwise and counterclockwise movement. Two types of feature extraction are then designed from the raw sensor data, and activity recognition is analyzed using machine learning classification models based on these features. Next, the activity recognition performance is analyzed with a Convolutional Neural Network (CNN) model using only the raw data. Our experiments show a substantial improvement in results when well-designed features are added and when the CNN model is applied to the raw smartphone sensor data with judicious learning techniques.
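The abstract does not list the hand-crafted features or the classifier, so the following sketch of the feature-based branch assumes simple per-window statistics (mean, standard deviation, min, max) and a random forest purely for illustration.

```python
# Sketch of a feature-based branch: per-window statistical features fed to a
# classical classifier. Feature choices and classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def window_features(windows):
    """windows: (n_windows, window_len, n_channels) array of raw sensor samples."""
    feats = [windows.mean(axis=1), windows.std(axis=1),
             windows.min(axis=1), windows.max(axis=1)]
    return np.concatenate(feats, axis=1)

# Dummy stand-in for windowed smartphone sensor data with 6 activity labels.
X_raw = np.random.randn(2000, 128, 3)
y = np.random.randint(0, 6, size=2000)

X = window_features(X_raw)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```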

A robust human activity recognition system using smartphone sensors and deep learning

Future Generation Computer Systems, 2018

In the last few decades, human activity recognition has attracted considerable research attention from a wide range of pattern recognition and human-computer interaction researchers due to its prominent applications, such as smart home health care. For instance, activity recognition systems can be adopted in a smart home health care system to improve the rehabilitation processes of patients. There are various ways of using different sensors for human activity recognition in a smartly controlled environment. Among these, physical human activity recognition through wearable sensors provides valuable information about an individual's degree of functional ability and lifestyle. In this paper, we present a smartphone inertial sensors-based approach for human activity recognition. Efficient features are first extracted from the raw data; the features include the mean, median, autoregressive coefficients, and others. The features are further processed by kernel principal component analysis (KPCA) and linear discriminant analysis (LDA) to make them more robust. Finally, the features are used to train a Deep Belief Network (DBN) for activity recognition. The proposed approach was compared with traditional recognition approaches such as the typical multiclass Support Vector Machine (SVM) and Artificial Neural Network (ANN), and it outperformed them.
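The feature-processing chain described above (hand-crafted features, then KPCA, then LDA, then a classifier) can be mocked up in scikit-learn as below. This is only a sketch: scikit-learn has no Deep Belief Network, so an MLP classifier stands in for the DBN, and all dimensions are illustrative assumptions.

```python
# Sketch: hand-crafted features -> kernel PCA -> LDA -> classifier.
# An MLP stands in for the paper's DBN; sizes are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Dummy stand-in for per-window features (mean, median, AR coefficients, ...).
X = np.random.randn(1500, 40)
y = np.random.randint(0, 6, size=1500)

pipe = make_pipeline(
    KernelPCA(n_components=20, kernel="rbf"),                   # nonlinear dimensionality reduction
    LinearDiscriminantAnalysis(n_components=5),                  # class-discriminative projection
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300),    # DBN stand-in
)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
pipe.fit(X_tr, y_tr)
print("hold-out accuracy:", pipe.score(X_te, y_te))
```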

An Efficient and Lightweight Deep Learning Model for Human Activity Recognition Using Smartphones

Sensors

Traditional pattern recognition approaches have gained a lot of popularity. However, they are largely dependent upon manual feature extraction, which obscures the generalized model. The sequences of accelerometer data recorded by smartphones can be classified into well-known movements through human activity recognition. With the high success and wide adoption of deep learning approaches for the recognition of human activities, these techniques are widely used in wearable devices and smartphones to recognize human activities. In this paper, convolutional layers are combined with long short-term memory (LSTM) in a deep neural network for human activity recognition (HAR). The proposed model extracts the features in an automated way and categorizes them with some model attributes. In general, the LSTM is a form of recurrent neural network (RNN) well suited to processing temporal sequences. In the proposed architectu...
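A CNN-LSTM stack of the kind described above can be sketched in Keras as follows; 1-D convolutions extract local features from raw windows and an LSTM models their temporal order. Filter counts, kernel sizes and the dense head are assumptions, not the paper's values.

```python
# Minimal sketch of a CNN-LSTM classifier for raw sensor windows.
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, CHANNELS, N_CLASSES = 128, 3, 6  # assumed accelerometer windows, 6 activities

model = keras.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                      # temporal modelling of the convolutional features
    layers.Dropout(0.5),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```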

Convolutional Neural Networks for Human Activity Recognition in Time and Frequency-Domain

Advances in Intelligent Systems and Computing, 2018

Human activity recognition (HAR) is an important technology in pervasive computing because it can be applied to many real-life, human-centric problems such as eldercare and healthcare. Successful research has so far focused on recognizing activities using time-series sensor data. In this paper, we propose a multi-scale deep convolutional neural network (CNN) to perform efficient HAR using smartphone sensors. Experiments show how a variation in the network parameters results in a better extraction of low- and mid-level features. An analysis of feature representations in the first layer also gives us insights into the nature of physical movements. Our approach outperforms other data mining techniques in HAR for the UniMiB SHAR benchmark dataset, achieving an overall performance of 88.23% on the test set.
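The paper's title refers to the time and frequency domains; one simple way to give a CNN both views is to append the FFT magnitude of each window as extra channels. The sketch below shows only that pre-processing idea and is an assumption, not the paper's exact multi-scale design.

```python
# Sketch: stack time-domain samples and their FFT magnitudes as CNN input channels.
import numpy as np

def add_frequency_channels(windows):
    """windows: (n, window_len, n_channels) -> (n, window_len, 2*n_channels)."""
    spectrum = np.abs(np.fft.fft(windows, axis=1))   # same length as the time axis
    return np.concatenate([windows, spectrum], axis=2)

X_time = np.random.randn(100, 128, 3)                # dummy accelerometer windows
X_both = add_frequency_channels(X_time)
print(X_both.shape)                                   # (100, 128, 6)
```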

Human Activity Recognition Using Cell Phone-Based Accelerometer and Convolutional Neural Network

Applied Sciences, 2021

Human Activity Recognition (HAR) has become an active field of research in the computer vision community. Recognizing the basic activities of human beings with the help of computers and mobile sensors can be beneficial for numerous real-life applications. The main objective of this paper is to recognize six basic human activities, viz., jogging, sitting, standing, walking, and whether a person is going upstairs or downstairs. This paper focuses on predicting the activities using a deep learning technique called the Convolutional Neural Network (CNN) and the accelerometer present in smartphones. Furthermore, the methodology proposed in this paper focuses on grouping the data in the form of nodes and dividing the nodes into three major layers of the CNN, after which the outcome is predicted in the output layer. This work also covers the training and testing evaluation of the two-dimensional CNN model. Finally, it was observed that the model was able to give a good prediction of the act...
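Since the abstract describes a two-dimensional CNN over accelerometer data, a common realisation is to treat each window as a small "image" with one axis per accelerometer channel. The sketch below follows that idea; the window length, reshaping and layer sizes are illustrative assumptions rather than the paper's node-grouping scheme.

```python
# Sketch: windowed x/y/z accelerometer data treated as a 2-D input for a small Conv2D net.
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, AXES, N_CLASSES = 90, 3, 6   # assumed: 90-sample windows of x/y/z acceleration, 6 activities

model = keras.Sequential([
    layers.Input(shape=(WINDOW, AXES, 1)),            # each window treated as a 90x3 image
    layers.Conv2D(32, kernel_size=(5, 1), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 1)),
    layers.Conv2D(64, kernel_size=(5, 1), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 1)),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```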

Real Time Human Activity Recognition: Smartphone Sensor based-using Deep Learning

2021

Human Activity Recognition is an emerging field of study with many innovations and applications. With digitalization, mobile development and advances in technology, smartphones have become an integral part of our lives. We have become so dependent on science and its innovations that living without mobile phones is nearly impossible. With advances in technology comes the responsibility of providing mankind with efficient, convenient and sustainable resources. Our project aims to implement the idea of "technology at your fingertips". As the number of elderly people is predicted to rise over the years, "aging in place" (living at home regardless of age and other factors) is becoming an important topic in the area of ambient assisted living (AAL). Therefore, we have proposed a human activity recognition system based on data collected from smartphone motion sensors for daily physical activity monitoring. The proposed approach implies developing a pred...

Comparative Study of Machine Learning and Deep Learning Architecture for Human Activity Recognition Using Accelerometer Data

International Journal of Machine Learning and Computing

Human activity recognition (HAR) has been a popular field of research in recent times. Many approaches have been implemented in the literature with the aim of recognizing and analyzing human activity. Classical machine learning approaches use hand-crafted feature extraction and classification techniques; of late, however, deep learning approaches have shown greater success in recognition accuracy with improved performance. With the current wide popularity of mobile phones and the various sensors already installed on them, such as accelerometers, gyroscopes, and cameras, activity recognition using data accumulated from mobile phones has become a significant area of research in HAR. In this paper, we investigate HAR based on data collected through the accelerometer sensor of mobile devices. We employ different machine learning (ML) classifiers and deep learning (DL) models across different benchmark datasets. The experimental results from this study provide a comparative analysis of the accuracy, performance, and costs of different ML algorithms and DL algorithms based on recurrent neural network (RNN) and convolutional neural network (CNN) models for activity recognition.
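The kind of side-by-side comparison described above, for the classical ML side, amounts to evaluating several classifiers on the same feature matrix. The sketch below assumes a generic per-window feature matrix and a handful of common scikit-learn classifiers; the paper's actual classifier list, features, and the RNN/CNN baselines (which would be built separately in a DL framework) are not given in the abstract.

```python
# Sketch: cross-validated comparison of several classical classifiers on one feature set.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

X = np.random.randn(1000, 30)            # stand-in for per-window accelerometer features
y = np.random.randint(0, 6, size=1000)

for name, clf in [("logreg", LogisticRegression(max_iter=500)),
                  ("svm", SVC()),
                  ("rf", RandomForestClassifier(n_estimators=100)),
                  ("knn", KNeighborsClassifier())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```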

ConvAE-LSTM: Convolutional Autoencoder Long Short-Term Memory Network for Smartphone-Based Human Activity Recognition

IEEE Access, 2022

The self-regulated recognition of human activities from time-series smartphone sensor data is a growing research area in smart and intelligent health care. Deep learning (DL) approaches have exhibited improvements over traditional machine learning (ML) models in various domains, including human activity recognition (HAR). Several issues are involved with traditional ML approaches; these include handcrafted feature extraction, which is a tedious and complex task involving expert domain knowledge, and the use of a separate dimensionality reduction module to overcome overfitting problems and hence provide model generalization. In this article, we propose a DL-based approach for activity recognition with smartphone sensor data, i.e., accelerometer and gyroscope data. Convolutional neural networks (CNNs), autoencoders (AEs), and long short-term memory (LSTM) possess complementary modeling capabilities: CNNs are good at automatic feature extraction, AEs are used for dimensionality reduction, and LSTMs are adept at temporal modeling. In this study, we take advantage of the complementarity of CNNs, AEs, and LSTMs by combining them into a unified architecture. We explore the proposed architecture, namely "ConvAE-LSTM", on four different standard public datasets (WISDM, UCI, PAMAP2, and OPPORTUNITY). The experimental results indicate that our novel approach is practical and provides relative performance improvements for smartphone-based HAR in terms of computational time, accuracy, F1-score, precision, and recall over existing state-of-the-art methods.
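One rough way to chain the three components named above in Keras is shown below: a 1-D convolutional autoencoder compresses each window, and an LSTM classifies the compressed sequence. Layer sizes and the joint reconstruction-plus-classification training are assumptions for illustration; the paper's exact ConvAE-LSTM wiring may differ.

```python
# Sketch: convolutional autoencoder (feature extraction + dimensionality reduction)
# feeding an LSTM classification head. All sizes are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

WINDOW, CHANNELS, N_CLASSES = 128, 6, 6

inputs = keras.Input(shape=(WINDOW, CHANNELS))

# Convolutional encoder.
x = layers.Conv1D(32, 5, padding="same", activation="relu")(inputs)
x = layers.MaxPooling1D(2)(x)                                     # 64 time steps
encoded = layers.Conv1D(16, 5, padding="same", activation="relu")(x)

# Convolutional decoder (reconstruction objective regularizes the encoder).
x = layers.Conv1D(32, 5, padding="same", activation="relu")(encoded)
x = layers.UpSampling1D(2)(x)                                     # back to 128 time steps
decoded = layers.Conv1D(CHANNELS, 5, padding="same", name="reconstruction")(x)

# LSTM head on the encoded sequence (temporal modelling + classification).
h = layers.LSTM(64)(encoded)
outputs = layers.Dense(N_CLASSES, activation="softmax", name="activity")(h)

model = keras.Model(inputs, [decoded, outputs])
model.compile(optimizer="adam",
              loss={"reconstruction": "mse",
                    "activity": "sparse_categorical_crossentropy"},
              metrics={"activity": "accuracy"})
model.summary()
```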

Human Activity Recognition Using Tools of Convolutional Neural Networks: A State of the Art Review, Data Sets, Challenges and Future Prospects

2022

Human Activity Recognition (HAR) plays a significant role in people's everyday lives because of its ability to learn extensive high-level information about human activity from wearable or stationary devices. A substantial amount of research has been conducted on HAR, and numerous approaches based on deep learning and machine learning have been exploited by the research community to classify human activities. The main goal of this review is to summarize recent works based on a wide range of deep neural network architectures, namely convolutional neural networks (CNNs), for human activity recognition. The reviewed systems are clustered into four categories depending on the input devices used: multimodal sensing devices, smartphones, radar, and vision devices. This review describes the performance, strengths, weaknesses, and hyperparameters of the CNN architectures for each reviewed system, with an overview of available public data sources. In addition, a discussion with t...