Evaluation of 2D and 3D posture for human activity recognition

Human Activity Recognition Process Using 3-D Posture Data

IEEE Transactions on Human-Machine Systems, 2014

In this paper, we present a method for recognizing human activities using information sensed by an RGB-D camera, namely the Microsoft Kinect. Our approach is based on the estimation of relevant joints of the human body by means of the Kinect; three machine learning techniques, i.e., K-means clustering, Support Vector Machines, and Hidden Markov Models, are combined to detect the postures involved while performing an activity, to classify them, and to model each activity as a spatio-temporal evolution of known postures. Experiments were performed on KARD, a new dataset, and on CAD-60, a public dataset. Experimental results show that our solution outperforms four relevant works based on RGB-D image fusion, a hierarchical Maximum Entropy Markov Model, Markov Random Fields, and Eigenjoints, respectively. The performance we achieved, i.e., a precision of 77.3% and a recall of 76.7%, and the ability to recognize activities in real time show promise for applied use.
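The pipeline described above (cluster per-frame postures, classify them, then model each activity as a temporal evolution of posture labels) can be sketched roughly as follows. The feature layout, the number of postures, the synthetic data, and the replacement of the Hidden Markov Model with a simple per-activity Markov chain are assumptions made for illustration only; this is not the authors' implementation.

```python
# Rough sketch: K-means posture vocabulary, SVM posture classifier, and a
# per-activity Markov chain over posture symbols (a stand-in for the HMM).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
N_JOINTS, N_POSTURES = 15, 8            # assumed: 15 Kinect joints, 8 postures

# Synthetic per-frame features: flattened 3-D joint coordinates.
frames = rng.normal(size=(2000, N_JOINTS * 3))

# 1) Unsupervised posture vocabulary.
kmeans = KMeans(n_clusters=N_POSTURES, n_init=10, random_state=0).fit(frames)
posture_labels = kmeans.labels_

# 2) Discriminative posture classifier trained on the clustered frames.
svm = SVC(kernel="rbf", gamma="scale").fit(frames, posture_labels)

# 3) Per-activity transition model over posture symbols.
def fit_transitions(sequences, n_states):
    T = np.ones((n_states, n_states))            # add-one smoothing
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            T[a, b] += 1
    return T / T.sum(axis=1, keepdims=True)

def log_likelihood(seq, T):
    return sum(np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:]))

# Toy "activities" described as posture-label sequences (random stand-ins).
train = {"wave":  [svm.predict(rng.normal(size=(50, N_JOINTS * 3))) for _ in range(5)],
         "stand": [svm.predict(rng.normal(size=(50, N_JOINTS * 3))) for _ in range(5)]}
models = {name: fit_transitions(seqs, N_POSTURES) for name, seqs in train.items()}

test_seq = svm.predict(rng.normal(size=(50, N_JOINTS * 3)))
prediction = max(models, key=lambda name: log_likelihood(test_seq, models[name]))
print("predicted activity:", prediction)
```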

Recognizing Human Activity Using a Machine Learning Algorithm

International Research Journal Of Modernization In Engineering Technology And Science, 2024

In recent years, yoga has become part of life for many people across the world, creating a need for the scientific analysis of yoga postures. Pose detection techniques can be used to identify these postures and to help people perform yoga more accurately. Posture recognition is a challenging task due to the limited availability of datasets and the need to detect postures in real time. To overcome this problem, a large dataset was created containing at least 5,500 images of ten different yoga poses, and the tf-pose estimation algorithm, which draws a skeleton of the human body in real time, was used. Angles of the joints in the human body are extracted from the tf-pose skeleton and used as features for various machine learning models. 80% of the dataset was used for training and 20% for testing. The dataset was evaluated with different machine learning classification models, achieving an accuracy of 99.04% with a Random Forest classifier.
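As a rough illustration of the angle-feature pipeline, the sketch below computes joint angles from 2D keypoints and trains a Random Forest on an 80/20 split. The keypoint indexing, the joint triplets, and the synthetic data are assumptions; the paper's 5,500-image yoga dataset and its exact feature set are not reproduced here.

```python
# Hedged sketch: joint angles from 2-D keypoints as features for a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by the segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Assumed COCO-style keypoint indices; the triplets are illustrative only.
TRIPLETS = [(8, 9, 10), (11, 12, 13), (6, 8, 9), (5, 11, 12)]

def angle_features(keypoints):
    """keypoints: (18, 2) array of (x, y) pixel coordinates."""
    return np.array([joint_angle(keypoints[a], keypoints[b], keypoints[c])
                     for a, b, c in TRIPLETS])

# Synthetic stand-in for the yoga dataset (10 classes).
rng = np.random.default_rng(1)
X = np.stack([angle_features(rng.uniform(0, 640, size=(18, 2))) for _ in range(1000)])
y = rng.integers(0, 10, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
clf = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```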

Development of a Human Posture Recognition System for Surveillance Application

International Journal of Computing and Digital Systems, 2021

A method for recognizing human posture in images from conventional cameras for surveillance applications is presented in this paper. Recognition of human posture from a camera has been considered a cue for modelling human activity in automated surveillance systems. The aim of this study is to analyze the use of joint angles between key body points, together with machine learning algorithms, to classify human posture into three categories: Standing, Sitting, and Lying. Positions of key body points obtained from a deep convolutional neural network were used. The novelty of this approach is the use of existing conventional cameras without depth sensors, which overcomes the limitations of joint tracking with depth sensors such as the Kinect. The distance between two key body points, the hip and the knee, of persons in 2D images was also used for posture recognition. The results show that 2D information about the angles between certain joints can be used to recognize human posture. This approach achieved higher accuracy than the simple distance measurement between joints and is computationally efficient. It can be adopted using security cameras and computer hardware already in place.
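A minimal sketch of the two feature types mentioned above, joint angles and the hip-knee distance computed from 2D keypoints, is given below. The torso-tilt feature, the keypoint layout, the toy coordinates, and the small decision tree are illustrative assumptions, not the paper's exact features or classifier.

```python
# Minimal sketch: 2-D joint angles and hip-knee distance for Standing/Sitting/Lying.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def angle_between(v1, v2):
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def features(kp):
    """kp: dict of 2-D keypoints in pixel coordinates (y grows downwards)."""
    torso_tilt = angle_between(kp["shoulder"] - kp["hip"], np.array([0.0, -1.0]))
    hip_angle = angle_between(kp["shoulder"] - kp["hip"], kp["knee"] - kp["hip"])
    hip_knee_dist = np.linalg.norm(kp["hip"] - kp["knee"])
    return np.array([torso_tilt, hip_angle, hip_knee_dist])

# Toy labelled examples (0 = Standing, 1 = Sitting, 2 = Lying).
samples = [
    ({"shoulder": np.array([100., 50.]),  "hip": np.array([100., 150.]), "knee": np.array([100., 250.])}, 0),
    ({"shoulder": np.array([100., 80.]),  "hip": np.array([100., 160.]), "knee": np.array([180., 165.])}, 1),
    ({"shoulder": np.array([60., 200.]),  "hip": np.array([160., 205.]), "knee": np.array([260., 210.])}, 2),
]
X = np.stack([features(kp) for kp, _ in samples])
y = np.array([label for _, label in samples])

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
test = {"shoulder": np.array([90., 55.]), "hip": np.array([92., 150.]), "knee": np.array([95., 245.])}
print(clf.predict(features(test).reshape(1, -1)))   # expected: [0] (Standing)
```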

Probabilistic Posture Classification for Human-Behavior Analysis

IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, 2005

Computer vision and ubiquitous multimedia access nowadays make feasible the development of a largely automated system for human-behavior analysis. In this context, our proposal is to analyze human behaviors by classifying the posture of the monitored person and, consequently, detecting corresponding events and alarm situations, such as a fall. To this aim, our approach can be divided into two phases: for each frame, the projection histograms of each person are computed and compared with the probabilistic projection maps stored for each posture during the training phase; the obtained posture is then further validated by exploiting the information extracted by a tracking module, in order to take into account the reliability of the classification of the first phase. Moreover, the tracking algorithm is used to handle occlusions, making the system particularly robust even in indoor environments. Extensive experimental results demonstrate a promising average accuracy of more than 95% in correctly classifying human postures, even under challenging conditions.
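A rough sketch of the projection-histogram idea follows: the binary silhouette of the tracked person is reduced to its horizontal and vertical projections, which are compared with per-posture projection maps learned from training silhouettes. The normalization, the distance measure, and the map format here are simplifying assumptions, not the paper's exact formulation.

```python
# Hedged sketch: projection histograms of a silhouette matched against stored maps.
import numpy as np

H, W = 64, 48                       # assumed normalised silhouette size

def projections(silhouette):
    """Column and row sums of a binary (H, W) silhouette, L1-normalised."""
    theta = silhouette.sum(axis=0).astype(float)    # vertical projection
    pi = silhouette.sum(axis=1).astype(float)       # horizontal projection
    return theta / (theta.sum() + 1e-9), pi / (pi.sum() + 1e-9)

def train_maps(silhouettes_by_posture):
    """Average the projections of the training silhouettes of each posture."""
    maps = {}
    for posture, sils in silhouettes_by_posture.items():
        th, pi = zip(*(projections(s) for s in sils))
        maps[posture] = (np.mean(th, axis=0), np.mean(pi, axis=0))
    return maps

def classify(silhouette, maps):
    """Pick the posture whose stored projections are closest (L1 distance)."""
    th, pi = projections(silhouette)
    return min(maps, key=lambda p: np.abs(th - maps[p][0]).sum()
                                   + np.abs(pi - maps[p][1]).sum())

# Toy silhouettes: a tall blob for "standing", a wide blob for "lying".
standing = np.zeros((H, W), int); standing[8:60, 18:30] = 1
lying    = np.zeros((H, W), int); lying[40:52, 4:44]   = 1
maps = train_maps({"standing": [standing], "lying": [lying]})
print(classify(standing, maps))     # -> "standing"
```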

Human Activity Recognition Using Pose Estimation and Machine Learning Algorithm

2021

Human Activity Recognition has become a popular field of research over the last two decades. Understanding human behavior in images provides useful information for a large number of computer vision problems and has many applications, such as scene recognition and pose estimation. Various methods exist for activity recognition, each with its own advantages and disadvantages. Despite a large body of research, recognizing activity is still a complex and challenging task. In this work, we propose an approach for human activity recognition and classification using a person's pose skeleton in images. The work is divided into two parts: single-person pose estimation and activity classification using the estimated pose. Pose estimation consists of recognizing the locations of 18 body key points and joints. We have used the OpenPose library for pose estimation, and the activity classification task is performed using multiple logistic regression. We have also shown a comparison...
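As an illustrative sketch of the second stage, the snippet below classifies an activity from 18 body key points with a multinomial (softmax) logistic regression. The key points here are synthetic placeholders; in practice they would come from the OpenPose library and would typically be normalized, e.g. relative to a reference joint.

```python
# Sketch: flattened 18-keypoint vectors classified with logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
N_KEYPOINTS, N_ACTIVITIES = 18, 5       # 18 OpenPose key points; 5 activities assumed

# Each sample: 18 (x, y) key points flattened into a 36-dimensional vector.
X = rng.normal(size=(600, N_KEYPOINTS * 2))
y = rng.integers(0, N_ACTIVITIES, size=600)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=3)

# With the default lbfgs solver this is a multinomial (softmax) model.
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```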

A machine learning approach for human posture detection in domotics applications

Proceedings of the 12th International Conference on Image Analysis and Processing, 2003

This paper describes an approach to human posture classification that has been devised for indoor surveillance in domotic applications. The approach was initially inspired by a previous work of Haritaoglu et al. [2] that uses histogram projections to classify people's postures. We modify and improve the generality of the approach by adding a machine learning phase in order to generate probability maps. A statistical classifier is then defined that compares the probability maps with the histogram profiles extracted for each moving person. The approach proves to be very robust when the initial constraints are satisfied and exhibits a very low computational time, so that it can be used to process live video on standard platforms.
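The learning phase mentioned above can be pictured with the following hedged sketch, in which per-posture probability maps are estimated as per-pixel foreground frequencies over aligned training silhouettes and a new silhouette is assigned by maximum log-likelihood. The alignment, smoothing, and decision rule are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch: per-posture probability maps learned from aligned silhouettes.
import numpy as np

def learn_probability_map(silhouettes, eps=1e-3):
    """silhouettes: list of (H, W) binary arrays, already centred/normalised."""
    p = np.mean(np.stack(silhouettes).astype(float), axis=0)
    return np.clip(p, eps, 1.0 - eps)              # avoid log(0)

def log_likelihood(silhouette, p_map):
    s = silhouette.astype(float)
    return float(np.sum(s * np.log(p_map) + (1.0 - s) * np.log(1.0 - p_map)))

def classify(silhouette, maps):
    return max(maps, key=lambda posture: log_likelihood(silhouette, maps[posture]))

# Toy example with two postures on a 32x24 grid.
H, W = 32, 24
standing = [np.zeros((H, W), int) for _ in range(3)]
for s in standing: s[4:30, 9:15] = 1
crouching = [np.zeros((H, W), int) for _ in range(3)]
for s in crouching: s[18:30, 6:18] = 1

maps = {"standing": learn_probability_map(standing),
        "crouching": learn_probability_map(crouching)}
print(classify(standing[0], maps))                 # -> "standing"
```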

Inference of Human Postures by Classification of 3D Human Body Shape

Analysis and Modeling of Faces and Gestures, 2003

In this paper we describe an approach for inferring the body posture using a 3D visual-hull constructed from a set of silhouettes. We introduce an appearance-based, view-independent, 3D shape description for classifying and identifying human posture using a support vector machine. The proposed global shape description is invariant to rotation, scale and translation and varies continuously with 3D shape variations.
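The following is a loose sketch of the general idea of a rotation-, scale-, and translation-invariant 3D shape description fed to a support vector machine. The descriptor used here, a normalized histogram of voxel distances from the centroid, is a stand-in chosen for its invariance properties and is not the descriptor proposed in the paper; the toy "visual hulls" are synthetic point sets.

```python
# Loose sketch: invariant 3-D shape descriptor classified with an SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

def shape_descriptor(voxels, bins=16):
    """voxels: (N, 3) array of occupied visual-hull voxel centres."""
    centred = voxels - voxels.mean(axis=0)          # translation invariance
    d = np.linalg.norm(centred, axis=1)
    d = d / (d.max() + 1e-9)                        # scale invariance
    hist, _ = np.histogram(d, bins=bins, range=(0, 1), density=True)
    return hist                                     # rotation invariant by construction

def ball_hull(n=500):                               # compact, ball-like toy shape
    p = rng.normal(size=(n, 3))
    r = rng.uniform(0, 1, size=(n, 1)) ** (1.0 / 3.0)
    return p / np.linalg.norm(p, axis=1, keepdims=True) * r

def rod_hull(n=500):                                # elongated, rod-like toy shape
    return np.column_stack([rng.uniform(-1, 1, n),
                            rng.uniform(-0.1, 0.1, n),
                            rng.uniform(-0.1, 0.1, n)])

X = np.stack([shape_descriptor(h()) for h in [ball_hull] * 20 + [rod_hull] * 20])
y = np.array([0] * 20 + [1] * 20)                   # two toy "posture" classes
clf = SVC(kernel="rbf").fit(X, y)

X_test = np.stack([shape_descriptor(h()) for h in [ball_hull] * 5 + [rod_hull] * 5])
y_test = np.array([0] * 5 + [1] * 5)
print("test accuracy:", clf.score(X_test, y_test))
```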

Classification of posture and activities by using decision trees

Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2012

Obesity prevention and treatment, as well as healthy lifestyle recommendation, require the estimation of everyday physical activity. Monitoring posture allocations and activities with sensor systems is an effective method to achieve this goal. However, at present, most available devices rely on multiple sensors distributed on the body, which might be too obtrusive for everyday use. In this study, data were collected from a wearable shoe sensor system (SmartShoe) and a decision tree algorithm was applied for classification with high computational accuracy. The dataset was collected from 9 individual subjects performing 6 different activities: sitting, standing, walking, cycling, and stairs ascent/descent. Statistical features were calculated, classification was performed with a decision tree classifier, and an advanced boosting algorithm was then applied. The computational accuracy is as high as 98.85% without boosting, and 98.90% after boosting. Additionally, the simple tree str...
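A sketch of this classification step is given below: statistical features computed over windows of the sensor signals are fed to a decision tree, and a boosted ensemble is compared against it. The window features and the synthetic signals are assumptions (the SmartShoe data is not reproduced here), and AdaBoost is used only as a plausible stand-in for the boosting algorithm mentioned in the paper.

```python
# Sketch: windowed statistical features, decision tree vs. boosted ensemble.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

def window_features(window):
    """Mean, std, min, max and RMS per sensor channel for one window."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0),
                           window.min(axis=0), window.max(axis=0),
                           np.sqrt((window ** 2).mean(axis=0))])

# Synthetic stand-in: 6 activity classes, 5 sensor channels, windows of 50 samples.
windows = rng.normal(size=(1200, 50, 5))
X = np.stack([window_features(w) for w in windows])
y = rng.integers(0, 6, size=1200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=5)
tree = DecisionTreeClassifier(max_depth=8, random_state=5).fit(X_tr, y_tr)
boosted = AdaBoostClassifier(n_estimators=50, random_state=5).fit(X_tr, y_tr)
print("tree:", tree.score(X_te, y_te), "boosted:", boosted.score(X_te, y_te))
```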

A Comparison of Posture Recognition using Supervised and Unsupervised Learning Algorithms

… on Computer and …, 2010

Recognition of human posture is one step in the process of analyzing human behaviour. However, it is an ill-defined problem due to the high degree of freedom exhibited by the human body. In this paper, we study both supervised and unsupervised learning algorithms to recognise human posture in image sequences. In particular, we are interested in a specific set of postures that are representative of typical applications found in video analytics. The algorithms chosen for this paper are K-means, artificial neural networks, self-organizing maps, and particle swarm optimization. Experimental results have shown that the supervised learning algorithms outperform the unsupervised ones in terms of the number of correctly classified postures. Our future work will focus on detecting abnormal behaviour based on these recognised static postures.
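A small sketch of this kind of supervised versus unsupervised comparison is shown below: K-means clusters mapped to posture labels by majority vote against a small feed-forward network on the same features. The synthetic features stand in for posture descriptors, and the paper's self-organizing map and particle swarm optimization variants are not reproduced.

```python
# Sketch: unsupervised K-means (majority-vote labelling) vs. a supervised MLP.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)
N_POSTURES, N_FEATURES = 4, 10
centers = rng.normal(scale=3.0, size=(N_POSTURES, N_FEATURES))
y = rng.integers(0, N_POSTURES, size=800)
X = centers[y] + rng.normal(size=(800, N_FEATURES))     # synthetic posture features

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=6)

# Unsupervised: cluster, then name each cluster after its dominant training label.
km = KMeans(n_clusters=N_POSTURES, n_init=10, random_state=6).fit(X_tr)
cluster_to_label = {c: np.bincount(y_tr[km.labels_ == c], minlength=N_POSTURES).argmax()
                    for c in range(N_POSTURES)}
km_pred = np.array([cluster_to_label[c] for c in km.predict(X_te)])

# Supervised: a small feed-forward network trained on the labelled features.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=6).fit(X_tr, y_tr)

print("k-means:", accuracy_score(y_te, km_pred),
      "mlp:", accuracy_score(y_te, mlp.predict(X_te)))
```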

Human Activity Recognition Using The Human Skeleton Provided by Kinect

Iraqi Journal for Electrical and Electronic Engineering, 2021

In this paper, a new method is proposed for people tracking using the human skeleton provided by the Kinect sensor. Our method is based on skeleton data, which include the coordinate values of each joint in the human body. For data classification, Support Vector Machine (SVM) and Random Forest techniques are used. To achieve this goal, 14 movement classes are defined; the Kinect sensor is used to extract data containing 46 features, which are then used to train the classification models. The system was tested on 12 subjects, each of whom performed the 14 movements in each experiment. Experimental results show that the best average accuracy is 90.2% for the SVM model and 99% for the Random Forest model. From the experiments, we concluded that the best distance between the Kinect sensor and the human body is one meter.
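As an illustration of the classification stage, the sketch below trains and compares an SVM and a Random Forest on 46-dimensional feature vectors over 14 classes. The features are random placeholders for the Kinect joint data described in the paper.

```python
# Sketch: SVM vs. Random Forest on 46-feature skeleton vectors, 14 classes.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
N_FEATURES, N_CLASSES = 46, 14

X = rng.normal(size=(1400, N_FEATURES))      # placeholder for Kinect joint features
y = rng.integers(0, N_CLASSES, size=1400)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)
svm = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
forest = RandomForestClassifier(n_estimators=300, random_state=7).fit(X_tr, y_tr)
print("SVM:", svm.score(X_te, y_te), "Random Forest:", forest.score(X_te, y_te))
```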