Transforming Gait: Video-Based Spatiotemporal Gait Analysis
Related papers
Portable in-clinic video-based gait analysis: validation study on prosthetic users
Despite the common focus on gait in rehabilitation, there are few tools for quantitatively characterizing gait in the clinic. We recently described an algorithm, trained on a large dataset from our clinical gait analysis laboratory, that produces accurate cycle-by-cycle estimates of spatiotemporal gait parameters, including step timing and walking velocity. Here, we demonstrate that this system generalizes well to clinical care through a validation study on prosthetic users seen in therapy and in the outpatient clinic. Specifically, estimated walking velocity was similar to annotated 10-meter walking velocities, and cadence and foot contact times closely mirrored our wearable sensor measurements. Additionally, we found that a 2D keypoint detector pre-trained largely on able-bodied individuals struggles to localize prosthetic joints, particularly for individuals with more proximal or bilateral amputations, but that it is possible to train a prosthetic-specific joint detector. Further work i...
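The cycle-by-cycle spatiotemporal estimates this abstract describes (step timing, walking velocity) reduce to simple arithmetic once foot-contact events and step lengths are available. A minimal sketch, with function and parameter names that are illustrative rather than taken from the paper:

```python
def cadence_and_velocity(contact_times, step_lengths):
    """Cycle-by-cycle cadence (steps/min) and walking velocity (m/s)
    from successive foot-contact timestamps (s) and the step length (m)
    covered between each pair of contacts."""
    cadences, velocities = [], []
    for i in range(1, len(contact_times)):
        step_time = contact_times[i] - contact_times[i - 1]
        cadences.append(60.0 / step_time)           # steps per minute
        velocities.append(step_lengths[i - 1] / step_time)  # m/s
    return cadences, velocities
```

For example, contacts at 0.0 s, 0.5 s, and 1.0 s with 0.7 m steps give a cadence of 120 steps/min and a walking velocity of 1.4 m/s for each cycle.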
A Spatiotemporal Deep Learning Approach for Automatic Pathological Gait Classification
Sensors
Human motion analysis provides useful information for the diagnosis and recovery assessment of people suffering from pathologies, such as those affecting the way of walking, i.e., gait. With recent developments in deep learning, state-of-the-art performance can now be achieved using a single 2D-RGB-camera-based gait analysis system, offering an objective assessment of gait-related pathologies. Such systems provide a valuable complement/alternative to the current standard practice of subjective assessment. Most 2D-RGB-camera-based gait analysis approaches rely on compact gait representations, such as the gait energy image, which summarize the characteristics of a walking sequence into one single image. However, such compact representations do not fully capture the temporal information and dependencies between successive gait movements. This limitation is addressed by proposing a spatiotemporal deep learning approach that uses a selection of key frames to represent a gait cycle. Convo...
Human Pose Estimation-Based Real-Time Gait Analysis Using Convolutional Neural Network
IEEE Access, 2020
Gait analysis is widely used in clinical practice to help understand gait abnormalities and their association with underlying medical conditions for better diagnosis and prognosis. Several technologies are used for this purpose: computer-interfaced video cameras to measure patient motion, electrodes placed on the surface of the skin to assess muscle activity, force platforms embedded in a walkway to monitor the forces and torques produced between the ambulatory patient and the ground, Inertial Measurement Unit (IMU) sensors, and wearable devices. All of these technologies require an expert, typically a medical expert, to interpret the recorded data. With recent improvements in the field of Artificial Intelligence (AI), especially in deep learning, it is now possible to have this interpretation performed by a deep learning tool such as a Convolutional Neural Network (CNN). Therefore, this work presents an approach in which human pose estimation is combined with a CNN to classify between normal and abnormal human gait, with the ability to provide information about the detected abnormalities from an extracted skeletal image in real time. Index Terms: Convolutional neural network, deep learning, gait analysis, pose estimation.
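As a toy illustration of the skeletal features such a pose-estimation pipeline produces before classification, a joint angle (e.g. knee flexion) can be computed directly from three 2D keypoints. This helper is a hypothetical sketch, not the paper's CNN:

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (degrees) formed by 2D keypoints a-b-c,
    e.g. knee flexion from hip, knee, and ankle keypoints."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_angle = dot / (math.hypot(*v1) * math.hypot(*v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
```

A time series of such angles across frames is one simple way to feed a skeletal representation into a downstream classifier.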
Background: Spatiotemporal parameters can characterize the gait patterns of individuals, allowing assessment of their health status and detection of clinically meaningful changes in their gait. Video-based markerless motion capture is a user-friendly, inexpensive, and widely applicable technology that could reduce the barriers to measuring spatiotemporal gait parameters in clinical and more diverse settings. Research Question: Do spatiotemporal gait parameters measured using Theia3D markerless motion capture demonstrate concurrent validity with those measured using marker-based motion capture? Methods: 30 healthy adult participants performed treadmill walking at self-selected speeds while 2D video and marker-based motion capture data were collected simultaneously. Kinematic-based gait events were used to measure nine spatiotemporal gait parameters from both systems independently. The parameters were compared using their group means, Bland-Altman methods, Pearson correlation coefficien...
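The Bland-Altman comparison this validation relies on reduces to the bias (mean difference) and 95% limits of agreement of the paired differences between the two systems. A minimal sketch, not the study's own code:

```python
import statistics

def bland_altman(x, y):
    """Bias and 95% limits of agreement between two paired measurement
    series, e.g. markerless vs marker-based step length per stride.
    Returns (bias, lower_loa, upper_loa)."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

A small bias with narrow limits of agreement indicates the two systems can be used interchangeably for that parameter.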
Deep Learning for Monitoring of Human Gait: A Review
IEEE Sensors Journal, 2019
The essential human gait parameters are briefly reviewed, followed by a detailed review of the state of the art in deep learning for human gait analysis. The modalities for capturing gait data are grouped according to the sensing technology: video sequences, wearable sensors, and floor sensors, along with the publicly available datasets. The established artificial neural network architectures for deep learning are reviewed for each group, and their performance is compared, with particular emphasis on the spatiotemporal character of gait data and the motivation for multi-sensor, multi-modality fusion. It is shown that, by most of the essential metrics, deep learning convolutional neural networks typically outperform shallow learning models. In light of the discussed character of gait data, this is attributed to deep learning's ability to extract gait features automatically, as opposed to shallow learning from handcrafted gait features.
Foot2hip: A deep neural network model for predicting lower limb kinematics from foot measurements
2022
Objective: This study aims to develop a neural network (foot2hip) for long-term recording of gait kinematics with improved user comfort. Methods: Foot2hip predicts ankle, knee, and hip joint angle profiles in the sagittal plane using foot kinematics and kinetics during walking. Foot2hip consists of three convolution, two max-pooling, two LSTM and three dense layers. An indigenously developed insole and an outsole were used to measure the kinetics and kinematics of the foot, respectively. Seven healthy participants were recruited to follow an experimental protocol consisting of six walking conditions: slow, medium, fast walking speed, rearfoot, flatfoot, and forefoot landing pattern. Results: When tested for leave-one-out and nested cross-validation, foot2hip obtained 3.04° ±0.20 RMSE and 0.97±0.01 correlation coefficient for knee joint, 1.7°± 0.09 RMSE and 0.95±0.01 correlation coefficient for hip joint, and 1.32°±0.08 RMSE and 0.91±0.02 correlation coefficient for ankle joint (aver...
Gait Estimation and Analysis from Noisy Observations
EasyChair Preprints, 2018
People's walking style - gait - can be an indicator of their health, as it is affected by pain, illness, weakness, and aging. Gait analysis aims to detect gait variations. It is usually performed by an experienced observer with the help of cameras, sensors, or other devices. Frequent gait analysis, to observe changes over time, is costly and impractical. Here, we first discuss estimating gait movements from predicted 2D joint locations that represent selected body parts in videos. Then, we use a long short-term memory (LSTM) regression model to predict 3D (Vicon) data, which was recorded simultaneously with the videos as ground truth. Feet movements estimated from video correlate highly with the Vicon data, enabling gait analysis by measuring selected spatial gait parameters (step length, cadence, and walk base) from the estimated movements. Using inexpensive and reliable cameras to record, estimate and analyse a person's gait can be helpful; early detection of its changes f...
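Once 2D foot trajectories are lifted to 3D, the spatial parameters mentioned (step length, walk base) follow from heel positions at successive foot contacts. A hedged sketch under an assumed coordinate convention (x = direction of progression, z = mediolateral), not the paper's implementation:

```python
def step_parameters(left_heel, right_heel):
    """Step length and walk base (m) from 3D heel positions (x, y, z)
    at a left foot contact and the subsequent right foot contact.
    Assumes x is the direction of progression and z is mediolateral."""
    step_length = abs(right_heel[0] - left_heel[0])
    walk_base = abs(right_heel[2] - left_heel[2])
    return step_length, walk_base
```

For example, heels landing 0.65 m apart along the progression axis with a 0.10 m mediolateral offset give a step length of 0.65 m and a walk base of 0.10 m.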
Digital Biomarkers, 2022
Recent advancements in deep learning have produced significant progress in markerless human pose estimation, making it possible to estimate human kinematics from single camera videos without the need for reflective markers and specialized labs equipped with motion capture systems. Such algorithms have the potential to enable the quantification of clinical metrics from videos recorded with a handheld camera. Here we used DeepLabCut, an open-source framework for markerless pose estimation, to fine-tune a deep network to track 5 body keypoints (hip, knee, ankle, heel, and toe) in 82 below-waist videos of 8 patients with stroke performing overground walking during clinical assessments. We trained the pose estimation model by labeling the keypoints in 2 frames per video and then trained a convolutional neural network to estimate 5 clinically relevant gait parameters (cadence, double support time, swing time, stance time, and walking speed) from the trajectory of these keypoints. These re...
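Cadence, one of the five gait parameters estimated here, can also be read directly off the heel-keypoint trajectory by treating local minima of heel height as foot contacts. The peak picking below is deliberately naive and the names are illustrative; the paper instead trains a CNN on the keypoint trajectories:

```python
def detect_contacts(heel_height, min_gap=5):
    """Frame indices where heel height is a local minimum,
    used as a simple proxy for foot-contact events.
    min_gap suppresses spurious minima closer than min_gap frames."""
    events = []
    for i in range(1, len(heel_height) - 1):
        if heel_height[i] <= heel_height[i - 1] and heel_height[i] < heel_height[i + 1]:
            if not events or i - events[-1] >= min_gap:
                events.append(i)
    return events

def cadence(events, fps):
    """Steps per minute from contact frame indices at a given frame rate."""
    if len(events) < 2:
        return 0.0
    mean_step_time = (events[-1] - events[0]) / (len(events) - 1) / fps
    return 60.0 / mean_step_time
```

In practice the heel trajectory from a handheld camera is noisy, which is one motivation for learning the parameters with a network rather than hand-tuned event detection.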