Asanka Perera - Academia.edu

Papers by Asanka Perera

Legal framework for plant breeder's rights in Sri Lanka: a review

European Intellectual Property Review, 2017

Non-contact automatic vital signs monitoring of neonates in NICU using video camera imaging

Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization

Characteristics of optical flow from aerial thermal imaging, “thermal flow”

Journal of Field Robotics, 2022

UAV-GESTURE: A Dataset for UAV Control and Gesture Recognition

Current UAV-recorded datasets are mostly limited to action recognition and object tracking, whereas gesture signal datasets have mostly been recorded in indoor spaces. Currently, there is no outdoor-recorded public video dataset for UAV commanding signals. Gesture signals can be used effectively with UAVs by leveraging the UAV's visual sensors and operational simplicity. To fill this gap and enable research in wider application areas, we present a UAV gesture signals dataset recorded in an outdoor setting. We selected 13 gestures suitable for basic UAV navigation and command from general aircraft handling and helicopter handling signals. We provide 119 high-definition video clips consisting of 37,151 frames. The overall baseline gesture recognition performance computed using a Pose-based Convolutional Neural Network (P-CNN) is 91.9%. All the frames are annotated with body joints and gesture classes in order to extend the dataset's applicability to a wider research area including gesture…

Drones—healthcare, humanitarian efforts and recreational use

Drone Law and Policy, 2021

Non-Contact Automatic Vital Signs Monitoring of Infants in a Neonatal Intensive Care Unit Based on Neural Networks

Journal of Imaging, 2021

Infants with fragile skin are patients who would benefit from non-contact vital sign monitoring due to the avoidance of potentially harmful adhesive electrodes and cables. Non-contact vital signs monitoring has been studied in clinical settings in recent decades; however, studies on infants in the Neonatal Intensive Care Unit (NICU) are still limited. Therefore, we conducted a single-center study to remotely monitor the heart rate (HR) and respiratory rate (RR) of seven infants in the NICU using a digital camera. The region of interest (ROI) was automatically selected using a convolutional neural network, and signal decomposition was used to minimize the noise artefacts. The experimental results were validated against reference data obtained from an ECG monitor. They showed a strong correlation, with Pearson correlation coefficients (PCC) of 0.9864 and 0.9453 for HR and RR, respectively, and a low error rate, with RMSE of 2.23 beats/min and 2.69 breaths/min between measured data…
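The agreement metrics reported above (Pearson correlation and RMSE against an ECG reference) can be computed with a few lines of NumPy. This is a minimal sketch, not the authors' code; the heart-rate values below are hypothetical illustrative numbers, not data from the study.

```python
import numpy as np

def pcc(a, b):
    """Pearson correlation coefficient between two equal-length series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.corrcoef(a, b)[0, 1])

def rmse(a, b):
    """Root-mean-square error between measured and reference values."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Hypothetical per-epoch heart-rate estimates (beats/min) vs. an ECG reference.
camera_hr = [122, 125, 130, 128, 126, 131]
ecg_hr    = [120, 126, 129, 129, 125, 133]

print(round(pcc(camera_hr, ecg_hr), 3))
print(round(rmse(camera_hr, ecg_hr), 3))
```

A PCC near 1 with an RMSE of a few beats/min is the kind of agreement the abstract describes.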

Cross-correlation-based robust object tracking in aerial videos

Noncontact Sensing of Contagion

Journal of Imaging, 2021

The World Health Organization (WHO) has declared COVID-19 a pandemic. We review and reduce the clinical literature on diagnosis of COVID-19 through symptoms that might be remotely detected, as of early May 2020. Vital signs associated with respiratory distress and fever, coughing, and visible infections have been reported. Fever screening by temperature monitoring is currently popular; however, improved noncontact detection is sought. Vital signs including heart rate and respiratory rate are affected by the condition. Cough, fatigue, and visible infections are also reported as common symptoms. Non-contact methods for measuring vital signs remotely have been shown to have acceptable accuracy, reliability, and practicality in some settings. Each has its pros and cons and may perform well in some challenges but be inadequate in others. Our review shows that visible-spectrum and thermal-spectrum cameras offer the best options for truly noncontact sensing of those studied t…

VisDrone-SOT2018: The Vision Meets Drone Single-Object Tracking Challenge Results

Lecture Notes in Computer Science, 2019

Single-object tracking, also known as visual tracking, on the drone platform has attracted much attention recently, with various applications in computer vision such as filming and surveillance. However, the lack of commonly accepted annotated datasets and a standard evaluation platform prevents the development of algorithms. To address this issue, the Vision Meets Drone Single-Object Tracking (VisDrone-SOT2018) Challenge workshop was organized in conjunction with the 15th European Conference on Computer Vision (ECCV 2018) to track and advance the technologies in this field. Specifically, we collect a dataset of 132 video sequences divided into three non-overlapping sets: training (86 sequences with 69,941 frames), validation (11 sequences with 7,046 frames), and testing (35 sequences with 29,367 frames). We provide fully annotated bounding boxes of the targets as well as several useful attributes, e.g., occlusion, background clutter, and camera motion. The tracking targets in these sequences include pedestrians, cars, buses, and animals. The dataset is extremely challenging due to various factors, such as occlusion, large scale variation, pose variation, and fast motion. We present the evaluation protocol of the VisDrone-SOT2018 challenge and the results of a comparison of 22 trackers on the benchmark dataset, which are publicly available on the challenge website: http://www.aiskyeye.com/. We hope this challenge largely boosts research and development in single-object tracking on drone platforms.

A Low Redundancy Wavelet Entropy Edge Detection Algorithm

Journal of Imaging, 2021

Fast edge detection of images can be useful for many real-world applications. Edge detection is not an end application but often the first step of a computer vision application; therefore, fast and simple edge detection techniques are important for efficient image processing. In this work, we propose a new edge detection algorithm using a combination of the wavelet transform, Shannon entropy and thresholding. The new algorithm is based on the concept that each wavelet decomposition level has an assumed level of structure that enables the use of Shannon entropy as a measure of global image structure. The proposed algorithm is developed mathematically and compared to five popular edge detection algorithms. The results show that our solution has low redundancy, is noise resilient, and is well suited to real-time image processing applications.
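To make the wavelet-plus-entropy idea concrete, here is a toy 1-D sketch, assuming a single-level Haar decomposition and a simple mean threshold on detail magnitude; this is an illustration of the general concept, not the paper's algorithm. Detail energy concentrates at intensity changes, so the Shannon entropy of the normalized detail-energy distribution is low when structure is concentrated (a sharp edge) and high when it is spread out (noise).

```python
import numpy as np

def haar_detail(row):
    """Single-level 1-D Haar wavelet detail coefficients of an even-length row."""
    row = np.asarray(row, float)
    return (row[0::2] - row[1::2]) / np.sqrt(2.0)

def shannon_entropy(coeffs):
    """Shannon entropy of the normalized energy distribution of coefficients."""
    e = np.asarray(coeffs, float) ** 2
    p = e / e.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# Hypothetical 1-D "image row" with a single step edge between samples 4 and 5.
row = [10, 10, 10, 10, 10, 200, 200, 200]
d = haar_detail(row)                  # detail energy concentrates at the step
edges = np.abs(d) > np.abs(d).mean()  # simple threshold on detail magnitude
print(edges)
```

Here all the detail energy sits in one coefficient, so the entropy of `d` is zero; a noisy row would spread energy across many coefficients and raise the entropy.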

The Sixth Visual Object Tracking VOT2018 Challenge Results

Lecture Notes in Computer Science, 2019

The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis, and a "real-time" experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking sub-challenge has been introduced to the set of standard VOT sub-challenges. The new sub-challenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled, and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking sub-challenges. Performance of the tested trackers typically by far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.

Remote measurement of cardiopulmonary signal using an unmanned aerial vehicle

IOP Conference Series: Materials Science and Engineering, 2018

This study proposes a computer vision-based system to reveal the cardiopulmonary signal of humans in both static and dynamic scenarios without the need for restraint or contact. The proposed system extracts the signal from the optical properties of skin colour variations in the facial region, using image sequence analysis of video captured by a hovering drone, while being convenient, safe and cost-effective. The experimental results showed very good agreement, strong correlation coefficients and acceptable error ratios in comparison to reference instruments (finger pulse oximeter and piezo respiratory belt). Therefore, the proposed system may be suitable for detecting human physiological parameters in war zones and natural disasters, where contact with the subject is difficult or impossible.
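The core of such camera-based pulse extraction is usually a spectral analysis of the skin-colour trace from the facial region. The following is a minimal sketch under that assumption (mean green-channel trace, FFT peak in the plausible cardiac band); the trace below is synthetic, and this is not the authors' pipeline.

```python
import numpy as np

def heart_rate_bpm(green_trace, fps, lo=0.7, hi=3.0):
    """Estimate pulse rate from a mean green-channel trace of a facial ROI.
    Picks the dominant FFT frequency in the cardiac band lo..hi Hz
    (0.7-3.0 Hz, i.e. 42-180 beats/min)."""
    x = np.asarray(green_trace, float)
    x = x - x.mean()                      # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    f_peak = freqs[band][np.argmax(power[band])]
    return 60.0 * f_peak

# Hypothetical synthetic trace: a 1.2 Hz pulse (72 beats/min) plus slow
# illumination drift, sampled at 30 frames/s for 10 s.
t = np.arange(0, 10, 1.0 / 30)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * t + 100.0
print(heart_rate_bpm(trace, fps=30))
```

Restricting the search to the cardiac band is what rejects the drift term, which would otherwise dominate the spectrum near 0 Hz.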

Detection and Localisation of Life Signs from the Air Using Image Registration and Spatio-Temporal Filtering

Remote Sensing, 2020

In search and rescue operations, it is crucial to rapidly distinguish those people who are alive from those who are not. If this information is known, emergency teams can prioritize their operations to save more lives. However, in some natural disasters people may be lying on the ground covered with dust, debris, or ashes, making them difficult to detect by video analysis that is tuned to human shapes. We present a novel method to estimate the locations of people from aerial video using image and signal processing designed to detect breathing movements. We have shown that this method can successfully detect clearly visible people and people who are fully occluded by debris. First, the aerial videos were stabilized using the key points of adjacent image frames. Next, the stabilized video was decomposed into tile videos, and the temporal frequency bands of interest were motion-magnified while the other frequencies were suppressed. Image differencing and temporal filtering were performe…
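The magnify-one-band, suppress-the-rest step can be sketched as an FFT mask over a tile's per-frame intensity trace. This is a simplified stand-in for the motion-magnification stage, assuming a hypothetical breathing band of 0.2-0.7 Hz and a synthetic trace; it is not the paper's implementation.

```python
import numpy as np

def temporal_bandpass(signal, fps, lo=0.2, hi=0.7, gain=10.0):
    """Magnify temporal frequencies in [lo, hi] Hz (an assumed breathing band)
    and suppress all others, via an FFT mask over a tile's intensity trace."""
    x = np.asarray(signal, float)
    spec = np.fft.rfft(x - x.mean())
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    mask = np.where((freqs >= lo) & (freqs <= hi), gain, 0.0)
    return np.fft.irfft(spec * mask, n=len(x)) + x.mean()

# Hypothetical tile trace: 0.3 Hz breathing motion buried in 5 Hz jitter,
# sampled at 30 frames/s for 20 s.
t = np.arange(0, 20, 1.0 / 30)
trace = 0.1 * np.sin(2 * np.pi * 0.3 * t) + 1.0 * np.sin(2 * np.pi * 5.0 * t)
out = temporal_bandpass(trace, fps=30)

# After filtering, the dominant frequency is the breathing component.
freqs = np.fft.rfftfreq(len(out), d=1.0 / 30)
print(freqs[np.argmax(np.abs(np.fft.rfft(out - out.mean())))])
```

In the full method this filtering would run per tile, so tiles whose dominant residual motion falls in the breathing band flag candidate survivor locations.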

Drone-Action: An Outdoor Recorded Drone Video Dataset for Action Recognition

Drones, 2019

Aerial human action recognition is an emerging topic in drone applications. Commercial drone platforms capable of detecting basic human actions such as hand gestures have been developed. However, a limited number of aerial video datasets are available to support increased research into aerial human action analysis. Most of the datasets are confined to indoor scenes or object tracking, and many outdoor datasets do not have sufficient human body detail to apply state-of-the-art machine learning techniques. To fill this gap and enable research in wider application areas, we present an action recognition dataset recorded in an outdoor setting. A free-flying drone was used to record 13 dynamic human actions. The dataset contains 240 high-definition video clips consisting of 66,919 frames. All of the videos were recorded from low altitude and at low speed to capture maximum human pose detail with relatively high resolution. This dataset should be useful to many research areas, includ…

Human Detection and Motion Analysis from a Quadrotor UAV

IOP Conference Series: Materials Science and Engineering, 2018

This work focuses on detecting humans and estimating their pose and trajectory from an unmanned aerial vehicle (UAV). In our framework, a human detection model is trained using a Region-based Convolutional Neural Network (R-CNN). Each video frame is corrected for perspective using a projective transformation. Using Histograms of Oriented Gradients (HOG) of the silhouettes as features, the detected human figures are then classified for their pose. A dynamic classifier is developed to estimate forward walking and turning gait sequences. The estimated poses are used to estimate the shape of the trajectory traversed by the human subject. An average precision of 98% has been achieved for the detector. Experiments conducted on aerial videos confirm that our solution can achieve accurate pose and trajectory estimation for different kinds of perspective-distorted videos. For example, for a video recorded 40 m above the ground, the perspective correction improves accuracy by 37.1% and 17.8% in pose and viewpoint estimation, respectively.
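The perspective-correction step amounts to applying a 3x3 homography in homogeneous coordinates. A minimal sketch of that mapping follows; the matrix `H` below is a hypothetical example (in practice it would be estimated from the camera geometry or reference points), and this is not the paper's code.

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an array of (x, y) points."""
    pts = np.asarray(pts, float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # lift to homogeneous coords
    mapped = homog @ H.T                               # each point becomes H @ [x, y, 1]
    return mapped[:, :2] / mapped[:, 2:3]              # divide out the scale

# Hypothetical homography: scaling by 2 plus a translation of (1, 0).
H = np.array([[2.0, 0.0, 1.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])
print(warp_points(H, [[0.0, 0.0], [3.0, 4.0]]))
```

A real perspective-correcting homography has non-zero entries in the bottom row, which is what makes the final divide-by-scale step non-trivial.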

Life Signs Detector Using a Drone in Disaster Zones

Remote Sensing, 2019

In the aftermath of a disaster, such as an earthquake, flood, or avalanche, ground search for survivors is usually hampered by unstable surfaces and difficult terrain. Drones now play an important role in these situations, allowing rescuers to locate survivors and allocate resources to saving those who can be helped. The aim of this study was to explore the utility of a drone equipped for human life detection with a novel computer vision system. The proposed system uses image sequences captured by a drone camera to remotely detect the cardiopulmonary motion caused by periodic chest movement of survivors. The results for eight human subjects and one mannequin in different poses show that motion detection on the body surface of the survivors is likely to be useful for detecting life signs without any physical contact. The results presented in this study may lead to a new approach to life detection and remote life-sensing assessment of survivors.

Human motion analysis from UAV video

International Journal of Intelligent Unmanned Systems, 2018

Purpose: The purpose of this paper is to present a preliminary solution to the problem of estimating human pose and trajectory by an aerial robot with a monocular camera in near real time.

Design/methodology/approach: The distinguishing feature of the solution is a dynamic classifier selection architecture. Each video frame is corrected for perspective using a projective transformation. Then, a silhouette is extracted as a Histogram of Oriented Gradients (HOG). The HOG is then classified using a dynamic classifier. A class is defined as a pose-viewpoint pair, and a total of 64 classes are defined to represent a forward walking and turning gait sequence. The dynamic classifier consists of a Support Vector Machine (SVM) classifier C64 that recognizes all 64 classes, and 64 SVM classifiers that recognize four classes each; these four classes are chosen based on the temporal relationship between them, dictated by the gait sequence.

Findings: The solution provides three main advantag…

Human Pose and Path Estimation from Aerial Video Using Dynamic Classifier Selection

Cognitive Computation, 2018

Background/introduction: We consider the problem of estimating human pose and trajectory by an aerial robot with a monocular camera in near real time. We present a preliminary solution whose distinguishing feature is a dynamic classifier selection architecture.

Methods: In our solution, each video frame is corrected for perspective using a projective transformation. Then, two alternative feature sets are used: (i) Histogram of Oriented Gradients (HOG) of the silhouette; (ii) Convolutional Neural Network (CNN) features of the RGB image. The features (HOG or CNN) are classified using a dynamic classifier. A class is defined as a pose-viewpoint pair, and a total of 64 classes are defined to represent a forward walking and turning gait sequence.

Results: Our solution provides three main advantages: (i) classification is efficient due to dynamic selection (4-class vs. 64-class classification); (ii) classification errors are confined to neighbors of the true viewpoints; (iii) the robust temporal relationship between poses is used to resolve the left-right ambiguities of human silhouettes.

Conclusions: Experiments conducted on both frontoparallel videos and aerial videos confirm our solution can achieve accurate pose and trajectory estimation for…
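The control flow of dynamic classifier selection can be sketched in a few lines: classify the first frame among all 64 pose-viewpoint classes, then on each later frame consult only the small set of classes the gait sequence allows to follow the previous one. The successor map and the per-class scores below are hypothetical toy stand-ins for the trained SVMs, used only to show the selection logic and how it confines errors to gait neighbours.

```python
def successors(cls, n_classes=64):
    """Classes the gait cycle allows after `cls` (hypothetical neighbourhood:
    stay put or advance up to three steps around the cycle)."""
    return [(cls + k) % n_classes for k in range(4)]

def classify(scores, candidates):
    """Pick the best-scoring class among the candidate subset."""
    return max(candidates, key=lambda c: scores[c])

# Hypothetical per-class scores for three consecutive frames.
frames = [
    {c: 1.0 if c == 10 else 0.0 for c in range(64)},  # frame 0: class 10 wins
    {c: 1.0 if c == 11 else 0.0 for c in range(64)},  # frame 1: class 11 wins
    # frame 2: an implausible outlier (40) outscores the plausible class (12)
    {c: 1.0 if c == 40 else 0.5 if c == 12 else 0.0 for c in range(64)},
]

current = classify(frames[0], range(64))   # initial full 64-class decision
path = [current]
for scores in frames[1:]:
    current = classify(scores, successors(current))  # 4-class decision only
    path.append(current)
print(path)
```

On the third frame the outlier class 40 is never considered, because it is not a gait successor of class 11; the selection architecture yields the plausible class 12 instead.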

Remote monitoring of cardiorespiratory signals from a hovering unmanned aerial vehicle

BioMedical Engineering OnLine, 2017

Background: Unmanned aerial vehicles (UAVs), or drones, particularly small UAVs capable of hovering, are a rapidly maturing technology with increasing numbers of innovative applications. The ability of a UAV to detect and measure the vital signs of humans can have many applications, including triage of disaster victims, detection of security threats, and deepening the context of human-machine interactions.

Research paper thumbnail of Legal framework for plant breeder's rights in Sri Lanka: a review

Legal framework for plant breeder's rights in Sri Lanka: a review

European Intellectual Property Review, 2017

Research paper thumbnail of Non-contact automatic vital signs monitoring of neonates in NICU using video camera imaging

Non-contact automatic vital signs monitoring of neonates in NICU using video camera imaging

Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization

Research paper thumbnail of Characteristics of optical flow from aerial thermal imaging, “thermal flow”

Characteristics of optical flow from aerial thermal imaging, “thermal flow”

Journal of Field Robotics, 2022

Research paper thumbnail of UAV-GESTURE: A Dataset for UAV Control and Gesture Recognition

Current UAV-recorded datasets are mostly limited to action recognition and object tracking, where... more Current UAV-recorded datasets are mostly limited to action recognition and object tracking, whereas the gesture signals datasets were mostly recorded in indoor spaces. Currently, there is no outdoor recorded public video dataset for UAV commanding signals. Gesture signals can be effectively used with UAVs by leveraging the UAVs visual sensors and operational simplicity. To fill this gap and enable research in wider application areas, we present a UAV gesture signals dataset recorded in an outdoor setting. We selected 13 gestures suitable for basic UAV navigation and command from general aircraft handling and helicopter handling signals. We provide 119 high-definition video clips consisting of 37151 frames. The overall baseline gesture recognition performance computed using Pose-based Convolutional Neural Network (P-CNN) is 91.9%. All the frames are annotated with body joints and gesture classes in order to extend the dataset’s applicability to a wider research area including gesture...

Research paper thumbnail of Drones—healthcare, humanitarian efforts and recreational use

Drones—healthcare, humanitarian efforts and recreational use

Drone Law and Policy, 2021

Research paper thumbnail of Drones—healthcare, humanitarian efforts and recreational use

Drones—healthcare, humanitarian efforts and recreational use

Drone Law and Policy, 2021

Research paper thumbnail of Non-Contact Automatic Vital Signs Monitoring of Infants in a Neonatal Intensive Care Unit Based on Neural Networks

Journal of Imaging, 2021

Infants with fragile skin are patients who would benefit from non-contact vital sign monitoring d... more Infants with fragile skin are patients who would benefit from non-contact vital sign monitoring due to the avoidance of potentially harmful adhesive electrodes and cables. Non-contact vital signs monitoring has been studied in clinical settings in recent decades. However, studies on infants in the Neonatal Intensive Care Unit (NICU) are still limited. Therefore, we conducted a single-center study to remotely monitor the heart rate (HR) and respiratory rate (RR) of seven infants in NICU using a digital camera. The region of interest (ROI) was automatically selected using a convolutional neural network and signal decomposition was used to minimize the noise artefacts. The experimental results have been validated with the reference data obtained from an ECG monitor. They showed a strong correlation using the Pearson correlation coefficients (PCC) of 0.9864 and 0.9453 for HR and RR, respectively, and a lower error rate with RMSE 2.23 beats/min and 2.69 breaths/min between measured data ...

Research paper thumbnail of Cross-correlation-based robust object tracking in aerial videos

Cross-correlation-based robust object tracking in aerial videos

Research paper thumbnail of Noncontact Sensing of Contagion

Journal of Imaging, 2021

The World Health Organization (WHO) has declared COVID-19 a pandemic. We review and reduce the cl... more The World Health Organization (WHO) has declared COVID-19 a pandemic. We review and reduce the clinical literature on diagnosis of COVID-19 through symptoms that might be remotely detected as of early May 2020. Vital signs associated with respiratory distress and fever, coughing, and visible infections have been reported. Fever screening by temperature monitoring is currently popular. However, improved noncontact detection is sought. Vital signs including heart rate and respiratory rate are affected by the condition. Cough, fatigue, and visible infections are also reported as common symptoms. There are non-contact methods for measuring vital signs remotely that have been shown to have acceptable accuracy, reliability, and practicality in some settings. Each has its pros and cons and may perform well in some challenges but be inadequate in others. Our review shows that visible spectrum and thermal spectrum cameras offer the best options for truly noncontact sensing of those studied t...

Research paper thumbnail of VisDrone-SOT2018: The Vision Meets Drone Single-Object Tracking Challenge Results

Lecture Notes in Computer Science, 2019

Single-object tracking, also known as visual tracking, on the drone platform attracts much attent... more Single-object tracking, also known as visual tracking, on the drone platform attracts much attention recently with various applications in computer vision, such as filming and surveillance. However, the lack of commonly accepted annotated datasets and standard evaluation platform prevent the developments of algorithms. To address this issue, the Vision Meets Drone Single-Object Tracking (VisDrone-SOT2018) Challenge workshop was organized in conjunction with the 15th European Conference on Computer Vision (ECCV 2018) to track and advance the technologies in such field. Specifically, we collect a dataset, including 132 video sequences divided into three non-overlapping sets, i.e., training (86 sequences with 69, 941 frames), validation (11 sequences with 7, 046 frames), and testing (35 sequences with 29, 367 frames) sets. We provide fully annotated bounding boxes of the targets as well as several useful attributes, e.g., occlusion, background clutter, and camera motion. The tracking targets in these sequences include pedestrians, cars, buses, and animals. The dataset is extremely challenging due to various factors, such as occlusion, large scale, pose variation, and fast motion. We present the evaluation protocol of the VisDrone-SOT2018 challenge and the results of a comparison of 22 trackers on the benchmark dataset, which are publicly available on the challenge website: http://www.aiskyeye.com/. We hope this challenge largely boosts the research and development in single object tracking on drone platforms.

Research paper thumbnail of A Low Redundancy Wavelet Entropy Edge Detection Algorithm

Journal of Imaging, 2021

Fast edge detection of images can be useful for many real-world applications. Edge detection is n... more Fast edge detection of images can be useful for many real-world applications. Edge detection is not an end application but often the first step of a computer vision application. Therefore, fast and simple edge detection techniques are important for efficient image processing. In this work, we propose a new edge detection algorithm using a combination of the wavelet transform, Shannon Entropy and thresholding. The new algorithm is based on the concept that each Wavelet decomposition level has an assumed level of structure that enables the use of Shannon entropy as a measure of global image structure. The proposed algorithm is developed mathematically and compared to five popular edge detection algorithms. The results show that our solution is low redundancy, noise resilient, and well suited to real-time image processing applications.

Research paper thumbnail of The Sixth Visual Object Tracking VOT2018 Challenge Results

Lecture Notes in Computer Science, 2019

The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity or... more The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in the recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a "real-time" experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking subchallenge has been introduced to the set of standard VOT sub-challenges. The new subchallenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both standard short-term and the new longterm tracking subchallenges. Performance of the tested trackers typically by far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website 60 .

Research paper thumbnail of Remote measurement of cardiopulmonary signal using an unmanned aerial vehicle

IOP Conference Series: Materials Science and Engineering, 2018

This study proposes a computer vision-based system to reveal the cardiopulmonary signal of humans... more This study proposes a computer vision-based system to reveal the cardiopulmonary signal of humans in both static and dynamic scenarios without the need for restricting or contact. The proposed system extracts the signal based on the optical properties of skin colour variations in the facial region using image sequence analysis captured by a hovering drone while being convenient, safe and also cost-effective. The experimental results showed very good agreement, strong correlation coefficients and acceptable error ratios in comparison to reference instruments (finger pulse oximeter and Piezo respiratory belt). Therefore, the proposed system in this paper may be suitable to be applied to detect human physiological parameters in war zones and natural disasters when the contact with the subject is difficult or impossible.

Research paper thumbnail of Detection and Localisation of Life Signs from the Air Using Image Registration and Spatio-Temporal Filtering

Remote Sensing, 2020

In search and rescue operations, it is crucial to rapidly identify those people who are alive fro... more In search and rescue operations, it is crucial to rapidly identify those people who are alive from those who are not. If this information is known, emergency teams can prioritize their operations to save more lives. However, in some natural disasters the people may be lying on the ground covered with dust, debris, or ashes making them difficult to detect by video analysis that is tuned to human shapes. We present a novel method to estimate the locations of people from aerial video using image and signal processing designed to detect breathing movements. We have shown that this method can successfully detect clearly visible people and people who are fully occluded by debris. First, the aerial videos were stabilized using the key points of adjacent image frames. Next, the stabilized video was decomposed into tile videos and the temporal frequency bands of interest were motion magnified while the other frequencies were suppressed. Image differencing and temporal filtering were performe...

Research paper thumbnail of Drone-Action: An Outdoor Recorded Drone Video Dataset for Action Recognition

Drones, 2019

Aerial human action recognition is an emerging topic in drone applications. Commercial drone plat... more Aerial human action recognition is an emerging topic in drone applications. Commercial drone platforms capable of detecting basic human actions such as hand gestures have been developed. However, a limited number of aerial video datasets are available to support increased research into aerial human action analysis. Most of the datasets are confined to indoor scenes or object tracking and many outdoor datasets do not have sufficient human body details to apply state-of-the-art machine learning techniques. To fill this gap and enable research in wider application areas, we present an action recognition dataset recorded in an outdoor setting. A free flying drone was used to record 13 dynamic human actions. The dataset contains 240 high-definition video clips consisting of 66,919 frames. All of the videos were recorded from low-altitude and at low speed to capture the maximum human pose details with relatively high resolution. This dataset should be useful to many research areas, includ...

Research paper thumbnail of Human Detection and Motion Analysis from a Quadrotor UAV

IOP Conference Series: Materials Science and Engineering, 2018

This work focuses on detecting humans and estimating their pose and trajectory from an umnanned a... more This work focuses on detecting humans and estimating their pose and trajectory from an umnanned aerial vehicle (UAV). In our framework, a human detection model is trained using a Region-based Convolutional Neural Network (R-CNN). Each video frame is corrected for perspective using projective transformation. Using Histogram Oriented Gradients (HOG) of the silhouettes as features, the detected human figures are then classified for their pose. A dynamic classifier is developed to estimate forward walking and a turning gait sequence. The estimated poses are used to estimate the shape of the trajectory traversed by the human subject. An average precision of 98% has been achieved for the detector. Experiments conducted on aerial videos confirm our solution can achieve accurate pose and trajectory estimation for different kinds of perspective-distorted videos. For example, for a video recorded at 40m above ground, the perspective correction improves accuracy by 37.1% and 17.8% in pose and viewpoint estimation respectively.

Research paper thumbnail of Life Signs Detector Using a Drone in Disaster Zones

Remote Sensing, 2019

In the aftermath of a disaster, such as an earthquake, flood, or avalanche, ground search for survivors is usually hampered by unstable surfaces and difficult terrain. Drones now play an important role in these situations, allowing rescuers to locate survivors and allocate resources to saving those who can be helped. The aim of this study was to explore the utility of a drone equipped for human life detection with a novel computer vision system. The proposed system uses image sequences captured by a drone camera to remotely detect the cardiopulmonary motion caused by the periodic chest movement of survivors. The results for eight human subjects and one mannequin in different poses show that motion detection on the body surface of the survivors is likely to be useful for detecting life signs without any physical contact. The results presented in this study may lead to a new approach to life detection and remote life-sensing assessment of survivors.
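The core idea above, detecting periodic cardiopulmonary motion in an image sequence, amounts to checking a body-region signal for a dominant frequency in the respiratory band. A minimal sketch, assuming a pre-extracted 1-D intensity trace from the chest region; the frame rate, duration, and band limits are illustrative, not the paper's:

```python
import numpy as np

def dominant_frequency(signal, fps):
    """Dominant frequency (Hz) of a 1-D intensity trace, mean removed."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    return freqs[np.argmax(spectrum)]

fps, secs = 30, 20
t = np.arange(fps * secs) / fps
breathing = np.sin(2 * np.pi * 0.25 * t)   # simulated trace: ~15 breaths/min
f = dominant_frequency(breathing, fps)
print(0.1 <= f <= 0.7)  # True: falls in a plausible respiratory band
```

A real pipeline would first stabilise the aerial video and isolate the torso region before extracting such a trace; this sketch only covers the periodicity check.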

Research paper thumbnail of Human motion analysis from UAV video


International Journal of Intelligent Unmanned Systems, 2018

Purpose: The purpose of this paper is to present a preliminary solution to the problem of estimating human pose and trajectory from an aerial robot with a monocular camera in near real time. Design/methodology/approach: The distinguishing feature of the solution is a dynamic classifier selection architecture. Each video frame is corrected for perspective using a projective transformation. Then, a silhouette is extracted as a Histogram of Oriented Gradients (HOG). The HOG is then classified using a dynamic classifier. A class is defined as a pose-viewpoint pair, and a total of 64 classes are defined to represent a forward-walking and turning gait sequence. The dynamic classifier consists of a Support Vector Machine (SVM) classifier C64 that recognizes all 64 classes, and 64 SVM classifiers that recognize four classes each; these four classes are chosen based on the temporal relationship between them, dictated by the gait sequence. Findings: The solution provides three main advantages…

Research paper thumbnail of Human Pose and Path Estimation from Aerial Video Using Dynamic Classifier Selection

Cognitive Computation, 2018

Background/introduction: We consider the problem of estimating human pose and trajectory by an aerial robot with a monocular camera in near real time. We present a preliminary solution whose distinguishing feature is a dynamic classifier selection architecture. Methods: In our solution, each video frame is corrected for perspective using a projective transformation. Then, two alternative feature sets are used: (i) Histogram of Oriented Gradients (HOG) features of the silhouette, and (ii) Convolutional Neural Network (CNN) features of the RGB image. The features (HOG or CNN) are classified using a dynamic classifier. A class is defined as a pose-viewpoint pair, and a total of 64 classes are defined to represent a forward-walking and turning gait sequence. Results: Our solution provides three main advantages: (i) classification is efficient due to dynamic selection (4-class vs. 64-class classification); (ii) classification errors are confined to neighbors of the true viewpoints; (iii) the robust temporal relationship between poses is used to resolve the left-right ambiguities of human silhouettes. Conclusions: Experiments conducted on both frontoparallel videos and aerial videos confirm that our solution can achieve accurate pose and trajectory estimation for…
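The dynamic-selection idea in this abstract, a full 64-class decision only initially and then a small classifier restricted to gait-sequence neighbours of the previous class, can be sketched in plain Python. The class count and neighbourhood width follow the abstract; the random scores are a stand-in for the SVM decision values:

```python
import random

N_CLASSES = 64  # pose-viewpoint pairs along the cyclic gait sequence

def neighbours(c, width=4):
    """The `width` classes centred on class c in the cyclic gait sequence."""
    offsets = range(-(width // 2), width - width // 2)
    return [(c + o) % N_CLASSES for o in offsets]

def classify(scores, allowed):
    """Pick the highest-scoring class from the allowed subset
    (stand-in for comparing per-class SVM decision values)."""
    return max(allowed, key=lambda c: scores[c])

random.seed(0)
scores = [random.random() for _ in range(N_CLASSES)]
first = classify(scores, range(N_CLASSES))   # initial full 64-class decision
scores = [random.random() for _ in range(N_CLASSES)]
nxt = classify(scores, neighbours(first))    # later frames: 4-class decision
print(nxt in neighbours(first))  # True
```

This restriction is what makes per-frame classification cheap (a 4-way comparison instead of 64-way) and what confines errors to neighbouring viewpoints, as the abstract notes.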

Research paper thumbnail of Remote monitoring of cardiorespiratory signals from a hovering unmanned aerial vehicle

BioMedical Engineering OnLine, 2017

Background: Unmanned aerial vehicles (UAVs), or drones, particularly small UAVs capable of hovering, are a rapidly maturing technology with a growing number of innovative applications. The ability of a UAV to detect and measure the vital signs of humans can have many applications, including triage of disaster victims, detection of security threats, and deepening the context of human-to-machine interactions.