A computational model of the infant pain impressions with Gaussian and Nearest Mean Classifier
Related papers
2017
Infants are unable to communicate pain verbally; they cry to express it. In this paper we describe the most effective features for infant facial pain classification. The image dataset was labeled by medical doctors and nurses based on cortisol hormone differences and the FLACC (Face, Legs, Activity, Cry, Consolability) measurement. We evaluate a number of features based on Action Units (AU) for infant facial pain classification and find that the best features are a combination of geometrical and textural features. We trained our own Active Shape Model (ASM) and extracted the geometrical features from the landmark points found by our ASM. The textural features are extracted using Local Binary Patterns (LBP) from multiple facial patches. We also experiment with two-stage pain classification preceded by a cry detection system, and conclude that this scenario, combined with geometrical and textural features, produces a very high F1 score for infant facial pain classification.
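The textural side of this pipeline can be sketched as follows: a basic Local Binary Pattern pass over one grayscale patch, producing a 256-bin histogram of the kind that would be concatenated across facial patches. The 3x3 neighborhood ordering and the tiny example patch are illustrative assumptions, not taken from the paper.

```python
def lbp_code(img, y, x):
    """8-bit LBP code for the pixel at (y, x): each neighbor that is
    >= the center contributes one bit, clockwise from the top-left."""
    c = img[y][x]
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offs):
        if img[y + dy][x + dx] >= c:
            code |= 1 << bit
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the patch interior;
    one such vector per facial patch would serve as a textural feature."""
    h = [0] * 256
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            h[lbp_code(img, y, x)] += 1
    return h

patch = [[10, 20, 30],
         [40, 50, 60],
         [70, 80, 90]]
print(lbp_code(patch, 1, 1))  # → 120
```

In practice the per-patch histograms are concatenated (often with "uniform" pattern binning to shrink the 256 bins), and the geometrical ASM distances are appended to form the combined feature vector.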
Applied Computing and Informatics, 2019
Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex, multifactorial, and geared toward research. The goals of this work are twofold: 1) to develop a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification Of Pain Expressions videos), and 2) to present a classification system that sets a challenging comparison performance on this dataset. The iCOPEvid dataset contains 234 videos of 49 neonates experiencing a set of noxious stimuli, a period of rest, and an acute pain stimulus. From these videos, 20-second segments are extracted and grouped into two classes: pain (49) and nopain (185), with the nopain video segments handpicked to produce a highly challenging dataset. An ensemble of twelve global and local descriptors with a Bag-of-Features approach is utilized to improve the performance of some new descriptors based on Gaussian of ...
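The Bag-of-Features step mentioned above can be sketched minimally: each local descriptor is assigned to its nearest codebook centroid, and a video segment is represented by the normalized histogram of those assignments. The toy codebook and descriptors below are illustrative stand-ins, not values from iCOPEvid.

```python
def nearest_centroid(desc, codebook):
    """Index of the codebook centroid closest (squared Euclidean) to desc."""
    best, best_d = 0, float("inf")
    for i, c in enumerate(codebook):
        d = sum((a - b) ** 2 for a, b in zip(desc, c))
        if d < best_d:
            best, best_d = i, d
    return best

def bag_of_features(descriptors, codebook):
    """Normalized histogram of centroid assignments: the segment's
    fixed-length representation regardless of how many descriptors it has."""
    hist = [0.0] * len(codebook)
    for d in descriptors:
        hist[nearest_centroid(d, codebook)] += 1.0
    total = sum(hist)
    return [h / total for h in hist]

codebook = [[0.0, 0.0], [1.0, 1.0]]
descs = [[0.1, 0.2], [0.9, 1.1], [1.0, 0.8], [0.0, 0.1]]
print(bag_of_features(descs, codebook))  # → [0.5, 0.5]
```

The fixed-length histogram is what lets descriptors of differing counts per segment feed a standard classifier; an ensemble then combines such histograms built from each of the twelve descriptor types.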
Pain detection through shape and appearance features
2013 IEEE International Conference on Multimedia and Expo (ICME), 2013
In this paper we propose a novel computer vision system that can recognize expressions of pain in videos by analyzing facial features. Pain is usually reported and recorded manually and thus carries a great deal of subjectivity. Manual monitoring of pain makes it difficult for medical practitioners to respond quickly in critical situations, so it is desirable to design a system that can automate this task. With our proposed model, pain monitoring can be done in real time without any human intervention. We propose to extract shape information using the pyramid histogram of orientation gradients (PHOG) and appearance information using the pyramid local binary pattern (PLBP) in order to obtain a discriminative representation of the face. We tested our proposed model on the UNBC-McMaster Shoulder Pain Expression Archive Database and recorded results that exceed the state of the art.
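The PHOG shape descriptor can be sketched as gradient-orientation histograms computed over a spatial pyramid and concatenated. The two-level pyramid, four orientation bins, and toy image below are illustrative assumptions, not the paper's configuration.

```python
import math

def orientation_histogram(img, bins=4):
    """Magnitude-weighted histogram of unsigned gradient orientations
    over the image interior, using simple central differences."""
    h = [0.0] * bins
    for y in range(1, len(img) - 1):
        for x in range(1, len(img[0]) - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            angle = math.atan2(gy, gx) % math.pi  # fold to [0, pi)
            b = min(int(angle / math.pi * bins), bins - 1)
            h[b] += math.hypot(gx, gy)
    return h

def phog(img, levels=2, bins=4):
    """Concatenate histograms over a pyramid: level 0 is the whole image,
    level 1 splits it into a 2x2 grid of cells, and so on."""
    feat = []
    n = len(img)
    for lv in range(levels):
        cells = 2 ** lv
        step = n // cells
        for cy in range(cells):
            for cx in range(cells):
                cell = [row[cx * step:(cx + 1) * step]
                        for row in img[cy * step:(cy + 1) * step]]
                feat += orientation_histogram(cell, bins)
    return feat

img = [[(x + y) % 7 for x in range(8)] for y in range(8)]
f = phog(img)
print(len(f))  # → 20 (4 bins * (1 whole-image + 4 level-1 cells))
```

PLBP follows the same pyramid idea with per-cell LBP histograms in place of orientation histograms, which is why the two descriptors concatenate naturally into one shape-plus-appearance representation.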
Neonatal Facial Pain Detection Using NNSOA and LSVM
2008
We report classification experiments using the pilot Infant COPE database of neonatal facial expressions. Two sets of DCT coefficients were used to train a neural network simultaneous optimization algorithm (NNSOA) and a linear support vector machine (LSVM) to classify neonatal expressions into the two categories of pain and nonpain. In the first set (VAR), only the 80 coefficients with the highest variance were included. In the second set (SFFS), 15 DCT coefficients were selected by applying Sequential Forward Floating Selection (SFFS) [1]. We found that NNSOA+VAR produced the best classification score of 95.38% accuracy, but with no statistically significant difference compared with the other DCT sets. However, NNSOA using the DCT coefficients outperformed, with statistical significance, previous experiments reported in [2] that used PCA components. It is surmised that NNSOA, an algorithm that eliminates unnecessary weights, is more stable than LSVM and may be better than SFFS at identifying relevant features.
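The VAR feature set described above can be sketched in two steps: take the 2-D DCT of each face image, then keep the coefficients with the highest variance across the dataset. The naive DCT-II below and the 4x4 blocks with k=1 are illustrative assumptions, not the paper's 80-coefficient setup.

```python
import math

def dct2(block):
    """Naive orthonormal 2-D DCT-II of a square block (O(n^4); a sketch,
    not an efficient implementation)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for y in range(n):
                for x in range(n):
                    s += (block[y][x]
                          * math.cos(math.pi * (2 * y + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * x + 1) * v / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

def top_variance_indices(vectors, k):
    """Indices of the k flattened coefficients with highest variance
    across the dataset (the VAR-style selection)."""
    dim = len(vectors[0])
    means = [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]
    var = [sum((v[i] - means[i]) ** 2 for v in vectors) / len(vectors)
           for i in range(dim)]
    return sorted(range(dim), key=lambda i: -var[i])[:k]

d = dct2([[1.0] * 4] * 4)
print(round(d[0][0], 6))  # DC term of a flat 4x4 block → 4.0
feats = [[0.0, 5.0], [2.0, 5.0]]
print(top_variance_indices(feats, 1))  # → [0]
```

Variance-based selection keeps coefficients that differ most across faces regardless of the pain/nonpain labels, whereas SFFS searches for a small label-aware subset; the abstract's finding is that the simpler variance criterion was competitive here.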
Pain Level Detection From Facial Image Captured by Smartphone
Journal of Information Processing, 2016
Accurate, regular reporting of cancer patients' symptoms is of great concern to medical service providers for clinical decision making, such as adjusting medication. Since patients have limited ability to provide self-reported symptoms, we have investigated how a mobile phone application can play a vital role in helping these patients. We used facial images captured by smartphone to detect pain levels accurately. In this pain detection process, existing algorithms and infrastructure are used to keep the system low-cost and user-friendly for cancer patients. As far as we have found, this pain management solution is the first mobile-based study of its kind. The proposed algorithm classifies faces, each represented as a weighted combination of Eigenfaces; angular distance and support vector machines (SVMs) are used for classification. In this study, longitudinal data were collected for six months in Bangladesh. In addition, cross-sectional pain images were collected from three countries: Bangladesh, Nepal, and the United States. We found that a personalized model performs better for automatic pain assessment, and that the training set should contain varying levels of pain in each group: low, medium, and high.
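The representation and matching steps described above can be sketched as follows: a face is reduced to its weights on a set of eigenfaces, and faces are compared by angular (cosine) distance between weight vectors. The two "eigenfaces" and the sample images below are toy stand-ins for a learned PCA basis, not values from the study.

```python
import math

def project(face, mean_face, eigenfaces):
    """Eigenface weights of a flattened face image: dot products of the
    mean-centered face with each (assumed orthonormal) eigenface."""
    centered = [p - m for p, m in zip(face, mean_face)]
    return [sum(c * e for c, e in zip(centered, ef)) for ef in eigenfaces]

def angular_distance(a, b):
    """Angle between two weight vectors; 0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

mean_face = [2.0, 2.0, 2.0, 2.0]
eigenfaces = [[0.5, 0.5, -0.5, -0.5],   # toy orthonormal basis
              [0.5, -0.5, 0.5, -0.5]]
probe = project([4.0, 4.0, 0.0, 0.0], mean_face, eigenfaces)
ref_pain = project([3.0, 3.0, 1.0, 1.0], mean_face, eigenfaces)
print(angular_distance(probe, ref_pain))  # → 0.0 (same direction)
```

In a full pipeline the weight vectors would either be matched to labeled references by smallest angular distance or fed to an SVM; the personalization finding suggests the references (or the SVM) are fitted per patient.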
Machine-based Multimodal Pain Assessment Tool for Infants: A Review
arXiv (Cornell University), 2016
Bedside caregivers assess infants' pain at regular intervals by observing specific behavioral and physiological signs of pain. This standard has two main limitations. The first is the intermittent assessment of pain, which can lead to missed pain when infants are left unattended. The second is inconsistency, since the assessment depends on the observer's subjective judgment and differs between observers. Intermittent and inconsistent assessment can lead to poor treatment and, therefore, cause serious long-term consequences. To mitigate these limitations, the current standard can be augmented by an automated system that monitors infants continuously and provides quantitative, consistent assessment of pain. Several automated methods have been introduced to assess infants' pain automatically based on analysis of behavioral or physiological pain indicators. This paper comprehensively reviews the automated approaches (i.e., approaches to feature extraction) for analyzing infants' pain and the current efforts in automatic pain recognition. In addition, it reviews the databases available to the research community and discusses the current limitations of automated pain assessment. Index Terms: neonatal pain assessment, automated pain recognition, pain databases, facial expression, crying sound, physiological indicators.