Jeff Cohn | University of Pittsburgh
Papers by Jeff Cohn
Biological Psychiatry, May 1, 2018
Background: Recent work highlights an immune-based component of psychiatric disorder etiology, particularly schizophrenia. We evaluated two putative biomarkers of neuroinflammation, diffusion MRI free water (FW) and 1H-MRS glutathione (GSH), in a first-episode schizophrenia (SZ) sample. Furthermore, we developed a non-human primate (NHP) model of maternal immune activation (MIA) to test whether the maternal immune response contributes to parallel brain changes in developing NHP offspring. Methods: The human study consisted of thirty-six SZ participants and forty age/gender-matched controls (HC). The NHP study consisted of fourteen pregnant rhesus monkeys who received polyICLC and fourteen pregnant control monkeys. All subjects underwent parallel multi-shell diffusion MRI and 1H-MRS GSH-optimized MEGA-PRESS scans (Siemens 3T). Human data were collected within two years of psychosis onset and NHP data were collected longitudinally (6-month data currently presented). Results: SZ participants demonstrated significantly elevated FW in whole-brain gray matter (p<.05) with no difference in white matter (p=.06) versus HC. There was a significant negative correlation between DLPFC GSH and both gray and white matter FW in SZ (r=-.44 and -.37, respectively; both p<.05). While 6-month-old MIA-exposed rhesus offspring showed no whole-brain gray/white matter FW increase (both p>.09), frontal gray matter FW was elevated (p=.013). GSH levels did not differ in any comparison (all p>.2). Conclusions: These data provide compelling convergent evidence for the presence of neuroinflammatory processes in SZ, particularly given the inverse relationship between GSH and FW. Prefrontal gray matter FW increases in MIA-exposed NHP offspring complement the human schizophrenia literature and provide a more mechanistic understanding of neuroimmune involvement in psychiatric disorders.
2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2009
Recent psychological research suggests that facial movements are a reliable measure of pain. Automatic detection of facial movements associated with pain would contribute to patient care but is technically challenging. Facial movements may be subtle and accompanied by abrupt changes in head orientation. Active appearance models (AAM) have proven robust to naturally occurring facial behavior, yet AAM-based efforts to automatically detect action units (AUs) are few. Using image data from patients with rotator-cuff injuries, we describe an AAM-based automatic system that decouples shape and appearance to detect AUs on a frame-by-frame basis. Most current approaches to AU detection use only appearance features. We explored the relative efficacy of shape and appearance for AU detection. Consistent with the experience of human observers, we found specific relationships between action units and types of facial features. Several AUs (e.g., AU4, 12, and 43) were more discriminable by shape than by appearance, whereas the opposite pattern was found for others (e.g., AU6, 7, and 10). AU-specific feature sets may yield optimal results.
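A minimal sketch of the shape-versus-appearance comparison described above, assuming AAM shape and appearance feature matrices and per-frame AU labels have already been extracted (the AAM fitting itself is not shown); the linear-SVM choice and ROC-AUC scoring are illustrative, not the paper's exact protocol.

```python
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare_feature_types(shape_feats, appearance_feats, au_labels):
    """Compare AAM shape features against appearance features for detecting
    one action unit. Assumed inputs: shape_feats and appearance_feats are
    (n_frames, n_dims) arrays from an upstream AAM fit; au_labels is a binary
    vector marking frames where the AU is present."""
    results = {}
    for name, feats in (("shape", shape_feats), ("appearance", appearance_feats)):
        scores = cross_val_score(SVC(kernel="linear"), feats, au_labels,
                                 cv=5, scoring="roc_auc")
        results[name] = scores.mean()  # higher AUC = more discriminable by this feature type
    return results
```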
2008 IEEE Conference on Computer Vision and Pattern Recognition, 2008
In this paper, we present an approach we refer to as "least squares congealing" which provides a solution to the problem of aligning an ensemble of images in an unsupervised manner. Our approach circumvents many of the limitations of the canonical "congealing" algorithm. Specifically, we present an algorithm that: (i) estimates warp parameter updates simultaneously rather than sequentially, (ii) exhibits fast convergence, and (iii) requires no pre-defined step size. We present alignment results which show an improvement in performance for the removal of unwanted spatial variation when compared with the related work of Learned-Miller on two datasets, the MNIST handwritten digit database and the MultiPIE face database.
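A hedged, translation-only sketch of the simultaneous-update idea: each image is warped by its current shift, a Gauss-Newton increment toward the ensemble mean is computed for every image, and all shifts are updated together with no step-size parameter. The paper works with richer warp models and a pairwise objective; this simplification only illustrates the structure.

```python
import numpy as np
from scipy import ndimage

def ls_congeal_translations(images, n_iters=50):
    """Translation-only least-squares congealing sketch: every image's shift
    is updated within the same iteration (simultaneously, not sequentially)
    by a Gauss-Newton step toward the current ensemble mean, with no
    pre-defined step size. `images` is a list of equally sized 2-D arrays."""
    shifts = np.zeros((len(images), 2))
    for _ in range(n_iters):
        warped = np.stack([ndimage.shift(im, s, order=1, mode="nearest")
                           for im, s in zip(images, shifts)])
        mean_img = warped.mean(axis=0)
        updates = np.zeros_like(shifts)
        for i, w in enumerate(warped):
            gy, gx = np.gradient(w)                          # gradients of the warped image
            jac = np.stack([gy.ravel(), gx.ravel()], axis=1)
            resid = (w - mean_img).ravel()                   # residual to the ensemble mean
            updates[i] = np.linalg.lstsq(jac, resid, rcond=None)[0]
        shifts += updates                                    # all shifts move together
    return shifts
```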
2009 IEEE 12th International Conference on Computer Vision, 2009
In this paper we pursue the task of aligning an ensemble of images in an unsupervised manner. This task has been commonly referred to as "congealing" in the literature. A form of congealing, using a least-squares criterion, has recently been demonstrated to have desirable properties over conventional congealing. Least-squares congealing can be viewed as an extension of the Lucas & Kanade (LK) image alignment algorithm. It is well understood that the alignment performance for the LK algorithm, when aligning a single image with another, is theoretically and empirically equivalent for additive and compositional warps. In this paper we: (i) demonstrate that this equivalence does not hold for the extended case of congealing, (ii) characterize the inherent drawbacks associated with least-squares congealing when dealing with large numbers of images, and (iii) propose a novel method for circumventing these limitations through the application of an inverse-compositional strategy that maintains the attractive properties of the original method while being able to handle very large numbers of images.
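A companion sketch of the inverse-compositional strategy, again restricted to translation warps and offered as an assumption-laden illustration rather than the paper's implementation: the Jacobian comes from the gradient of the current mean image, so its pseudoinverse is computed once per iteration and reused across all images, and each increment is composed inversely (here, subtracted).

```python
import numpy as np
from scipy import ndimage

def ic_congeal_translations(images, n_iters=50):
    """Inverse-compositional variant of the translation-only sketch above:
    the Jacobian is built from the gradient of the current mean image, so its
    pseudoinverse is computed once per iteration and reused for every image,
    and each increment is composed inversely (subtracted from the shift)."""
    shifts = np.zeros((len(images), 2))
    for _ in range(n_iters):
        warped = np.stack([ndimage.shift(im, s, order=1, mode="nearest")
                           for im, s in zip(images, shifts)])
        mean_img = warped.mean(axis=0)
        gy, gx = np.gradient(mean_img)
        jac_pinv = np.linalg.pinv(np.stack([gy.ravel(), gx.ravel()], axis=1))
        for i, w in enumerate(warped):
            dp = jac_pinv @ (mean_img - w).ravel()   # increment solved against the mean
            shifts[i] -= dp                          # inverse composition of the increment
    return shifts
```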
Face and Gesture 2011, 2011
This paper presents a novel framework for recognition of facial action unit (AU) combinations by viewing the classification as a sparse representation problem. Based on this framework, we represent a facial image exhibiting a combination of AUs as a sparse linear combination of basis elements constituting an overcomplete dictionary. We build an overcomplete dictionary whose main elements are mean Gabor features of the AU combinations under examination. The other elements of the dictionary are randomly sampled from a distribution (e.g., a Gaussian distribution) that guarantees sparse signal recovery. Afterwards, by solving an L1-norm minimization, a facial image is represented as a sparse vector which is used to distinguish various AU patterns. Once the sparse representation is computed, classification reduces to finding the maximal coefficient: the index of the largest value in the sparse vector is taken as the class label of the facial image under test. Extensive experiments on the Cohn-Kanade facial expressions database demonstrate that this sparse learning framework is promising for recognition of AU combinations.
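A minimal sketch of the dictionary construction and max-coefficient decision rule, assuming Gabor feature vectors have already been extracted; the Lasso solver, its regularization weight, and the dictionary sizes are illustrative stand-ins for the paper's L1-minimization setup.

```python
import numpy as np
from sklearn.linear_model import Lasso

def build_dictionary(class_means, n_random_atoms, rng):
    """Dictionary columns: one mean Gabor-feature vector per AU combination
    (class_means, a list of 1-D arrays), padded with random Gaussian atoms;
    columns are normalized so coefficient magnitudes are comparable."""
    n_features = class_means[0].shape[0]
    random_atoms = rng.standard_normal((n_features, n_random_atoms))
    D = np.column_stack(list(class_means) + [random_atoms])
    return D / np.linalg.norm(D, axis=0, keepdims=True)

def classify_au_combination(x, D, n_classes, alpha=0.01):
    """Sparse-code the test feature vector x against the dictionary via
    L1-regularized regression, then take the index of the largest coefficient
    among the class atoms as the predicted AU combination."""
    code = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(D, x).coef_
    return int(np.argmax(np.abs(code[:n_classes])))
```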
2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, 2009
This paper presents a framework to automatically estimate the gaze direction of an infant in an infant-parent face-to-face interaction. Commercial devices are sometimes used to produce automated measurements of a subject's gaze direction. This approach is intrusive, requires cooperation from the participants, and cannot be employed in interactive face-to-face communication between a parent and their infant. Alternatively, the infant gazes that are at and away from the parent's face may be manually coded from captured videos by a human expert. However, this approach is labor intensive. A preferred alternative would be to automatically estimate the gaze direction of participants from captured videos. The realization of such a system will help psychological scientists readily study and understand the early attention of infants. One of the problems in eye-region image analysis is the large dimensionality of the visual data. We address this problem by employing the spectral regression technique to project high-dimensional eye-region images into a low-dimensional subspace. The eye-region images represented in the low-dimensional subspace are used to train a Support Vector Machine (SVM) classifier to predict the gaze direction (i.e., either looking at the parent's face or looking away from it). The analysis of more than 39,000 video frames of naturalistic gaze shifts of multiple infants demonstrates significant agreement between a human coder and our approach. These results indicate that the proposed system provides an efficient approach to automating the estimation of gaze direction in naturalistic gaze shifts.
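A hedged sketch of the dimensionality-reduction-plus-SVM pipeline, assuming flattened eye-region images and binary gaze labels are already available; because scikit-learn does not provide spectral regression, PCA stands in for that projection step here, which is a substitution rather than the paper's method.

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def train_gaze_classifier(eye_patches, gaze_labels, n_dims=50):
    """eye_patches: (n_frames, height*width) flattened eye-region images;
    gaze_labels: 1 = looking at the parent's face, 0 = looking away.
    PCA stands in here for the paper's spectral regression projection."""
    model = make_pipeline(PCA(n_components=n_dims), SVC(kernel="rbf"))
    cv_accuracy = cross_val_score(model, eye_patches, gaze_labels, cv=5).mean()
    return model.fit(eye_patches, gaze_labels), cv_accuracy
```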
2007 IEEE Conference on Computer Vision and Pattern Recognition, 2007
Appearance Models (AM) are commonly used to model appearance and shape variation of objects in images. In particular, they have proven useful for detection, tracking, and synthesis of people's faces from video. While AMs have numerous advantages relative to alternative approaches, they have at least two important drawbacks. First, they are especially prone to local minima in fitting; this problem becomes increasingly severe as the number of parameters to estimate grows. Second, often few if any of the local minima correspond to the correct location of the model. To address these problems, we propose Filtered Component Analysis (FCA), an extension of traditional Principal Component Analysis (PCA). FCA learns an optimal set of filters with which to build a multi-band representation of the object. FCA representations were found to be more robust than either grayscale or Gabor filters to problems of local minima. The effectiveness and robustness of the proposed algorithm are demonstrated on both synthetic and real data.
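A heavily hedged sketch of the multi-band representation only: images are convolved with a supplied filter bank and PCA is run on the concatenated responses. The defining step of FCA, learning the optimal filters, is not reproduced here.

```python
import numpy as np
from scipy import ndimage
from sklearn.decomposition import PCA

def multiband_pca(images, filter_bank, n_components=20):
    """Build a multi-band representation by convolving each image with a
    given filter bank and run PCA on the concatenated responses. The filter
    bank is assumed to be supplied; learning the filters, which is the core
    contribution of FCA, is not reproduced in this sketch."""
    feats = []
    for img in images:
        bands = [ndimage.convolve(img, f, mode="nearest").ravel() for f in filter_bank]
        feats.append(np.concatenate(bands))
    return PCA(n_components=n_components).fit(np.asarray(feats))
```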
2007 IEEE 11th International Conference on Computer Vision, 2007
Temporal segmentation of facial gestures in spontaneous facial behavior recorded in real-world settings is an important, unsolved, and relatively unexplored problem in facial image analysis. Several issues contribute to the challenge of this task. These include non-frontal pose, moderate to large out-of-plane head motion, large variability in the temporal scale of facial gestures, and the exponential number of possible facial action combinations. To address these challenges, we propose a two-step approach to temporally segment facial behavior. The first step uses spectral graph techniques to cluster shape and appearance features invariant to some geometric transformations. The second step groups the clusters into temporally coherent facial gestures. We evaluated this method on facial behavior recorded during face-to-face interactions. The video data were originally collected to answer substantive questions in psychology without concern for algorithm development. The method achieved moderate convergent validity with manual FACS (Facial Action Coding System) annotation. Further, when used to preprocess video for manual FACS annotation, the method significantly improves productivity, thus addressing the need for ground-truth data for facial image analysis. Moreover, we were also able to detect unusual facial behavior.
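A minimal two-step sketch under the assumption that per-frame shape/appearance features are already computed: spectral clustering assigns frames to clusters, and runs of consecutive frames sharing a label are grouped into temporally coherent segments. The affinity, cluster count, and minimum run length are illustrative choices.

```python
from sklearn.cluster import SpectralClustering

def segment_facial_behavior(frame_features, n_clusters=10, min_len=5):
    """Step 1: spectral clustering of per-frame shape/appearance features
    (assumed to be precomputed). Step 2: group runs of consecutive frames
    sharing a cluster label into temporally coherent segments; runs shorter
    than min_len frames are discarded."""
    labels = SpectralClustering(n_clusters=n_clusters,
                                affinity="nearest_neighbors").fit_predict(frame_features)
    segments, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            if t - start >= min_len:
                segments.append((start, t, int(labels[start])))  # [start, t) and its label
            start = t
    return segments
```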
New Directions for Child and Adolescent Development, 1986
Journal of the American Academy of Child & Adolescent Psychiatry, 1989
Mother-infant face-to-face interaction is central to infant socioemotional development. Little has been known about the mechanisms that mediate the mother's influence. Findings are reviewed from a series of laboratory studies that suggest the major functional components of a mother's behavior are its affective quality and its contingent relationship to her baby's behavior. Quality of mother's affective expression accounted for individual differences in the behavior of thirteen 7-month-old infants living in multiproblem families. Infants' response was specific to the type of affective expression mothers displayed. Flat, withdrawn maternal affective expression was associated with infant distress. Intrusive maternal expression was associated with increased gaze aversion. Lack of contingent responsiveness was common to all but four mothers. Findings suggest that withdrawn or intrusive maternal affective expression, together with lack of contingent responsiveness, may in part be responsible for the risk-status of infants in multiproblem families.
Journal of Experimental Psychology: Human Perception and Performance, 2011
During conversation, women tend to nod their heads more frequently and more vigorously than men. An individual speaking with a woman tends to nod his or her head more than when speaking with a man. Is this due to social expectation or due to coupled motion dynamics between the speakers? We present a novel methodology that allows us to randomly assign apparent identity during free conversation in a videoconference, thereby dissociating apparent sex from motion dynamics. The method uses motion-tracked synthesized avatars that are accepted by naive participants as being live video. We find that 1) motion dynamics affect head movements but that apparent sex does not; 2) judgments of sex are driven almost entirely by appearance; and 3) ratings of masculinity and femininity rely on a combination of both appearance and dynamics. Together, these findings are consistent with the hypothesis of separate perceptual streams for appearance and biological motion. In addition, our results are consistent with a view that head movements in conversation form a low-level perception and action system that can operate independently from top-down social expectations.
International Journal of Psychophysiology, 2004
A variety of procedures have been proposed to correct ocular artifacts in the electroencephalogram (EEG), including methods based on regression, principal components analysis (PCA) and independent component analysis (ICA). The current study compared these three methods, and it evaluated a modified regression approach using Bayesian adaptive regression splines to filter the electrooculogram (EOG) before computing correction factors. We applied each artifact correction procedure to real and simulated EEG data of varying epoch lengths and then quantified the impact of correction on spectral parameters of the EEG. We found that the adaptive filter improved regression-based artifact correction. An automated PCA method effectively reduced ocular artifacts and resulted in minimal spectral distortion, whereas ICA correction appeared to distort power between 5 and 20 Hz. In general, reducing the epoch length improved the accuracy of estimating spectral power in the alpha (7.5-12.5 Hz) and beta (12.5-19.5 Hz) bands, but it worsened the accuracy for power in the theta (3.5-7.5 Hz) band and distorted time domain features. Results supported the use of regression-based and PCA-based ocular artifact correction and suggested a need for further studies examining possible spectral distortion from ICA-based correction procedures.
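A minimal sketch of the plain regression-based correction that the study's adaptive-spline variant builds on, assuming multichannel EEG and EOG arrays sampled in parallel; the spline filtering of the EOG is not shown.

```python
import numpy as np

def regress_out_eog(eeg, eog):
    """Regression-based ocular correction: estimate per-channel propagation
    factors of the EOG into the EEG by least squares and subtract the
    predicted ocular contribution.
    eeg: (n_eeg_channels, n_samples); eog: (n_eog_channels, n_samples).
    The adaptive-spline filtering evaluated in the paper would be applied
    to `eog` before this step and is omitted here."""
    betas = eeg @ np.linalg.pinv(eog)   # (n_eeg_channels, n_eog_channels)
    return eeg - betas @ eog
```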
Infancy, 2005
Adults' perceptions provide information about the emotional meaning of infant facial expressions.... more Adults' perceptions provide information about the emotional meaning of infant facial expressions. This study asks whether similar facial movements influence adult perceptions of emotional intensity in both infant positive (smile) and negative (cry face) facial expressions. Ninety-five college students rated a series of naturally occurring and digitally edited images of infant facial expressions. Naturally occurring smiles and cry faces involving the co-occurrence of greater lip movement, mouth opening, and eye constriction, were rated as expressing stronger positive and negative emotion, respectively, than expressions without these 3 features. Ratings of digitally edited expressions indicated that eye constriction contributed to higher ratings of positive emotion in smiles (i.e., in Duchenne smiles) and greater eye constriction contributed to higher ratings of negative emotion in cry faces. Stronger mouth opening contributed to higher ratings of arousal in both smiles and cry faces. These findings indicate a set of similar facial movements are linked to perceptions of greater emotional intensity, whether the movements occur in positive or negative infant emotional expressions. This proposal is discussed with reference to discrete, componential, and dynamic systems theories of emotion.
Image and Vision Computing, 2010
A close relationship exists between the advancement of face recognition algorithms and the availability of face databases that vary, in a controlled manner, the factors affecting facial appearance. The CMU PIE database has been very influential in advancing research in face recognition across pose and illumination. Despite its success, the PIE database has several shortcomings: a limited number of subjects, a single recording session, and few captured expressions. To address these issues we collected the CMU Multi-PIE database. It contains 337 subjects, imaged under 15 viewpoints and 19 illumination conditions in up to four recording sessions. In this paper we introduce the database and describe the recording procedure. We furthermore present results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE.
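Two hedged baseline classifiers in the spirit of the PCA and LDA experiments mentioned above, assuming the face images are already cropped, aligned, and flattened into row vectors; the component counts and nearest-neighbor matcher are illustrative choices, not the paper's exact protocol.

```python
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def fit_baselines(train_images, train_ids, n_pca=100):
    """Eigenface-style (PCA + nearest neighbor) and Fisherface-style
    (PCA + LDA) identity classifiers. train_images is an (n_images, n_pixels)
    array of cropped, aligned, flattened faces; train_ids are subject labels."""
    pca_nn = make_pipeline(PCA(n_components=n_pca),
                           KNeighborsClassifier(n_neighbors=1))
    pca_lda = make_pipeline(PCA(n_components=n_pca),
                            LinearDiscriminantAnalysis())
    return pca_nn.fit(train_images, train_ids), pca_lda.fit(train_images, train_ids)
```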
Image and Vision Computing, 2009
Pain is typically assessed by patient self-report. Self-reported pain, however, is difficult to interpret and may be impaired or in some circumstances (e.g., young children and the severely ill) not even possible. To circumvent these problems behavioral scientists have identified reliable and valid facial indicators of pain. Hitherto, these methods have required manual measurement by highly skilled human observers. In this paper we explore an approach for automatically recognizing acute pain without the need for human observers. Specifically, our study was restricted to automatically detecting pain in adult patients with rotator cuff injuries. The system employed video input of the patients as they moved their affected and unaffected shoulder. Two types of ground truth were considered. Sequence-level ground truth consisted of Likert-type ratings by skilled observers. Frame-level ground truth was calculated from the presence/absence and intensity of facial actions previously associated with pain. Active appearance models (AAM) were used to decouple shape and appearance in the digitized face images. Support vector machines (SVM) were compared across several AAM-derived representations and ground truth of varying granularity. We explored two questions pertinent to the construction, design and development of automatic pain detection systems. First, at what level (i.e., sequence- or frame-level) should datasets be labeled in order to obtain satisfactory automatic pain detection performance? Second, how important is it, at both levels of labeling, that we non-rigidly register the face?
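A small sketch of the frame-versus-sequence labeling question, assuming AAM-derived feature vectors per frame are available (the AAM step is not shown): a frame-level SVM is trained where frame labels exist, and its per-frame scores are pooled (max-pooling here, an illustrative choice) into a sequence-level decision for comparison against observer ratings.

```python
from sklearn.svm import SVC

def train_frame_level_detector(frame_features, frame_pain_labels):
    """Frame-level pain detector on AAM-derived feature vectors
    (the AAM fitting and feature extraction are assumed to happen upstream)."""
    return SVC(kernel="rbf", probability=True).fit(frame_features, frame_pain_labels)

def predict_sequence_level(detector, sequence_frames, threshold=0.5):
    """Pool per-frame pain probabilities into one sequence-level decision;
    max-pooling is one simple, illustrative way to compare frame-level
    training against sequence-level ground truth."""
    scores = detector.predict_proba(sequence_frames)[:, 1]
    return bool(scores.max() > threshold), scores
```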
Developmental Psychology, 1988
Abstract 1. Fogel's comment (1988) raises important questions about discrete versus scal... more Abstract 1. Fogel's comment (1988) raises important questions about discrete versus scaling approaches to the description and analysis of mother–infant face-to-face interaction and proposes a theoretical perspective consistent with our findings. We respond to his concerns about the validity and preferred uses of scaled monadic phases and introduce a note of caution about prematurely concluding that stochastic organization alone is of significance to development.(PsycINFO Database Record (c) 2012 APA, all rights reserved)
Developmental Psychology, 1999
Developmental Psychology, 1988
During mother-infant face-to-face interactions, bidirectional influence could be achieved through either the entraining of periodic cycles in the behavior of each partner or through the stochastic organization of behaviors. To determine whether and how bidirectional influence occurs, we used both time- and frequency-domain techniques to study the interactions of 54 mother-infant pairs, 18 each at 3, 6, and 9 months of age. Behavioral descriptors for each mother and infant were scaled to reflect levels of affective involvement during each second of the interaction. Periodic cycles were found in infants' expressive behavior only at 3 months and not in mothers' behavior. Nonperiodic cycles, which were found in some mothers' and infants' behavior at each age, were more common. At no age was the occurrence of cycles in mothers' or infants' behavior related to the achievement of bidirectional influence. Similar proportions of mothers and infants were responsive to moment-to-moment changes in the other's behavior, except at 6 months when the proportion of mothers was higher. Bidirectional influence was brought about by the stochastic organization of behaviors rather than through the mutual entraining of periodic cycles. Early mother-infant face-to-face interactions have a conversation-like pattern in which each partner appears to be responsive to the other. The assumption that this pattern is actually achieved by bidirectional influence has been seriously questioned in a series of papers (Gottman & Ringland, 1981; Thomas & Malone, 1979; Thomas & Martin, 1976). Few studies have rigorously tested the null hypothesis that during face-to-face interactions moment-to-moment changes in the infant's behavior are independent of changes in the mother's behavior. Three studies that did test the null hypothesis (Gottman & Ringland, 1981; Hayes, 1984; Thomas & Malone, 1979) failed to reject it. Two types of organization of the infants' behavior, periodic or stochastic, would permit the mother to create the semblance of bidirectional influence. Periodic events cycle on and off at regular, precise intervals, permitting highly accurate prediction of the timing of future events. A periodic cycle is deterministic in that the frequency, phase, and amplitude do not vary over time (Gottman, 1981). Alternatively, stochastic events are autocorrelated over short intervals; that is, sequences occur nonrandomly (e.g., smiles following the onset of visual regard; Kaye & Fogel, 1980). Depending on the type of autocorrelation, sequences may also be cyclic, but not periodic. Cohn and Tronick
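A hedged sketch of the kind of time- and frequency-domain description used above, applied to a second-by-second affective-involvement series: a sharp dominant peak in the periodogram points toward a periodic cycle, while a broad spectrum with nonzero short-lag autocorrelation is more consistent with stochastic organization. Thresholds for calling a peak dominant are left to the analyst.

```python
import numpy as np
from scipy.signal import periodogram

def describe_cycles(series, fs=1.0):
    """Frequency- and time-domain summary of a second-by-second
    affective-involvement series: the periodogram's dominant peak (a sharp
    peak suggests a periodic cycle) and the lag-1 autocorrelation (nonzero
    values with a broad spectrum suggest stochastic organization)."""
    x = np.asarray(series, dtype=float) - np.mean(series)
    freqs, power = periodogram(x, fs=fs)
    peak_idx = np.argmax(power[1:]) + 1          # skip the zero-frequency bin
    lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
    return freqs[peak_idx], power[peak_idx], lag1
```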
Developmental Psychology, 1990
Depression's influence on mother-infant interactious at 2 months postpartum was studied in 24 dep... more Depression's influence on mother-infant interactious at 2 months postpartum was studied in 24 depressed and 22 nondepressed mothex-infant dyads. Depression was diagnosed using the SADS-L and RDC. In S's homes, structured interactions of 3 min duration were videotaped and later coded using behavioral descriptors and a l-s time base. Unstructured interactions were described using rating scales. During structured interactions, depressed mothers were more negative and their babies were less positive than were nondepressed dyads. The reduced positivity of depressed dyads was achieved through contingent resixmfiveness. Ratings from unstructured interactions were consistent with these findings. Results support the hypothesis that depression negatively influences motherinfant behaviol; but indicate that influence may vary with development, chronicity, and presence of other risk factors. Ten to 15% of postpartum women develop a moderate, clinically significant depressive reaction that is more prolonged than the "blues" and serious enough to interfere with daily functioning (O'Hara, Neunaber, & Zekosld, 1984). Postpartum depression is thus a potentially important mental health problem for families with young infants. Studies of depression in the postpartum period, however, have focused almost exclusively on the woman herself (see Hopkins, Marcus, & Campbell, 1984, for a review) and have ignored the possible negative effects depression may have on her relationship with her infant and her infant's development. Postpartum depression would be expected to interfere with optimal mothering. The importance of positive expression and responsive caretaking in early infancy has been well documented in numerous studies (e.g., Ainsworth, Blehar, Waters, & Wall, 1978; Belsky, Rovine, & Taylor, 1984), whereas maternal insensitivity and unavailability have been associated with a range of difficulties in adaptation in infancy and early childhood (e.g., Sronfe, 1983). Recent studies have confirmed earlier clinical observations by Weissman and Paykel (1974) about the negative impact of maternal depression on young children's development. Research on the offspring of women with major depressive disorders has indicated a range of cognitive and social problems in toddlers, preschoolers, and school age children (Beardslee,
The Cleft Palate-Craniofacial Journal, 2006
Objective To examine and compare social acceptance, social behavior, and facial movements of children with and without oral clefts in an experimental setting. Design Two groups of children (with and without oral clefts) were videotaped in a structured social interaction with a peer confederate, when listening to emotional stories, and when told to pose specific facial expressions. Participants Twenty-four children and adolescents ages 7 to 16½ years with oral clefts were group matched for gender, grade, and socioeconomic status with 25 noncleft controls. Main Outcome Measures Specific social and facial behaviors coded from videotapes; Harter Self-Perception Profile, Social Acceptance subscale. Results Significant between-group differences were obtained. Children in the cleft group more often displayed "Tongue Out," "Eye Contact," "Mimicry," and "Initiates Conversation." For the cleft group, "Gaze Avoidance" was significantly negatively correlated with social acceptance scores. The...