Philippe Schyns - Academia.edu
Papers by Philippe Schyns
Psychologica Belgica, 1995
... (Eds.), Similarity and analogical reasoning. Cambridge: Cambridge University Press. ... The lobsters are composed of the same set of features, which changed shape across stimuli. ... Figure 5: These pictures present exemplars of the Martian Landscape stimuli used in an ...
Behavioral and Brain Sciences, 1998
The origin of features from nonfeatural information is a problem that should concern all theories of object categorization and recognition, not just the flexible feature approach. In contrast to the idea that new features must originate from combinations of simpler fixed features, we argue that holistic features can be created from a direct imprinting on the visual medium. Furthermore, featural descriptions can emerge from processes that by themselves do not operate on feature detectors. Once acquired, features can be decomposed into ...
Central to all human interaction is the mutual understanding of emotions, achieved primarily by a set of biologically rooted social signals that evolved for this purpose: facial expressions of emotion. Although facial expressions are widely considered to be the universal language of emotion [1-3], some negative facial expressions consistently elicit lower recognition levels among Eastern compared to Western groups (see for a meta-analysis and [5, 6] for review). Here, focusing on the decoding of facial expression signals, we merge behavioral and computational analyses with novel spatiotemporal analyses of eye movements, showing that Eastern observers use a culture-specific decoding strategy that is inadequate to reliably distinguish the universal facial expressions of "fear" and "disgust." Rather than distributing their fixations evenly across the face as Westerners do, Eastern observers persistently fixate the eye region. Using a model information sampler, we demonstrate that by persistently fixating the eyes, Eastern observers sample ambiguous information, thus causing significant confusion. Our results question the universality of human facial expressions of emotion, highlighting their true complexity, with critical consequences for cross-cultural communication and globalization.
Current Biology, 2015
Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on the intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception.
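The directed "causal connectivity" measure mentioned above, transfer entropy, quantifies how much the past of one signal improves prediction of another's future beyond that signal's own past. The following is a minimal, hypothetical sketch (plug-in estimator, history length 1, median-split binning; the paper's actual MEG analysis is far more elaborate), with toy "frontal" and "auditory" signals standing in for real data:

```python
import numpy as np

def transfer_entropy(x, y, bins=2):
    """Plug-in transfer entropy TE(X -> Y) in bits, history length 1,
    estimated from quantile-binned (here: median-split) signals."""
    def edges(s):
        return np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
    x = np.digitize(x, edges(x))
    y = np.digitize(y, edges(y))
    y_next, y_past, x_past = y[1:], y[:-1], x[:-1]
    te = 0.0
    for yn in range(bins):
        for yp in range(bins):
            for xp in range(bins):
                # Joint probability of (y_next, y_past, x_past) state.
                p_xyz = np.mean((y_next == yn) & (y_past == yp) & (x_past == xp))
                if p_xyz == 0:
                    continue
                p_yz = np.mean((y_past == yp) & (x_past == xp))
                p_yy = np.mean((y_next == yn) & (y_past == yp))
                p_y = np.mean(y_past == yp)
                # Conditional mutual information I(y_next; x_past | y_past).
                te += p_xyz * np.log2((p_xyz / p_yz) / (p_yy / p_y))
    return te

rng = np.random.default_rng(2)
x = rng.standard_normal(5000)                        # toy 'frontal' signal
y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)  # toy 'auditory' signal driven by past x

te_xy = transfer_entropy(x, y)  # large: x's past predicts y's future
te_yx = transfer_entropy(y, x)  # near zero: y adds nothing about x's future
```

Because the toy auditory signal is built from the frontal signal's past, the estimator recovers the asymmetry: TE(X → Y) clearly exceeds TE(Y → X).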
Journal of Experimental Psychology: General, 2012
Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture, as an intricate system of social concepts and beliefs, could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, the authors used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA internal ...
Vision Research, 2004
Murray and Gold discuss two "shortcomings" of the Bubbles method [Vision Research 41 (2001) 2261]. The first one is theoretical: Bubbles would not fully characterize the LAM (Linear Amplifier Model) observer, whereas reverse correlation would. The second "shortcoming" is practical: the apertures that partly reveal information in a typical Bubbles experiment would induce atypical strategies in human observers, whereas the additive Gaussian white noise used by Murray and Gold (and others) in conjunction with reverse correlation would not. Here, we show that these claims are unfounded.
Vision Research, 2001
Every day, people flexibly perform different categorizations of common faces, objects and scenes. Intuition and scattered evidence suggest that these categorizations require the use of different visual information from the input. However, there is no unifying method, based on the categorization performance of subjects, that can isolate the information used. To this end, we developed Bubbles, a general technique that can assign the credit of human categorization performance to specific visual information. To illustrate the technique, we applied Bubbles to three categorization tasks (gender, expressive or not, and identity) on the same set of faces, with human and ideal observers to compare the features they used.
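The credit-assignment logic of Bubbles can be sketched in a few lines (a hypothetical illustration, not the authors' implementation): each trial reveals the stimulus through randomly placed Gaussian apertures, and pixels are credited by comparing the average mask on correct trials against the average mask overall. The "diagnostic region" and the toy observer below are invented for the demo:

```python
import numpy as np

def bubbles_mask(shape, n_bubbles=10, sigma=4.0, rng=None):
    """One trial's mask: randomly placed Gaussian apertures ('bubbles')."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros(shape)
    for _ in range(n_bubbles):
        cy, cx = rng.integers(0, h), rng.integers(0, w)
        mask += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    return np.clip(mask, 0.0, 1.0)

def classification_image(masks, correct):
    """Credit pixels for performance: mean mask on correct trials
    minus the mean mask over all trials."""
    masks = np.asarray(masks)
    correct = np.asarray(correct, dtype=bool)
    return masks[correct].mean(axis=0) - masks.mean(axis=0)

rng = np.random.default_rng(0)
masks = [bubbles_mask((64, 64), rng=rng) for _ in range(200)]

# Toy observer: more likely correct when the mask reveals a (hypothetical)
# diagnostic region -- here rows 20:28, columns 16:48.
scores = np.array([m[20:28, 16:48].mean() for m in masks])
correct = scores > np.median(scores)

ci = classification_image(masks, correct)  # bright where information drove performance
```

On this toy setup the classification image lights up over the diagnostic region, which is the essence of assigning categorization performance to specific visual information.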
Cortex, 2015
The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet, a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions that are assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus, pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or on dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment for categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS.
Altogether, our data provide critical insights into the healthy and impaired face systems, question evidence of deficits obtained from patients using static images of facial expressions, and offer novel routes for patient rehabilitation.
To account for the non-independence of individual observations within a year, and the large differences in sample size among groups, analyses were, where possible, performed on yearly means for the relevant sub-group. When appropriate, yearly means were square-root or arcsine transformed before analysis [24]. All presented means and parameter estimates are back-transformed.
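The transform-then-back-transform step can be illustrated with a minimal sketch (the yearly proportions below are hypothetical numbers invented for the demo):

```python
import numpy as np

# Hypothetical yearly mean proportions for one sub-group.
yearly_props = np.array([0.12, 0.08, 0.15, 0.10, 0.09])

# Arcsine square-root transform: stabilizes the variance of proportions
# so standard analyses apply on the transformed scale.
transformed = np.arcsin(np.sqrt(yearly_props))

mean_t = transformed.mean()        # the analysis operates on this scale
back = float(np.sin(mean_t) ** 2)  # back-transform the estimate for presentation
```

Because the transform is monotone, the back-transformed estimate falls within the range of the raw proportions, which is why presenting back-transformed means is interpretable.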
Fifth International Conference on Graphic and Image Processing (ICGIP 2013), 2014
Facial expressions reflect a character's internal emotional states or responses to social communication. Though much effort has been devoted to generating realistic facial expressions, this remains a challenging topic due to human sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation which reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames, based on FACS, to the conformed target face. Dynamic parameters derived using a psychophysics method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.
NeuroImage, 2007
An important goal of functional neuroimaging has been to localize stimulus-specific processes in the brain. Numerous studies have revealed particular patterns of brain activity in different cortical areas in response to different object categories such as faces, body parts, places, words, letters and so forth. However, quite different patterns of activation have been given a similar interpretation in terms of category or domain specificity. Characteristics other than the response to the target category have sometimes been used to address whether a cortical brain area is functionally specialized for a given stimulus category, such as automatic processing [e.g. Joseph, J., Cerullo, M., Farley, A., Steinmetz, N., Mier, C., 2006. fMRI correlates of cortical specialization and generalization for letter processing. NeuroImage 32, 806-820] or assemblage [Haxby, J.V., Gobbini, M.I., Furey, M.L., Ishai, A., Schouten, J.L., Pietrini, P., 2001. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425-2430]. Here we frame the debate around the notions of category specificity as defined by Fodor [Fodor, J., 1983. The Modularity of Mind. MIT Press, Cambridge, MA., Fodor, J., 2001.
Visual Cognition, 2005
Recent evidence suggests that spatial frequency (SF) processing of simple and complex visual patterns is flexible. The use of spatial scale in scene perception seems to be influenced by people's expectations. However, as yet there is no direct evidence for top-down attentional effects on flexible scale use in scene perception.
Visual Cognition, 2005
We examined the effects of colour cues on the express categorization of natural scenes. Using a go/no-go paradigm sensitive to fast recognition processes, we measured early event-related potential (ERP) correlates of scene categorization to elucidate the processing stage at which colour contributes to scene recognition. Observers were presented with scenes belonging to four colour-diagnostic categories (desert, forest, canyon and coastline). Scenes were presented in one of three forms: diagnostically coloured, nondiagnostically coloured, or greyscale images. In a verification task, observers were instructed to respond whenever the presented stimulus matched a previously presented category name. Reaction times and accuracy were optimal when the stimuli were presented as their original ...
Vision Research, 2006
There is behavioral evidence that different visual categorization tasks on various types of stimuli (e.g., faces) are sensitive to distinct visual characteristics of the same image, for example, spatial frequencies. However, it has been more difficult to address the question of how early in the processing stream this sensitivity to the information relevant to the categorization task emerges. The current study uses scalp event-related potentials recorded in humans to examine how and when information diagnostic to a particular task is processed during that task versus during a task for which it is not diagnostic. Subjects were shown diagnostic and anti-diagnostic face images for both expression and gender decisions (created using Gosselin and Schyns' Bubbles technique), and asked to perform both tasks on all stimuli. Behaviorally, there was a larger advantage of diagnostic over anti-diagnostic facial images when images designed to be diagnostic for a particular task were shown when performing that task, as compared to performing the other task. Most importantly, this interaction was seen in the amplitude of the occipito-temporal N170, a visual component reflecting a perceptual stage of processing associated with the categorization of faces. When participants performed the gender categorization task, the N170 amplitude was larger when they were presented with gender-diagnostic images than with expression-diagnostic images, relative to their respective non-diagnostic stimuli. However, categorizing faces according to their facial expression was not significantly associated with a larger N170 when subjects categorized expression-diagnostic cues relative to gender-diagnostic cues.
These results show that the influence of higher-level task-oriented processing may take place at the level of visual categorization stages for faces, at least for processes relying on diagnostic features shared with facial identity judgments, such as gender cues.
Vision Research, 2006
Observers can use spatial scale information flexibly depending on categorisation task and on their prior sensitisation. Here, we explore whether attentional modulation of spatial frequency processing at early stages of visual analysis may be responsible. In three experiments, we find that observers' perception of spatial frequency (SF) band-limited scene stimuli is determined by the SF content of images previously experienced at that location during a sensitisation phase. We conclude that these findings are consistent with the involvement of relatively early, retinotopically mapped, stages of visual analysis, supporting the attentional modulation of spatial frequency channels account of sensitisation effects.
Trends in Cognitive Sciences, 2006
Vision provides us with an ever-changing neural representation of the world from which we must extract stable object categorizations. We argue that visual analysis involves a fundamental interaction between the observer's top-down categorization goals and the incoming stimulation. Specifically, we discuss the information available for categorization from an analysis of different spatial scales by a bank of flexible, interacting spatial-frequency (SF) channels. We contend that the activity of these channels is not determined simply bottom-up by the stimulus. Instead, we argue that, following perceptual learning, a specification of the diagnostic, object-based SF information dynamically influences the top-down processing of retina-based SF information by these channels. Our analysis of SF processing provides a case study that emphasizes the continuity between higher-level cognition and lower-level perception.
Psychological Science, 2005
This article examines the human face as a transmitter of expression signals and the brain as a decoder of these expression signals. If the face has evolved to optimize transmission of such signals, the basic facial expressions should have minimal overlap in their information. If the brain has evolved to optimize categorization of expressions, it should be efficient with the information available from the transmitter for the task. In this article, we characterize the information underlying the recognition of the six basic facial expression signals and evaluate how efficiently each expression is decoded by the underlying brain structures.
Psychological Science, 2003
Everyone has seen a human face in a cloud, a pebble, or blots on a wall. Evidence of superstitious perceptions has been documented since classical antiquity, but has received little scientific attention. In the study reported here, we used superstitious perceptions in a new principled method to reveal the properties of unobservable object representations in memory. We stimulated the visual system with unstructured white noise. Observers firmly believed that they perceived the letter S in Experiment 1 and a smile on a face in Experiment 2. Using reverse correlation and computational analyses, we rendered the memory representations underlying these superstitious perceptions.
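The reverse-correlation step can be sketched in a few lines (a hypothetical toy model, not the study's code): a simulated observer "sees" the target whenever a white-noise field happens to correlate with an internal template, and averaging the noise on "present" versus "absent" trials recovers that template. The bar-shaped template below is an invented stand-in for a remembered shape:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical internal template standing in for a remembered shape
# (e.g., a bright bar where the observer expects a 'smile').
template = np.zeros((32, 32))
template[22:25, 8:24] = 1.0

def says_present(noise, template, criterion=0.0):
    """Toy observer: reports seeing the shape when the noise field
    correlates with the internal template above a criterion."""
    return float(np.sum(noise * (template - template.mean()))) > criterion

n_trials = 2000
noises = rng.standard_normal((n_trials, 32, 32))
yes = np.array([says_present(n, template) for n in noises])

# Classification image: mean noise on 'present' trials minus 'absent' trials.
# It recovers the template the observer was implicitly matching against.
ci = noises[yes].mean(axis=0) - noises[~yes].mean(axis=0)
```

Even though every stimulus is pure noise, the classification image is brightest where the internal template lies, which is how superstitious perceptions can reveal memory representations.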
Psychological Science, 2004
Examining the receptive fields of brain signals can elucidate how information impinging on the former modulates the latter. We applied this time-honored approach in early vision to the higher-level brain processes underlying face categorizations. Electroencephalograms in response to face-information samples were recorded while observers resolved two different categorizations (gender, expressive or not). Using a method with low bias and low variance, we compared, in a common space of information states, the information determining behavior (accuracy and reaction time) with the information that modulates emergent brain signals associated with early face encoding and later category decision. Our results provide a time line for face processing in which selective attention to diagnostic information for categorizing stimuli (the eyes and their second-order relationships in gender categorization; the mouth in expressive-or-not categorization) correlates with late electrophysiological (P300) activity, whereas early face-sensitive occipitotemporal (N170) activity is mainly driven by the contralateral eye, irrespective of the categorization task.
Psychologica Belgica, 1995
... (Eds.), Similarity and analogial reasoning. Cambridge: Cambridge University Press. ... The lo... more ... (Eds.), Similarity and analogial reasoning. Cambridge: Cambridge University Press. ... The lobsters are composed of the same set of features which changed shape across stimuli. ... Figure 5: These pictures present exemplars of the Martian Landscape stimuli used in an ...
Behavioral and Brain Sciences, 1998
Abstract The origin of features from nonfeatural information is a problem that should concern all... more Abstract The origin of features from nonfeatural information is a problem that should concern all theories of object categorization and recognition, not just the flexible feature approach. In contrast to the idea that new features must originate from combinations of simpler fixed features, we argue that holistic features can be created from a direct imprinting on the visual medium. Furthermore, featural descriptions can emerge from processes that by themselves do not operate on feature detectors. Once acquired, features can be decomposed into ...
Central to all human interaction is the mutual understanding of emotions, achieved primarily by a... more Central to all human interaction is the mutual understanding of emotions, achieved primarily by a set of biologically rooted social signals evolved for this purpose-facial expressions of emotion. Although facial expressions are widely considered to be the universal language of emotion [1-3], some negative facial expressions consistently elicit lower recognition levels among Eastern compared to Western groups (see for a meta-analysis and [5, 6] for review). Here, focusing on the decoding of facial expression signals, we merge behavioral and computational analyses with novel spatiotemporal analyses of eye movements, showing that Eastern observers use a culture-specific decoding strategy that is inadequate to reliably distinguish universal facial expressions of ''fear'' and ''disgust.'' Rather than distributing their fixations evenly across the face as Westerners do, Eastern observers persistently fixate the eye region. Using a model information sampler, we demonstrate that by persistently fixating the eyes, Eastern observers sample ambiguous information, thus causing significant confusion. Our results question the universality of human facial expressions of emotion, highlighting their true complexity, with critical consequences for crosscultural communication and globalization.
Current Biology, 2015
Humans show a remarkable ability to understand continuous speech even under adverse listening con... more Humans show a remarkable ability to understand continuous speech even under adverse listening conditions. This ability critically relies on dynamically updated predictions of incoming sensory information, but exactly how top-down predictions improve speech processing is still unclear. Brain oscillations are a likely mechanism for these top-down predictions [1, 2]. Quasi-rhythmic components in speech are known to entrain low-frequency oscillations in auditory areas [3, 4], and this entrainment increases with intelligibility [5]. We hypothesize that top-down signals from frontal brain areas causally modulate the phase of brain oscillations in auditory cortex. We use magnetoencephalography (MEG) to monitor brain oscillations in 22 participants during continuous speech perception. We characterize prominent spectral components of speech-brain coupling in auditory cortex and use causal connectivity analysis (transfer entropy) to identify the top-down signals driving this coupling more strongly during intelligible speech than during unintelligible speech. We report three main findings. First, frontal and motor cortices significantly modulate the phase of speech-coupled low-frequency oscillations in auditory cortex, and this effect depends on intelligibility of speech. Second, top-down signals are significantly stronger for left auditory cortex than for right auditory cortex. Third, speech-auditory cortex coupling is enhanced as a function of stronger top-down signals. Together, our results suggest that low-frequency brain oscillations play a role in implementing predictive top-down control during continuous speech perception and that top-down control is largely directed at left auditory cortex. This suggests a close relationship between (left-lateralized) speech production areas and the implementation of top-down control in continuous speech perception.
An important goal of functional neuroimaging has been to localize stimulus-specific processes in ... more An important goal of functional neuroimaging has been to localize stimulus-specific processes in the brain. Numerous studies have revealed particular patterns of brain activity in different cortical areas in response to different object categories such as faces, body parts, places, words, letters and so forth. However, quite different patterns of activation have been given a similar interpretation in terms of category or domain specificity. Other characteristics than the response to the target category have sometimes been used to address whether a cortical brain area is functionally specialized for a given stimulus category, such as automatic processing [e.g. Joseph, J., Cerullo, M., Farley, A., Steinmetz, N., Mier, C., 2006. fMRI correlates of cortical specialization and generalization for letter processing. NeuroImage 32, 806-820] or assemblage [Haxby, J.V., Gobbini, M.I., Furey, M.L., Ishai, A., Schouten, J.L., Pietrini, P., 2001. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425-2430]. Here we frame the debate around the notions of category specificity as defined by Fodor [Fodor, J., 1983. The modularity of Mind. MIT Press, Cambridge, MA., Fodor, J., 2001.
Journal of experimental psychology. General, 2012
Facial expressions have long been considered the "universal language of emotion." Yet c... more Facial expressions have long been considered the "universal language of emotion." Yet consistent cultural differences in the recognition of facial expressions contradict such notions (e.g., R. E. Jack, C. Blais, C. Scheepers, P. G. Schyns, & R. Caldara, 2009). Rather, culture--as an intricate system of social concepts and beliefs--could generate different expectations (i.e., internal representations) of facial expression signals. To investigate, they used a powerful psychophysical technique (reverse correlation) to estimate the observer-specific internal representations of the 6 basic facial expressions of emotion (i.e., happy, surprise, fear, disgust, anger, and sad) in two culturally distinct groups (i.e., Western Caucasian [WC] and East Asian [EA]). Using complementary statistical image analyses, cultural specificity was directly revealed in these representations. Specifically, whereas WC internal representations predominantly featured the eyebrows and mouth, EA interna...
Vision research, 2004
Murray and Gold discuss two "shortcomings" of the Bubbles method [Vision Research 41 (2... more Murray and Gold discuss two "shortcomings" of the Bubbles method [Vision Research 41 (2001) 2261]. The first one is theoretical: Bubbles would not fully characterize the LAM (Linear Amplifier Model) observer, whereas reverse correlation would. The second "shortcoming" is practical: the apertures that partly reveal information in a typical Bubbles experiment would induce atypical strategies in human observers, whereas the additive Gaussian white noise used by Murray and Gold (and others) in conjunction with reverse correlation would not. Here, we show that these claims are unfounded.
Vision research, 2001
Everyday, people flexibly perform different categorizations of common faces, objects and scenes. ... more Everyday, people flexibly perform different categorizations of common faces, objects and scenes. Intuition and scattered evidence suggest that these categorizations require the use of different visual information from the input. However, there is no unifying method, based on the categorization performance of subjects, that can isolate the information used. To this end, we developed Bubbles, a general technique that can assign the credit of human categorization performance to specific visual information. To illustrate the technique, we applied Bubbles on three categorization tasks (gender, expressive or not and identity) on the same set of faces, with human and ideal observers to compare the features they used.
Cortex, 2015
The human face transmits a wealth of signals that readily provide crucial information for social interactions, such as facial identity and emotional expression. Yet a fundamental question remains unresolved: does the face information for identity and emotional expression categorization tap into common or distinct representational systems? To address this question we tested PS, a pure case of acquired prosopagnosia with bilateral occipitotemporal lesions anatomically sparing the regions assumed to contribute to facial expression (de)coding (i.e., the amygdala, the insula and the posterior superior temporal sulcus, pSTS). We previously demonstrated that PS does not use information from the eye region to identify faces, but relies on the suboptimal mouth region. PS's abnormal information use for identity, coupled with her neural dissociation, provides a unique opportunity to probe the existence of a dichotomy in the face representational system. To reconstruct the mental models of the six basic facial expressions of emotion in PS and age-matched healthy observers, we used a novel reverse correlation technique tracking information use on dynamic faces. PS was comparable to controls, using all facial features to (de)code facial expressions with the exception of fear. PS's normal (de)coding of dynamic facial expressions suggests that the face system relies either on distinct representational systems for identity and expression, or on dissociable cortical pathways to access them. Interestingly, PS showed a selective impairment in categorizing many static facial expressions, which could be accounted for by her lesion in the right inferior occipital gyrus. PS's advantage for dynamic facial expressions might instead relate to a functionally distinct and sufficient cortical pathway directly connecting the early visual cortex to the spared pSTS.
Altogether, our data provide critical insights into the healthy and impaired face systems, question evidence of deficits obtained from patients using static images of facial expressions, and offer novel routes for patient rehabilitation.
To account for the non-independence of individual observations within a year, and the large differences in sample size among groups, analyses were, where possible, performed on yearly means for the relevant sub-group. When appropriate, yearly means were square-root or arcsine transformed before analysis [24]. All presented means and parameter estimates are back-transformed.
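For proportions, the arcsine square-root transform mentioned here stabilizes variance before averaging, and means are then mapped back to the original scale. A minimal sketch, with illustrative values rather than data from the paper:

```python
import numpy as np

def arcsine_sqrt(p):
    """Variance-stabilizing transform for proportions in [0, 1]."""
    return np.arcsin(np.sqrt(p))

def back_transform(x):
    """Invert the arcsine square-root transform."""
    return np.sin(x) ** 2

# Yearly mean computed on the transformed scale, then back-transformed
# (illustrative proportions, not data from the paper).
props = np.array([0.10, 0.20, 0.40])
yearly_mean = back_transform(arcsine_sqrt(props).mean())
```

Note that the back-transformed mean differs from the raw arithmetic mean of the proportions; this is expected, since averaging is done on the variance-stabilized scale.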
Fifth International Conference on Graphic and Image Processing (ICGIP 2013), 2014
Facial expressions reflect a character's internal emotional states or responses to social communication. Though much effort has been devoted to generating realistic facial expressions, it remains a challenging topic because of human sensitivity to subtle facial movements. In this paper, we present a method for facial animation generation that reflects true facial muscle movements with high fidelity. An intermediate model space is introduced to transfer captured static AU peak frames, based on FACS, to the conformed target face. Dynamic parameters derived using a psychophysical method are then integrated to generate facial animation, which is assumed to represent the natural correlation of multiple AUs. Finally, the animation sequence in the intermediate model space is mapped to the target face to produce the final animation.
Neuroimage, 2007
An important goal of functional neuroimaging has been to localize stimulus-specific processes in the brain. Numerous studies have revealed particular patterns of brain activity in different cortical areas in response to different object categories such as faces, body parts, places, words, letters and so forth. However, quite different patterns of activation have been given a similar interpretation in terms of category or domain specificity. Characteristics other than the response to the target category have sometimes been used to address whether a cortical brain area is functionally specialized for a given stimulus category, such as automatic processing [e.g. Joseph, J., Cerullo, M., Farley, A., Steinmetz, N., Mier, C., 2006. fMRI correlates of cortical specialization and generalization for letter processing. NeuroImage 32, 806-820] or assemblage [Haxby, J.V., Gobbini, M.I., Furey, M.L., Ishai, A., Schouten, J.L., Pietrini, P., 2001. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science 293, 2425-2430]. Here we frame the debate around the notions of category specificity as defined by Fodor [Fodor, J., 1983. The modularity of Mind. MIT Press, Cambridge, MA., Fodor, J., 2001.
Visual Cognition, 2005
Recent evidence suggests that spatial frequency (SF) processing of simple and complex visual patterns is flexible. The use of spatial scale in scene perception seems to be influenced by people's expectations. However, as yet there is no direct evidence for top-down attentional effects on flexible scale use in scene perception.
Visual Cognition, 2005
We examined the effects of colour cues on the express categorization of natural scenes. Using a go/no-go paradigm sensitive to fast recognition processes, we measured early event-related potential (ERP) correlates of scene categorization to elucidate the processing stage at which colour contributes to scene recognition. Observers were presented with scenes belonging to four colour-diagnostic categories (desert, forest, canyon and coastline). Scenes were presented in one of three forms: diagnostically coloured, nondiagnostically coloured, or greyscale images. In a verification task, observers were instructed to respond whenever the presented stimulus matched a previously presented category name. Reaction times and accuracy were optimal when the stimuli were presented as their original
Vision Research, 2006
There is behavioral evidence that different visual categorization tasks on various types of stimuli (e.g., faces) are sensitive to distinct visual characteristics of the same image, for example, spatial frequencies. However, it has been more difficult to address the question of how early in the processing stream this sensitivity to the information relevant to the categorization task emerges. The current study uses scalp event-related potentials recorded in humans to examine how and when information diagnostic to a particular task is processed during that task versus during a task for which it is not diagnostic. Subjects were shown diagnostic and anti-diagnostic face images for both expression and gender decisions (created using Gosselin and Schyns' Bubbles technique), and asked to perform both tasks on all stimuli. Behaviorally, there was a larger advantage of diagnostic over anti-diagnostic facial images when images designed to be diagnostic for a particular task were shown when performing that task, as compared to performing the other task. Most importantly, this interaction was seen in the amplitude of the occipito-temporal N170, a visual component reflecting a perceptual stage of processing associated with the categorization of faces. When participants performed the gender categorization task, the N170 amplitude was larger when they were presented with gender-diagnostic images than with expression-diagnostic images, relative to their respective non-diagnostic stimuli. However, categorizing faces according to their facial expression was not significantly associated with a larger N170 when subjects categorized expression-diagnostic cues relative to gender-diagnostic cues.
These results show that the influence of higher-level task-oriented processing may take place at the level of visual categorization stages for faces, at least for processes relying on shared diagnostic features with facial identity judgments, such as gender cues.
Vision Research, 2006
Observers can use spatial scale information flexibly depending on categorisation task and on their prior sensitisation. Here, we explore whether attentional modulation of spatial frequency processing at early stages of visual analysis may be responsible. In three experiments, we find that observers' perception of spatial frequency (SF) band-limited scene stimuli is determined by the SF content of images previously experienced at that location during a sensitisation phase. We conclude that these findings are consistent with the involvement of relatively early, retinotopically mapped stages of visual analysis, supporting the attentional modulation of spatial frequency channels account of sensitisation effects.
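Band-limited stimuli of the kind described here can be produced by keeping only a ring of the Fourier spectrum. A minimal sketch; the cutoff values and hard-edged filter are illustrative assumptions, not those of the experiments:

```python
import numpy as np

def bandpass_sf(image, low, high):
    """Keep spatial frequencies between low and high (cycles per image)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * h  # frequency bins in cycles per image
    fx = np.fft.fftfreq(w) * w
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    keep = (radius >= low) & (radius <= high)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * keep))
```

Filtering a scene with, for example, `low=2, high=8` leaves only a coarse, blurred version, whereas a high band leaves only fine detail, which is how distinct SF bands can be presented in isolation.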
Trends in Cognitive Sciences, 2006
Vision provides us with an ever-changing neural representation of the world from which we must extract stable object categorizations. We argue that visual analysis involves a fundamental interaction between the observer's top-down categorization goals and the incoming stimulation. Specifically, we discuss the information available for categorization from an analysis of different spatial scales by a bank of flexible, interacting spatial-frequency (SF) channels. We contend that the activity of these channels is not determined simply bottom-up by the stimulus. Instead, we argue that, following perceptual learning, a specification of the diagnostic, object-based SF information dynamically influences the top-down processing of retina-based SF information by these channels. Our analysis of SF processing provides a case study that emphasizes the continuity between higher-level cognition and lower-level perception.
Psychological Science, 2005
This article examines the human face as a transmitter of expression signals and the brain as a decoder of these expression signals. If the face has evolved to optimize transmission of such signals, the basic facial expressions should have minimal overlap in their information. If the brain has evolved to optimize categorization of expressions, it should be efficient with the information available from the transmitter for the task. In this article, we characterize the information underlying the recognition of the six basic facial expression signals and evaluate how efficiently each expression is decoded by the underlying brain structures.
Psychological Science, 2003
Everyone has seen a human face in a cloud, a pebble, or blots on a wall. Evidence of superstitious perceptions has been documented since classical antiquity, but has received little scientific attention. In the study reported here, we used superstitious perceptions in a new principled method to reveal the properties of unobservable object representations in memory. We stimulated the visual system with unstructured white noise. Observers firmly believed that they perceived the letter S in Experiment 1 and a smile on a face in Experiment 2. Using reverse correlation and computational analyses, we rendered the memory representations underlying these superstitious perceptions.
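The reverse-correlation logic can be sketched with a simulated observer: present white-noise fields, split them by the observer's "yes"/"no" responses, and subtract the mean "no" noise from the mean "yes" noise. The difference, the classification image, approximates the observer's internal template. The parameters below (image size, trial count, template-matching rule) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def classification_image(template, n_trials=5000):
    """Recover an internal template from responses to white noise."""
    yes_sum = np.zeros_like(template)
    no_sum = np.zeros_like(template)
    n_yes = n_no = 0
    for _ in range(n_trials):
        noise = rng.standard_normal(template.shape)
        # Simulated observer: says 'yes' when the noise matches the template
        if (noise * template).sum() > 0:
            yes_sum += noise
            n_yes += 1
        else:
            no_sum += noise
            n_no += 1
    return yes_sum / max(n_yes, 1) - no_sum / max(n_no, 1)
```

In the actual studies the responses come from human observers rather than a template rule, but the averaging step that renders the memory representation is the same.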
Psychological Science, 2004
Examining the receptive fields of brain signals can elucidate how the information impinging on those fields modulates the signals. We applied this time-honored approach from early vision to the higher-level brain processes underlying face categorizations. Electroencephalograms in response to face-information samples were recorded while observers resolved two different categorizations (gender, expressive or not). Using a method with low bias and low variance, we compared, in a common space of information states, the information determining behavior (accuracy and reaction time) with the information that modulates emergent brain signals associated with early face encoding and later category decision. Our results provide a time line for face processing in which selective attention to diagnostic information for categorizing stimuli (the eyes and their second-order relationships in gender categorization; the mouth in expressive-or-not categorization) correlates with late electrophysiological (P300) activity, whereas early face-sensitive occipitotemporal (N170) activity is mainly driven by the contralateral eye, irrespective of the categorization task.