Jean Vroomen | Tilburg University

Papers by Jean Vroomen

Research paper thumbnail of A bias-free two-alternative forced choice procedure to examine intersensory illusions applied to the ventriloquist effect by flashes and averted eye-gazes

European Journal of Neuroscience, 2014

Using a new psychophysical method, we compared whether flashes and averted eye-gazes of a cartoon face induce a ventriloquist illusion (an illusory shift of the apparent location of a sound by a visual distracter). With standard psychophysical procedures that measure a direct ventriloquist effect and a ventriloquist aftereffect, we found in human subjects that both types of stimuli induced an illusory shift of sound location. These traditional methods, though, are probably contaminated by response strategies. We therefore developed a new two-alternative forced choice procedure that allows measuring the strength of an intersensory illusion in a bias-free way. With this new procedure we found that flashes, but not averted eye-gazes, induced an illusory shift in sound location. This difference between flashes and eye-gazes was validated in an EEG study in which, again, only flashes illusorily shifted the apparent location of a sound, thereby evoking a mismatch negativity response. These results are important because they highlight that commonly used measures of multisensory illusions are contaminated, whereas there is an easy yet stringent way to measure the strength of an illusion in a bias-free way.
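The abstract does not spell out the computation behind the bias-free measure. As a generic, hypothetical sketch of how signal-detection analysis separates perceptual sensitivity from response bias in a two-alternative task (function and variable names invented here for illustration, not taken from the paper), one can compute d′ from a response table:

```python
from statistics import NormalDist

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from a 2AFC-style response table.
    A log-linear correction (+0.5 / +1.0) avoids infinite z-scores
    when a proportion is exactly 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(h) - z(fa)
```

A uniform response bias shifts hit and false-alarm rates together and cancels out of the difference, which is the sense in which such a measure is bias-free; a d′ of zero means responses carry no information about the stimulus.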

Research paper thumbnail of Neural Correlates of Multisensory Integration of Ecologically Valid Audiovisual Events

Journal of Cognitive Neuroscience, 2007

A question that has emerged over recent years is whether audiovisual (AV) speech perception is a special case of multi-sensory perception. Electrophysiological (ERP) studies have found that auditory neural activity (N1 component of the ERP) induced by speech is suppressed and speeded up when a speech sound is accompanied by concordant lip movements. In Experiment 1, we show that this AV interaction is not speech-specific. Ecologically valid nonspeech AV events (actions performed by an actor such as handclapping) were associated with a similar speeding-up and suppression of auditory N1 amplitude as AV speech (syllables). Experiment 2 demonstrated that these AV interactions were not influenced by whether A and V were congruent or incongruent. In Experiment 3 we show that the AV interaction on N1 was absent when there was no anticipatory visual motion, indicating that the AV interaction only occurred when visual anticipatory motion preceded the sound. These results demonstrate that the visually induced speeding-up and suppression of auditory N1 amplitude reflect multisensory integrative mechanisms of AV events that crucially depend on whether vision predicts when the sound occurs.

Research paper thumbnail of Duration and intonation in emotional speech

Three experiments investigated the role of duration and intonation in the expression of emotions in natural and synthetic speech. Two sentences of an actor portraying seven emotions (neutral, joy, boredom, anger, sadness, fear, indignation) were acoustically analyzed. By copying pitch and duration of the original utterances to a monotonous one, it could be shown that both factors were sufficient to express the various emotions. In the second part, rules about intonation and duration were derived and tested. These rules were applied to resynthesized natural speech and synthetic speech generated from LPC-coded diphones. The results showed that emotions can be expressed accurately by manipulating pitch and duration in a rule-based way.

Research paper thumbnail of Cues to speech segmentation: Evidence from juncture misperceptions and word spotting

Understanding spoken language requires that listeners segment a spoken utterance into words or into some smaller unit from which the lexicon can be accessed. A major difficulty in speech segmentation is the fact that speakers do not provide stable acoustic cues to indicate boundaries between words or segments. At present, it is therefore unclear how to start a lexical access attempt in the absence of a reliable cue about where to start. Several decades of speech research have not yet led to a widely accepted solution for the speech segmentation problem. So far, three proposals have appeared in the literature that are of direct relevance here. One is that the continuous speech stream is categorized into discrete segments which then mediate between the acoustic signal and the lexicon. The second proposal is that there is an explicit mechanism that targets locations in the speech stream where word boundaries are likely to occur. The third is that word segmentation is a by-product of lexical competition. In the present study, these alternatives are considered.

Research paper thumbnail of Visual recalibration of auditory speech identification: A McGurk aftereffect

Psychological Science, 2003

The kinds of aftereffects, indicative of cross-modal recalibration, that are observed after exposure to spatially incongruent inputs from different sensory modalities have not been demonstrated so far for identity incongruence. We show that exposure to incongruent audiovisual speech (producing the well-known McGurk effect) can recalibrate auditory speech identification. In Experiment 1, exposure to an ambiguous sound intermediate between /aba/ and /ada/ dubbed onto a video of a face articulating either /aba/ or /ada/ increased the proportion of /aba/ or /ada/ responses, respectively, during subsequent sound identification trials. Experiment 2 demonstrated the same recalibration effect or the opposite one, fewer /aba/ or /ada/ responses, revealing selective speech adaptation, depending on whether the ambiguous sound or a congruent nonambiguous one was used during exposure. In separate forced-choice identification trials, bimodal stimulus pairs producing these contrasting effects were identically categorized, which makes a role of postperceptual factors in the generation of the effects unlikely.

Research paper thumbnail of Recalibration of temporal order perception by exposure to audio-visual asynchrony

Cognitive Brain Research, 2004

The perception of simultaneity between auditory and visual information is of crucial importance for maintaining a coordinated representation of a multisensory event. Here we show that the perceptual system is able to adaptively recalibrate itself to audio-visual temporal asynchronies. Participants were exposed to a train of sounds and light flashes with a constant time lag ranging from −200 (sound first) to +200 ms (light first). Following this exposure, a temporal order judgement (TOJ) task was performed in which a sound and light were presented with a stimulus onset asynchrony (SOA) chosen from 11 values between −240 and +240 ms. Participants either judged whether the sound or the light was presented first, or whether the sound and light were presented simultaneously or successively. The point of subjective simultaneity (PSS) was, in both cases, shifted in the direction of the exposure lag, indicative of recalibration.
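The PSS in such a TOJ task is typically read off a psychometric function fitted to the response proportions per SOA. A minimal sketch (linear interpolation of the 50% crossing rather than a full cumulative-Gaussian fit, with illustrative numbers that are not data from the paper):

```python
def pss_from_toj(soas, p_light_first):
    """Point of subjective simultaneity: the SOA (ms, positive = light
    first) at which the proportion of 'light first' responses crosses
    50%, found by linear interpolation between adjacent SOAs."""
    pts = sorted(zip(soas, p_light_first))
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        # a sign change (or a hit) of (p - 0.5) brackets the crossing
        if (y0 - 0.5) * (y1 - 0.5) <= 0 and y0 != y1:
            return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)
    raise ValueError("response curve never crosses 50%")
```

After adaptation, the whole response curve shifts, so the crossing point returned here moves in the direction of the exposure lag, which is the recalibration effect the abstract reports.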

Research paper thumbnail of Illusory sound shifts induced by the ventriloquist illusion evoke the mismatch negativity

Neuroscience Letters, 2004

The ventriloquist illusion arises when sounds are mislocated towards a synchronous but spatially discrepant visual event. Here, we investigated the ventriloquist illusion at a neurophysiological level. The question was whether an illusory shift in sound location was reflected in the auditory mismatch negativity (MMN). An 'oddball' paradigm was used whereby simultaneously presented sounds and flashes coming from the same location served as standard. The deviant consisted of a sound originating from the same source as the standard together with a flash at 20° spatial separation, which evoked an illusory sound shift. This illusory sound shift evoked an MMN closely resembling the MMN evoked by an actual sound shift. A visual-only control condition ruled out that the illusory-evoked MMN was confounded by the visual part of the audiovisual deviant. These results indicate that the crossmodal interaction on which the ventriloquist illusion is based takes place automatically at an early processing stage, within 200 ms after stimulus onset.

Research paper thumbnail of Directing spatial attention towards the illusory location of a ventriloquized sound

Acta Psychologica, 2001

In this study, we examined whether ventriloquism can rearrange the external space on which spatial reflexive attention operates. The task was to judge the elevation (up vs. down) of auditory targets delivered in the left or the right periphery, taking no account of side of presentation. Targets were preceded by either auditory, visual, or audiovisual cues to that side. Auditory, but not visual, cues had an effect on the speed of auditory target discrimination. On the other hand, a ventriloquized cue, consisting of a tone in a central location synchronized with a light flash in the periphery, facilitated responses to targets appearing on the same side as the flash. That effect presumably resulted from the attraction of the apparent location of the tone towards the flash, a well-known manifestation of ventriloquism. Ventriloquism thus can reorganize the space in which reflexive attention operates.

Research paper thumbnail of The time course of intermodal binding between seeing and hearing affective information

NeuroReport, 2000

Acknowledgements: Thanks to S. Philippart for technical assistance with the EEG recordings, and thanks to the participants for their patience and interest.

Research paper thumbnail of The perception of emotions by ear and by eye

Cognition & Emotion, 2000

Emotions are expressed in the voice as well as on the face. As a first step to explore the question of their integration, we used a bimodal perception situation modelled after the McGurk paradigm, in which varying degrees of discordance can be created between the affects expressed in a face and in a tone of voice. Experiment 1 showed that subjects can effectively combine information from the two sources, in that identification of the emotion in the face is biased in the direction of the simultaneously presented tone of voice. Experiment 2 showed that this effect occurs also under instructions to base the judgement exclusively on the face. Experiment 3 showed the reverse effect, a bias from the emotion in the face on judgement of the emotion in the voice. These results strongly suggest the existence of mandatory bidirectional links between affect detection structures in vision and audition.

Research paper thumbnail of The ventriloquist effect does not depend on the direction of automatic visual attention

Attention Perception & Psychophysics, 2001

Previously, we showed that the visual bias of auditory sound location, or ventriloquism, does not depend on the direction of deliberate, or endogenous, attention (Bertelson, Vroomen, de Gelder, & Driver, 2000). In the present study, a similar question concerning automatic, or exogenous, attention was examined. The experimental manipulation was based on the fact that exogenous visual attention can be attracted toward a singleton, that is, an item different on some dimension from all other items presented simultaneously. A display was used that consisted of a row of four bright squares with one square, in either the left- or the rightmost position, smaller than the others, serving as the singleton. In Experiment 1, subjects made dichotomous left-right judgments concerning sound bursts, whose successive locations were controlled by a psychophysical staircase procedure and which were presented in synchrony with a display with the singleton either left or right. Results showed that the apparent location of the sound was attracted not toward the singleton, but instead toward the big squares at the opposite end of the display. Experiment 2 was run to check that the singleton effectively attracted exogenous attention. The task was to discriminate target letters presented either on the singleton or on the opposite big square. Performance deteriorated when the target was on the big square opposite the singleton, in comparison with control trials with no singleton, thus showing that the singleton attracted attention away from the target location. In Experiment 3, localization and discrimination trials were mixed randomly so as to control for potential differences in subjects' strategies in the two preceding experiments. Results were as before, showing that the singleton attracted attention, whereas sound localization was shifted away from the singleton. Ventriloquism can thus be dissociated from exogenous visual attention and appears to reflect sensory interactions with little role for the direction of visual spatial attention.
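The psychophysical staircase procedure mentioned above adaptively homes in on the level where left/right responses are equally likely. A minimal illustrative sketch (names, step sizes, and stopping rule are invented here, not the authors' actual implementation):

```python
def staircase(respond, start=30.0, step=4.0, reversals_needed=8):
    """1-up/1-down adaptive staircase for a left/right localization
    judgment: the tested level descends after a 'right' response and
    ascends after a 'left' one, so it converges on the level where the
    two responses are equally likely. `respond(level)` should return
    True for a 'right' response at azimuth `level` (in degrees)."""
    level, direction, reversals = start, None, []
    while len(reversals) < reversals_needed:
        new_dir = -1 if respond(level) else +1
        if direction is not None and new_dir != direction:
            reversals.append(level)   # record the level at each reversal
        direction = new_dir
        level += new_dir * step
    return sum(reversals) / len(reversals)  # average reversal level
```

Averaging the levels at which the track reverses direction gives an estimate of the point of subjective equality, which is how a visual distracter's pull on perceived sound location can be quantified.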

Research paper thumbnail of Exploring the relation between McGurk interference and ventriloquism

Research paper thumbnail of The ventriloquist effect does not depend on the direction of deliberate visual attention

It is well known that discrepancies in the location of synchronized auditory and visual events can lead to mislocalizations of the auditory source, so-called ventriloquism. In two experiments, we tested whether such cross-modal influences on auditory localization depend on deliberate visual attention to the biasing visual event. In Experiment 1, subjects pointed to the apparent source of sounds in the presence or absence of a synchronous peripheral flash. They also monitored for target visual events, either at the location of the peripheral flash or in a central location. Auditory localization was attracted toward the synchronous peripheral flash, but this was unaffected by where deliberate visual attention was directed in the monitoring task. In Experiment 2, bilateral flashes were presented in synchrony with each sound, to provide competing visual attractors. When these visual events were equally salient on the two sides, auditory localization was unaffected by which side subjects monitored for visual targets. When one flash was larger than the other, auditory localization was slightly but reliably attracted toward it, but again regardless of where visual monitoring was required. We conclude that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.

Research paper thumbnail of Is cross-modal integration of emotional expressions independent of attentional resources

In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.

Research paper thumbnail of Face recognition and lip-reading in autism

European Journal of Cognitive Psychology, 1991

Research paper thumbnail of The Spatial Constraint in Intersensory Pairing: No Role in Temporal Ventriloquism

Journal of Experimental Psychology-human Perception and Performance, 2006

A sound presented in temporal proximity to a light can alter the perceived temporal occurrence of that light (temporal ventriloquism). The authors explored whether spatial discordance between the sound and light affects this phenomenon. Participants made temporal order judgments about which of 2 lights appeared first, while they heard sounds before the 1st and after the 2nd light. Sensitivity was higher (i.e., a lower just noticeable difference) when the sound-light interval was approximately 100 ms rather than approximately 0 ms. This temporal ventriloquist effect was unaffected by whether sounds came from the same or a different position as the lights, whether the sounds were static or moved, or whether they came from the same or opposite sides of fixation. Yet, discordant sounds interfered with speeded visual discrimination. These results challenge the view that intersensory interactions in general require spatial correspondence between the stimuli.

Research paper thumbnail of Ventriloquism in patients with unilateral visual neglect

Neuropsychologia, 2000

Can visual stimuli that go undetected, because they are presented in the extinguished region of neglect patients' visual field, nevertheless shift in their direction the apparent location of simultaneous sounds (the well-known 'ventriloquist effect')? This issue was examined using a situation in which each trial involved the simultaneous presentation of a tone over loudspeakers, together with a bright square area on either the left, the right or both sides of fixation. Participants were required to report the presence of squares, and indicate by hand pointing the apparent location of the tone. Five patients with left hemineglect consistently failed to detect the left square, either presented alone or together with another square on the right. Nevertheless, on bimodal trials with a single undetected square to the left, their sound localization was significantly shifted in the direction of that undetected square. By contrast, in bimodal trials with either a single square on the right or a square on each side, their sound localization showed only small and non-significant shifts. This particular result might be due to a combination of low discrimination of lateral sound deviations with variable individual strategies triggered by conscious detection of the right square. The important finding is the crossmodal bias produced by the undetected left visual distractors. It provides a new example of implicit processing of inputs affected by unilateral visual neglect, and on the other hand is consistent with earlier demonstrations of the automaticity of crossmodal bias.

Research paper thumbnail of Temporal Ventriloquism: Sound Modulates the Flash-Lag Effect

Journal of Experimental Psychology-human Perception and Performance, 2004

A sound presented in close temporal proximity to a visual stimulus can alter the perceived temporal dimensions of the visual stimulus (temporal ventriloquism). In this article, the authors demonstrate temporal ventriloquism in the flash-lag effect (FLE), a visual illusion in which a flash appears to lag relative to a moving object. In Experiment 1, the magnitude and the variability of the FLE were reduced, relative to a silent condition, when a noise burst was synchronized with the flash. In Experiment 2, the sound was presented before, at, or after the flash (±~100 ms), and the size of the FLE varied linearly with the delay of the sound. These findings demonstrate that an isolated sound can sharpen the temporal boundaries of a flash and attract its temporal occurrence.
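The linear dependence of FLE size on audio delay reported in Experiment 2 is the kind of relation one would quantify with an ordinary least-squares fit. A minimal sketch with made-up illustrative numbers (not data from the paper):

```python
def ols_line(x, y):
    """Ordinary least-squares fit of y = intercept + slope * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

# Hypothetical FLE sizes at three audio delays (ms); a positive slope
# would mean the flash's perceived timing is pulled toward the sound.
delays = [-100, 0, 100]
fle = [20, 40, 60]
intercept, slope = ols_line(delays, fle)
```

The fitted slope summarizes how strongly the sound's timing attracts the flash's perceived temporal occurrence.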

Research paper thumbnail of The effects of alphabetic-reading competence on language representation in bilingual Chinese subjects

Psychological Research-psychologische Forschung, 1993

The metaphonological abilities of two groups of bilingual Chinese adults residing in the Netherlands were examined. All subjects were able to read Chinese logograms, but those in the alphabetic group had, unlike those in the non-alphabetic group, also acquired some competence in reading Dutch. In Experiment 1, strong, significant differences between the two groups were obtained in the task of deleting the initial consonant of a Dutch spoken pseudo-word and also in a task consisting of segmenting a sentence into progressively smaller fragments, but there was no difference in a rhyme-nonrhyme classification task with pairs of Dutch words. In the latter task, the subjects in the two groups performed at a near-ceiling level. In Experiment 2, a significant difference was obtained again for the consonant-deletion task and no difference with an initial syllabic-vowel-deletion task, but the non-alphabetic subjects performed at a significantly lower level than the alphabetic subjects in the rhyme-judgement task. Taken together, these results are consistent with the earlier evidence that learning a non-alphabetic orthography does not promote awareness of the segmental structure of utterances. On the other hand, they confirm, for a population of Chinese readers, the conclusion drawn earlier from work with illiterate subjects that explicit instruction is more critical for the development of segmental representations of language than of representations of higher levels such as those of rhymes and syllables.

Research paper thumbnail of The Roles of Word Stress and Vowel Harmony in Speech Segmentation

Journal of Memory and Language, 1998

Three experiments investigated the role of word stress and vowel harmony in speech segmentation. Finnish has fixed word stress on the initial syllable, and vowels from a front or back harmony set cannot co-occur within a word. In Experiment 1, we replicated earlier results showing that Finns use a mismatch in vowel harmony as a word boundary cue when the target-initial syllable is unstressed. Listeners found it easier to detect words such as HYmy in PUhymy (harmony mismatch) than in PYhymy (no harmony mismatch). In Experiment 2, words had stressed target-initial syllables (HYmy as in pyHYmy or puHYmy). Reaction times were now faster and the vowel harmony effect was greatly reduced. In Experiment 3, Finnish, Dutch, and French listeners learned to segment an artificial language. Performance was best when the phonological properties of the artificial language matched those of the native one. Finns profited, as in the previous experiments, from vowel harmony and word-initial stress; Dutch listeners profited from word-initial stress; and French listeners profited neither from vowel harmony nor from word-initial stress. Vowel disharmony and word-initial stress are thus language-specific cues to word boundaries.

Research paper thumbnail of A bias-free two-alternative forced choice procedure to examine intersensory illusions applied to the ventriloquist effect by flashes and averted eye-gazes

European Journal of Neuroscience, 2014

We compared with a new psychophysical method whether flashes and averted eye-gazes of a cartoon f... more We compared with a new psychophysical method whether flashes and averted eye-gazes of a cartoon face induce a ventriloquist illusion (an illusory shift of the apparent location of a sound by a visual distracter). With standard psychophysical procedures that measure a direct ventriloquist effect and a ventriloquist aftereffect, we found in human subjects that both types of stimuli induced an illusory shift of sound location. These traditional methods, though, are probably contaminated by response strategies. We therefore developed a new two-alternative forced choice procedure that allows measuring the strength of an intersensory illusion in a bias-free way. With this new procedure we found that only flashes, but not averted eye-gazes, induced an illusory shift in sound location. This difference between flashes and eye-gazes was validated in an EEG study in which again only flashes illusorily shifted the apparent location of a sound thereby evoking a mismatch negativity response. These results are important because they highlight that commonly used measures of multisensory illusions are contaminated while there is an easy yet stringent way to measure the strength of an illusion in a bias-free way.

Research paper thumbnail of Neural Correlates of Multisensory Integration of Ecologically Valid Audiovisual Events

Journal of Cognitive Neuroscience, 2007

A question that has emerged over recent years is whether audiovisual (AV) speech perception is a ... more A question that has emerged over recent years is whether audiovisual (AV) speech perception is a special case of multi-sensory perception. Electrophysiological (ERP) studies have found that auditory neural activity (N1 component of the ERP) induced by speech is suppressed and speeded up when a speech sound is accompanied by concordant lip movements. In Experiment 1, we show that this AV interaction is not speech-specific. Ecologically valid nonspeech AV events (actions performed by an actor such as handclapping) were associated with a similar speeding-up and suppression of auditory N1 amplitude as AV speech (syllables). Experiment 2 demonstrated that these AV interactions were not influenced by whether A and V were congruent or incongruent. In Experiment 3 we show that the AV interaction on N1 was absent when there was no anticipatory visual motion, indicating that the AV interaction only occurred when visual anticipatory motion preceded the sound. These results demonstrate that the visually induced speeding-up and suppression of auditory N1 amplitude reflect multisensory integrative mechanisms of AV events that crucially depend on whether vision predicts when the sound occurs.

Research paper thumbnail of Duration and intonation in emotional speech

Three experiments investigated the role of duration and intonation in the expression of emotions ... more Three experiments investigated the role of duration and intonation in the expression of emotions in natural and synthetic speech. Two sentences of an actor portraying seven emotions (neutral, joy, boredom, anger, sadness, fear, indignation) were acoustically analyzed. By copying pitch and duration of the original utterances to a monotonous one, it could be shown that both factors were sufficient to express the various emotions. In the second part, rules about intonation and duration were derived and tested. These rules were applied to resynthesized natural speech and synthetic speech generated from LPC-coded diphones. The results showed that emotions can be expressed accurately by manipulating pitch and duration in a rule-based way.

Research paper thumbnail of Cues to speech segmentation: Evidence from juncture misperceptions and word spotting

Understanding spoken language requires that listeners segment a spoken utterance into words or in... more Understanding spoken language requires that listeners segment a spoken utterance into words or into some smaller unit from which the lexicon can be accessed. A major difficulty in speech segmentation is the fact that speakers do not provide stable acoustic cues to indicate boundaries between words or segments. At present, it is therefore unclear as to how to start a lexical access attempt in the absence of a reliable cue about where to start. Several decades of speech research have not yet led to a widely accepted solution for the speech segmentation problem. So far, three proposals have appeared in the literature that are of direct relevance here. One is that the continuous speech stream is categorized into discrete segments which then mediate between the acoustic signal and the lexicon. The second proposal is that there is an explicit mechanism that targets locations in the speech stream where word boundaries are likely to occur. The third is that word segmentation is a by-product of lexical competition. In the present study, these alternatives are considered.

Research paper thumbnail of Visual recalibration of auditory speech identification: A McGurk aftereffect

Psychological Science, 2003

The kinds of aftereffects, indicative of cross-modal recalibration, that are observed after expos... more The kinds of aftereffects, indicative of cross-modal recalibration, that are observed after exposure to spatially incongruent inputs from different sensory modalities have not been demonstrated so far for identity incongruence. We show that exposure to incongruent audiovisual speech (producing the well-known McGurk effect) can recalibrate auditory speech identification. In Experiment 1, exposure to an ambiguous sound intermediate between /aba/ and /ada/ dubbed onto a video of a face articulating either /aba/ or /ada/ increased the proportion of /aba/ or /ada/ responses, respectively, during subsequent sound identification trials. Experiment 2 demonstrated the same recalibration effect or the opposite one, fewer /aba/ or /ada/ responses, revealing selective speech adaptation, depending on whether the ambiguous sound or a congruent nonambiguous one was used during exposure. In separate forced-choice identification trials, bimodal stimulus pairs producing these contrasting effects were identically categorized, which makes a role of postperceptual factors in the generation of the effects unlikely.

Research paper thumbnail of Recalibration of temporal order perception by exposure to audio-visual asynchrony

Cognitive Brain Research, 2004

The perception of simultaneity between auditory and visual information is of crucial importance f... more The perception of simultaneity between auditory and visual information is of crucial importance for maintaining a coordinated representation of a multisensory event. Here we show that the perceptual system is able to adaptively recalibrate itself to audio-visual temporal asynchronies. Participants were exposed to a train of sounds and light flashes with a constant time lag ranging from À200 (sound first) to +200 ms (light first). Following this exposure, a temporal order judgement (TOJ) task was performed in which a sound and light were presented with a stimulus onset asynchrony (SOA) chosen from 11 values between À240 and +240 ms. Participants either judged whether the sound or the light was presented first, or whether the sound and light were presented simultaneously or successively. The point of subjective simultaneity (PSS) was, in both cases, shifted in the direction of the exposure lag, indicative of recalibration. D 2004 Elsevier B.V. All rights reserved.

Research paper thumbnail of Illusory sound shifts induced by the ventriloquist illusion evoke the mismatch negativity

Neuroscience Letters, 2004

The ventriloquist illusion arises when sounds are mislocated towards a synchronous but spatially discrepant visual event. Here, we investigated the ventriloquist illusion at a neurophysiological level. The question was whether an illusory shift in sound location was reflected in the auditory mismatch negativity (MMN). An 'oddball' paradigm was used whereby simultaneously presented sounds and flashes coming from the same location served as standard. The deviant consisted of a sound originating from the same source as the standard together with a flash at 20° spatial separation, which evoked an illusory sound shift. This illusory sound shift evoked an MMN closely resembling the MMN evoked by an actual sound shift. A visual-only control condition ruled out that the illusory-evoked MMN was confounded by the visual part of the audiovisual deviant. These results indicate that the crossmodal interaction on which the ventriloquist illusion is based takes place automatically at an early processing stage, within 200 ms after stimulus onset.

Research paper thumbnail of Directing spatial attention towards the illusory location of a ventriloquized sound

Acta Psychologica, 2001

In this study, we examined whether ventriloquism can rearrange the external space on which spatial reflexive attention operates. The task was to judge the elevation (up vs. down) of auditory targets delivered in the left or the right periphery, taking no account of side of presentation. Targets were preceded by either auditory, visual, or audiovisual cues to that side. Auditory, but not visual, cues had an effect on the speed of auditory target discrimination. On the other hand, a ventriloquized cue, consisting of a tone in central location synchronized with a light flash in the periphery, facilitated responses to targets appearing on the same side as the flash. That effect presumably resulted from the attraction of the apparent location of the tone towards the flash, a well-known manifestation of ventriloquism. Ventriloquism thus can reorganize the space in which reflexive attention operates.

Research paper thumbnail of The time course of intermodal binding between seeing and hearing affective information

Games and Economic Behavior, 2000

Acknowledgements: Thanks to S. Philippart for technical assistance with the EEG recordings, and thanks to the participants for their patience and interest.

Research paper thumbnail of The perception of emotions by ear and by eye

Cognition & Emotion, 2000

Emotions are expressed in the voice as well as on the face. As a first step to explore the question of their integration, we used a bimodal perception situation modelled after the McGurk paradigm, in which varying degrees of discordance can be created between the affects expressed in a face and in a tone of voice. Experiment 1 showed that subjects can effectively combine information from the two sources, in that identification of the emotion in the face is biased in the direction of the simultaneously presented tone of voice. Experiment 2 showed that this effect occurs also under instructions to base the judgement exclusively on the face. Experiment 3 showed the reverse effect, a bias from the emotion in the face on judgement of the emotion in the voice. These results strongly suggest the existence of mandatory bidirectional links between affect detection structures in vision and audition.

Research paper thumbnail of The ventriloquist effect does not depend on the direction of automatic visual attention

Attention Perception & Psychophysics, 2001

Previously, we showed that the visual bias of auditory sound location, or ventriloquism, does not depend on the direction of deliberate, or endogenous, attention (Bertelson, Vroomen, de Gelder, & Driver, 2000). In the present study, a similar question concerning automatic, or exogenous, attention was examined. The experimental manipulation was based on the fact that exogenous visual attention can be attracted toward a singleton, that is, an item different on some dimension from all other items presented simultaneously. A display was used that consisted of a row of four bright squares with one square, in either the left- or the rightmost position, smaller than the others, serving as the singleton. In Experiment 1, subjects made dichotomous left-right judgments concerning sound bursts, whose successive locations were controlled by a psychophysical staircase procedure and which were presented in synchrony with a display with the singleton either left or right. Results showed that the apparent location of the sound was attracted not toward the singleton, but instead toward the big squares at the opposite end of the display. Experiment 2 was run to check that the singleton effectively attracted exogenous attention. The task was to discriminate target letters presented either on the singleton or on the opposite big square. Performance deteriorated when the target was on the big square opposite the singleton, in comparison with control trials with no singleton, thus showing that the singleton attracted attention away from the target location. In Experiment 3, localization and discrimination trials were mixed randomly so as to control for potential differences in subjects' strategies in the two preceding experiments. Results were as before, showing that the singleton attracted attention, whereas sound localization was shifted away from the singleton.
Ventriloquism can thus be dissociated from exogenous visual attention and appears to reflect sensory interactions with little role for the direction of visual spatial attention.
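The psychophysical staircase mentioned in Experiment 1 adaptively adjusts the stimulus toward the boundary between "left" and "right" responses. A hypothetical minimal sketch of a 1-up/1-down rule follows; the start value, step size, and simulated observer are invented for illustration, and the actual study may have used a different staircase rule:

```python
def staircase(respond, start=30.0, step=4.0, n_trials=40):
    """Simple 1-up/1-down adaptive staircase over sound azimuth (degrees).

    `respond(azimuth)` returns True for a 'right' judgment. The track
    converges on the azimuth where 'left' and 'right' are equally likely.
    """
    azimuth = start
    track = []
    for _ in range(n_trials):
        track.append(azimuth)
        # Step toward the subjective midline: move left after a 'right'
        # response, right after a 'left' response.
        azimuth += -step if respond(azimuth) else step
    return track

# Simulated observer whose subjective midline is shifted 8 deg to the right
shift = 8.0
track = staircase(lambda az: az > shift)
print(sum(track[-10:]) / 10)  # average of the final levels approaches the shift
```

In a real session, respond would be the subject's keypress; averaging the final oscillating levels estimates the point of subjective straight-ahead.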

Research paper thumbnail of Exploring the relation between McGurk interference and ventriloquism

Research paper thumbnail of The ventriloquist effect does not depend on the direction of deliberate visual attention

It is well known that discrepancies in the location of synchronized auditory and visual events can lead to mislocalizations of the auditory source, so-called ventriloquism. In two experiments, we tested whether such cross-modal influences on auditory localization depend on deliberate visual attention to the biasing visual event. In Experiment 1, subjects pointed to the apparent source of sounds in the presence or absence of a synchronous peripheral flash. They also monitored for target visual events, either at the location of the peripheral flash or in a central location. Auditory localization was attracted toward the synchronous peripheral flash, but this was unaffected by where deliberate visual attention was directed in the monitoring task. In Experiment 2, bilateral flashes were presented in synchrony with each sound, to provide competing visual attractors. When these visual events were equally salient on the two sides, auditory localization was unaffected by which side subjects monitored for visual targets. When one flash was larger than the other, auditory localization was slightly but reliably attracted toward it, but again regardless of where visual monitoring was required. We conclude that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.

Research paper thumbnail of Is cross-modal integration of emotional expressions independent of attentional resources

In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.

Research paper thumbnail of Face recognition and lip-reading in autism

European Journal of Cognitive Psychology, 1991

Research paper thumbnail of The Spatial Constraint in Intersensory Pairing: No Role in Temporal Ventriloquism

Journal of Experimental Psychology-human Perception and Performance, 2006

A sound presented in temporal proximity to a light can alter the perceived temporal occurrence of that light (temporal ventriloquism). The authors explored whether spatial discordance between the sound and light affects this phenomenon. Participants made temporal order judgments about which of 2 lights appeared first, while they heard sounds before the 1st and after the 2nd light. Sensitivity was higher (i.e., a lower just noticeable difference) when the sound-light interval was approximately 100 ms rather than approximately 0 ms. This temporal ventriloquist effect was unaffected by whether sounds came from the same or a different position as the lights, whether the sounds were static or moved, or whether they came from the same or opposite sides of fixation. Yet, discordant sounds interfered with speeded visual discrimination. These results challenge the view that intersensory interactions in general require spatial correspondence between the stimuli.

Research paper thumbnail of Ventriloquism in patients with unilateral visual neglect

Neuropsychologia, 2000

Can visual stimuli that go undetected, because they are presented in the extinguished region of neglect patients' visual field, nevertheless shift in their direction the apparent location of simultaneous sounds (the well-known 'ventriloquist effect')? This issue was examined using a situation in which each trial involved the simultaneous presentation of a tone over loudspeakers, together with a bright square area on either the left, the right or both sides of fixation. Participants were required to report the presence of squares, and indicate by hand pointing the apparent location of the tone. Five patients with left hemineglect consistently failed to detect the left square, either presented alone or together with another square on the right. Nevertheless, on bimodal trials with a single undetected square to the left, their sound localization was significantly shifted in the direction of that undetected square. By contrast, in bimodal trials with either a single square on the right or a square on each side, their sound localization showed only small and non-significant shifts. This particular result might be due to a combination of low discrimination of lateral sound deviations with variable individual strategies triggered by conscious detection of the right square. The important finding is the crossmodal bias produced by the undetected left visual distractors. It provides a new example of implicit processing of inputs affected by unilateral visual neglect, and on the other hand is consistent with earlier demonstrations of the automaticity of crossmodal bias.

Research paper thumbnail of Temporal Ventriloquism: Sound Modulates the Flash-Lag Effect

Journal of Experimental Psychology-human Perception and Performance, 2004

A sound presented in close temporal proximity to a visual stimulus can alter the perceived temporal dimensions of the visual stimulus (temporal ventriloquism). In this article, the authors demonstrate temporal ventriloquism in the flash-lag effect (FLE), a visual illusion in which a flash appears to lag relative to a moving object. In Experiment 1, the magnitude and the variability of the FLE were reduced, relative to a silent condition, when a noise burst was synchronized with the flash. In Experiment 2, the sound was presented before, at, or after the flash (±~100 ms), and the size of the FLE varied linearly with the delay of the sound. These findings demonstrate that an isolated sound can sharpen the temporal boundaries of a flash and attract its temporal occurrence.

Research paper thumbnail of The effects of alphabetic-reading competence on language representation in bilingual Chinese subjects

Psychological Research-psychologische Forschung, 1993

The metaphonological abilities of two groups of bilingual Chinese adults residing in the Netherlands were examined. All subjects were able to read Chinese logograms, but those in the alphabetic group had, unlike those in the non-alphabetic group, also acquired some competence in reading Dutch. In Experiment 1, strong, significant differences between the two groups were obtained in the task of deleting the initial consonant of a Dutch spoken pseudo-word and also in a task consisting of segmenting a sentence into progressively smaller fragments, but there was no difference in a rhyme-nonrhyme classification task with pairs of Dutch words. In the latter task, the subjects in the two groups performed at a near-ceiling level. In Experiment 2, a significant difference was obtained again for the consonant-deletion task and no difference with an initial syllabic-vowel-deletion task, but the non-alphabetic subjects performed at a significantly lower level than the alphabetic subjects in the rhyme-judgement task. Taken together, these results are consistent with the earlier evidence that learning a non-alphabetic orthography does not promote awareness of the segmental structure of utterances. On the other hand, they confirm, for a population of Chinese readers, the conclusion drawn earlier from work with illiterate subjects that explicit instruction is more critical for the development of segmental representations of language than of representations of higher levels such as those of rhymes and syllables.

Research paper thumbnail of The Roles of Word Stress and Vowel Harmony in Speech Segmentation

Journal of Memory and Language, 1998

Three experiments investigated the role of word stress and vowel harmony in speech segmentation. Finnish has fixed word stress on the initial syllable, and vowels from a front or back harmony set cannot co-occur within a word. In Experiment 1, we replicated earlier results showing that Finns use a mismatch in vowel harmony as a word boundary cue when the target-initial syllable is unstressed. Listeners found it easier to detect words such as HYmy in PUhymy (harmony mismatch) than in PYhymy (no harmony mismatch). In Experiment 2, words had stressed target-initial syllables (HYmy as in pyHYmy or puHYmy). Reaction times were now faster and the vowel harmony effect was greatly reduced. In Experiment 3, Finnish, Dutch, and French listeners learned to segment an artificial language. Performance was best when the phonological properties of the artificial language matched those of the native one. The Finns profited, as in the previous experiments, from vowel harmony and word-initial stress; the Dutch profited from word-initial stress; and the French profited from neither vowel harmony nor word-initial stress. Vowel disharmony and word-initial stress are thus language-specific cues to word boundaries.