The effect of viewing speech on auditory speech processing is different in the left and right hemispheres

Activation of human auditory cortex during speech perception: Effects of monaural, binaural, and dichotic presentation

Neuropsychologia, 2008

We used a rapid event-related functional magnetic resonance imaging (fMRI) paradigm to compare cortical activation to speech tokens presented monaurally to each ear, binaurally, and dichotically. Two forms of dichotic condition were examined: one presented consonant-vowel (CV) syllables simultaneously to each ear, while the other paired a CV syllable with a non-speech stimulus (band-limited noise). Right-handed adults were asked to respond differentially to serially presented target and distractor CV syllables. Activations were localized with reference to anatomic segmentation algorithms that allowed us to distinguish between activity in primary (PAC) and non-primary auditory cortex (NPAC). Monaural CV syllables presented to the right ear (CVR) produced highly asymmetric activations in left PAC and NPAC. A similar but reduced left-hemisphere (LH) bias was evident in binaural presentation, when monaural syllables were paired with contra-aural noise, and in dichotic CV-CV presentations. However, LH activation was twice as large to CVR as to any other condition, while RH activation to CVR was insubstantial. By contrast, a small rightward asymmetry in PAC activation was observed with monaural left-ear (CVL) presentation. In all conditions except CVL, the magnitude of response favored left PAC and NPAC. CV processing across different listening conditions disclosed complex interactions in activation. Our results confirm the superiority of left NPAC in speech processing and suggest comparable left lateralization in PAC. The findings suggest that monaural CV presentation may be more useful than previously anticipated. The paradigm developed here may hold some promise for investigations where an abnormal hemispheric balance of speech processing is suspected.

Two left-hemisphere mechanisms in speech perception

Perception & Psychophysics, 1974

Right-ear advantages of different magnitudes occur systematically in dichotic listening for different phoneme classes and for certain phonemes according to their syllabic position. Such differences cannot be accounted for in terms of a single mechanism unique to the left hemisphere. Instead, at least two mechanisms are needed. One such device appears to be involved in the auditory analysis of transitions and other aspects of the speech signal. This device appears to be engaged for speech and nonspeech sounds alike. The other mechanism, the more accustomed "speech processor," appears to make all phonetic decisions in identifying the stimulus.

Analysis of speech sounds is left-hemisphere predominant at 100–150 ms after sound onset

NeuroReport, 1999

Hemispheric specialization of human speech processing has been found in brain imaging studies using fMRI and PET. Due to the restricted time resolution, these methods cannot, however, determine the stage of auditory processing at which this specialization first emerges. We used a dense electrode array covering the whole scalp to record the mismatch negativity (MMN), an event-related brain potential (ERP) automatically elicited by occasional changes in sounds, which ranged from non-phonetic (tones) to phonetic (vowels). MMN can be used to probe auditory central processing on a millisecond scale with no attention-dependent task requirements. Our results indicate that speech processing occurs predominantly in the left hemisphere at the early, pre-attentive level of auditory analysis.

Processing of changes in visual speech in the human auditory cortex

Cognitive Brain Research, 2002

Seeing a talker's articulatory gestures may affect the observer's auditory speech percept. Observing congruent articulatory gestures may enhance the recognition of speech sounds [J. Acoust. Soc. Am. 26 (1954) 212], whereas observing incongruent gestures may change the auditory percept phonetically, as occurs in the McGurk effect [Nature 264]. For example, simultaneous acoustic /ba/ and visual /ga/ are usually heard as /da/. We studied cortical processing of occasional changes in audiovisual and visual speech stimuli with magnetoencephalography. In the audiovisual experiment, congruent (acoustic /iti/, visual /iti/) and incongruent (acoustic /ipi/, visual /iti/) audiovisual stimuli, which were both perceived as /iti/, were presented among congruent /ipi/ (acoustic /ipi/, visual /ipi/) stimuli. In the visual experiment, only the visual components of these stimuli were presented. A visual change in both the audiovisual and visual experiments activated supratemporal auditory cortices bilaterally. The auditory cortex activation to a visual change occurred later in the visual than in the audiovisual experiment, suggesting that interaction between modalities accelerates the detection of visual change in speech.

Hemispheric contributions to the integration of visual and auditory information in speech perception

Perception & Psychophysics, 1994

Differential hemispheric contributions to the perceptual phenomenon known as the McGurk effect were examined in normal subjects, 1 callosotomy patient, and 4 patients with intractable epilepsy. Twenty-five right-handed subjects were more likely to demonstrate an influence of a mouthed word on identification of a dubbed acoustic word when the speaker's face was lateralized to the LVF as compared with the RVF. In contrast, display of printed response alternatives in the RVF elicited a greater percentage of McGurk responses than display in the LVF. Visual field differences were absent in a group of 15 left-handed subjects. These results suggest that in right-handers, the two hemispheres may make distinct contributions to the McGurk effect. The callosotomy patient demonstrated reliable McGurk effects, but at a lower rate than the normal subjects and the epileptic control subjects. These data support the view that both the right and left hemispheres can make significant contributions to the McGurk effect.

Cross-modal Interactions during Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study

Journal of Cognitive Neuroscience, 2011

During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream—prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259–274, 2009]. Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual–phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables—disambiguated to /pa/ or /ta/ by the visual channel (speaking face)—served as test materials, concomitant with various control conditions (nonspeech AV signals, visual-only ...

Do temporal processes underlie left hemisphere dominance in speech perception?

Brain and Language, 2013

It is not unusual to find it stated as fact that the left hemisphere is specialized for processing rapid, or temporal, aspects of sound, and that left-hemisphere dominance in speech perception may be a consequence of this specialization. In this review we explore the history of this claim and assess the weight of evidence behind it. We demonstrate that, rather than the left temporal lobe showing a supposed sensitivity to the acoustic properties of speech, it is the right temporal lobe that shows a marked preference for certain properties of sounds, for example longer durations or variations in pitch. We finish by outlining some alternative factors that contribute to the left lateralization of speech perception.

The right hemisphere is highlighted in connected natural speech production and perception

Current understanding of the cortical mechanisms of speech perception and production stems mostly from studies that focus on single words or sentences. However, it has been suggested that processing of real-life connected speech may rely on additional cortical mechanisms. In the present study, we examined the neural substrates of natural speech production and perception with magnetoencephalography by modulating three central features related to speech: amount of linguistic content, speaking rate and social relevance. The amount of linguistic content was modulated by contrasting natural speech production and perception to speech-like non-linguistic tasks. Meaningful speech was produced and perceived at three speaking rates: normal, slow and fast. Social relevance was probed by having participants attend to speech produced by themselves and an unknown person. These speech-related features were each associated with distinct spatiospectral modulation patterns that involved cortical regions in both hemispheres. Natural speech processing markedly engaged the right hemisphere in addition to the left. In particular, the right temporo-parietal junction, previously linked to attentional processes and social cognition, was highlighted in the task modulations. The present findings suggest that its functional role extends to active generation and perception of meaningful, socially relevant speech.