Sonja Kotz | Maastricht University

Papers by Sonja Kotz

Exercising during learning improves vocabulary acquisition: Behavioral and ERP evidence

Neuroscience Letters, 2010

Event-related potential responses to metric violations: rules versus meaning

NeuroReport, 2010

In stress-timed languages, the alternation of stressed and unstressed syllables (or 'meter') is an important formal and temporal cue to guide speech processing. Previous electroencephalography studies have shown that metric violations result in an early negative event-related potential. It is unclear whether this 'metric' negativity is an N400 elicited by misplaced stress or whether it responds to error detection. The aim of this study was to investigate the nature of the 'metric' negativity as a function of rule-based, predictive sequencing. Our results show that the negativity occurs independent of the lexical-semantic content. We therefore suggest that the metric negativity reflects a rule-based sequencing mechanism.

The role of attention in processing jabberwocky sentences: Revisiting the P600

Did you get the beat? Late proficient French-German learners extract strong–weak patterns in tonal but not in linguistic sequences

NeuroImage, 2011

Event-related potential (ERP) data in French and German have shown that metric violations (i.e. incorrectly stressed words) in a sentence elicit a P600. Furthermore, French speakers find it difficult to discriminate stimuli that vary in word stress position and have been labelled as "stress deaf." In the current study we investigated (i) whether French late learners of German can perceive deviations of a regular strong-weak stress pattern (trochee) in German sentences, and (ii) whether the same subjects differ in their electrophysiological response from German monolinguals in a non-linguistic "subjective rhythmization" paradigm. Irrespective of the native language both groups show similar results in the latter paradigm in which isochronous stimulus trains are subjectively converted into a binary strong-weak grouped percept (trochee). However, we report differences between native and non-native speakers of German in the sentence paradigm. In contrast to German native speakers French late learners of German fail to show a P600 component in response to deviations from a regular trochaic stress pattern, although attention was directed to the metric pattern of the sentences. The current data suggest that French stress deafness selectively affects the perception of a strong-weak pattern in sentences while strong-weak grouping of non-linguistic sequences is not language specific. The results imply that linguistic and non-linguistic grouping do not rely on the same neural mechanisms.

Temporal regularity effects on pre-attentive and attentive processing of deviance

Biological Psychology, 2011

Temporal regularity allows predicting the temporal locus of future information thereby potentially facilitating cognitive processing. We applied event-related brain potentials (ERPs) to investigate how temporal regularity impacts pre-attentive and attentive processing of deviance in the auditory modality. Participants listened to sequences of sinusoidal tones differing exclusively in pitch. The inter-stimulus interval (ISI) in these sequences was manipulated to convey either isochronous or random temporal structure. In the pre-attentive session, deviance processing was unaffected by the regularity manipulation as evidenced in three event-related-potentials (ERPs): mismatch negativity (MMN), P3a, and reorienting negativity (RON). In the attentive session, the P3b was smaller for deviant tones embedded in irregular temporal structure, while the N2b component remained unaffected. These findings confirm that temporal regularity can reinforce cognitive mechanisms associated with the attentive processing of deviance. Furthermore, they provide evidence for the dynamic allocation of attention in time and dissociable pre-attentive and attention-dependent temporal processing mechanisms.
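The ISI manipulation contrasted above can be sketched with synthetic tone-onset times. This is an illustrative sketch only: the 600 ms base interval and the jitter range are hypothetical values, not parameters reported in the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tones, base_isi = 10, 0.6  # hypothetical 600 ms base inter-stimulus interval

# Isochronous structure: identical ISIs, so every onset is fully predictable
iso_onsets = np.arange(n_tones) * base_isi

# Random structure: ISIs jittered around a similar mean, so onsets are unpredictable
jittered_isis = rng.uniform(0.4, 0.8, size=n_tones - 1)
rand_onsets = np.concatenate([[0.0], np.cumsum(jittered_isis)])

print(np.diff(iso_onsets).std())   # 0.0: perfectly regular timing
print(np.diff(rand_onsets).std())  # > 0: temporally irregular timing
```

The standard deviation of the inter-onset intervals is a simple way to verify that the two sequence types differ only in temporal structure, not in number of events.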

How relevant is social interaction in second language learning?

Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication, and more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for Autism, for example. However, studies on adult second language (L2) learning have been mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question as to whether social interaction should be considered as a critical factor impacting upon adult language learning still remains underspecified. Here, we review evidence in support of the view that sociality plays a significant role in communication and language learning, in an attempt to emphasize factors that could facilitate this process in adult language learning. We suggest that sociality should be considered as a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.

Positive emotion impedes emotional but not cognitive conflict processing

Cognitive control enables successful goal-directed behavior by resolving a conflict between opposing action tendencies, while emotional control arises as a consequence of emotional conflict processing such as in irony. While negative emotion facilitates both cognitive and emotional conflict processing, it is unclear how emotional conflict processing is affected by positive emotion (e.g., humor). In 2 EEG experiments, we investigated the role of positive audiovisual target stimuli in cognitive and emotional conflict processing. Participants categorized either spoken vowels (cognitive task) or their emotional valence (emotional task) and ignored the visual stimulus dimension. Behaviorally, a positive target showed no influence on cognitive conflict processing, but impeded emotional conflict processing. In the emotional task, response time conflict costs were higher for positive than for neutral targets. In the EEG, we observed an interaction of emotion by congruence in the P200 and N200 ERP components in emotional but not in cognitive conflict processing. In the emotional conflict task, the P200 and N200 conflict effect was larger for emotional than neutral targets. Thus, our results show that emotion affects conflict processing differently as a function of conflict type and emotional valence. This suggests that there are conflict- and valence-specific mechanisms modulating executive control.

efMRI Evidence for Implicit Emotional Prosodic Processing

The current efMRI experiment investigated the potential right hemisphere dominance of emotional prosodic processing under implicit task demands. Participants evaluated the relative tonal height (high, medium, low) of intelligible and unintelligible sentences spoken by a trained female speaker of German with three prosodic contours: happy, angry, and neutral. The results confirm the activation of a bilateral fronto-striato-temporal network with

Distraction by Emotional Sounds: Disentangling Arousal Benefits and Orienting Costs

Emotion (Washington, D.C.), Jan 8, 2015

Unexpectedly occurring task-irrelevant stimuli have been shown to impair performance. They capture attention away from the main task leaving fewer resources for target processing. However, the actual distraction effect depends on various variables; for example, only target-informative distractors have been shown to cause costs of attentional orienting. Furthermore, recent studies have shown that high arousing emotional distractors, as compared with low arousing neutral distractors, can improve performance by increasing alertness. We aimed to separate costs of attentional orienting and benefits of arousal by presenting negative and neutral environmental sounds (novels) as oddballs in an auditory-visual distraction paradigm. Participants categorized pictures while task-irrelevant sounds preceded visual targets in two conditions: (a) informative sounds reliably signaled onset and occurrence of visual targets, and (b) noninformative sounds occurred unrelated to visual targets. Results c...

Editorial: Current research and emerging directions on the cognitive and neural organization of speech processing

Frontiers in Human Neuroscience, 2015

Emotional Speech Perception Unfolding in Time: The Role of the Basal Ganglia

PLoS ONE, 2011

The basal ganglia (BG) have repeatedly been linked to emotional speech processing in studies involving patients with neurodegenerative and structural changes of the BG. However, the majority of previous studies did not consider that (i) emotional speech processing entails multiple processing steps, and the possibility that (ii) the BG may engage in one rather than the other of these processing steps. In the present study we investigate three different stages of emotional speech processing (emotional salience detection, meaning-related processing, and identification) in the same patient group to verify whether lesions to the BG affect these stages in a qualitatively different manner. Specifically, we explore early implicit emotional speech processing (probe verification) in an ERP experiment followed by an explicit behavioral emotional recognition task. In both experiments, participants listened to emotional sentences expressing one of four emotions (anger, fear, disgust, happiness) or neutral sentences. In line with previous evidence patients and healthy controls show differentiation of emotional and neutral sentences in the P200 component (emotional salience detection) and a following negative-going brain wave (meaning-related processing). However, the behavioral recognition (identification stage) of emotional sentences was impaired in BG patients, but not in healthy controls. The current data provide further support that the BG are involved in late, explicit rather than early emotional speech processing stages.

Pitch modulates lexical identification in spoken word recognition: ERP and behavioral evidence

Cognitive Brain Research, 2004

Event-related potentials (ERPs) were recorded in cross-modal word fragment priming (CMWP) to address the function of pitch for the identification of spoken words. In CMWP fragments of spoken words (e.g., re taken from Regal [Engl. shelves]) are immediately followed by visual targets. Together with reduced reaction times (RTs), an ERP deflection named P350 has been found to be reduced for targets, which match the primes (e.g., in the prime–target pair re–REGAL) as compared to unrelated targets (e.g., re–WIRBEL [Engl. burble]). The P350 has been related to facilitated lexical identification [Friedrich, Kotz, Friederici and Gunter (in press), ERPs reflect lexical identification in word fragment priming, JOCN]. In the present study, we presented syllable primes with different pitch contours. One version of each prime bore a stressed pitch contour (e.g., re_1), the other an unstressed pitch contour (e.g., re_2). Primes were combined with targets being either stressed on the first syllable (e.g., REgel [Engl. rule]) or on the second syllable (e.g., reGAL [Engl. shelves]). We found a reduced amplitude of the P350 and slightly faster reactions for targets with a stress pattern that matched the pitch of the primes (e.g., re_1–REgel) as compared to targets with a stress pattern that did not match the pitch of the primes (e.g., re_1–reGAL). The present study replicates the P350 effect with different material, and indicates that pitch is used for lexical identification in spoken word recognition. © 2004 Elsevier B.V. All rights reserved.

Rapid processing of emotional and voice information as evidenced by ERPs

Next to linguistic content, the human voice carries speaker identity information (e.g. female/male, young/old) and can also carry emotional information. Although various studies have started to specify the brain regions that underlie the different functions of human voice processing, few studies have aimed to specify the time course underlying these processes. By means of event-related potentials (ERPs) we aimed to determine the time-course of neural responses to emotional speech, speaker identification, and their interplay. While engaged in an implicit voice processing task (probe verification) participants listened to emotional sentences spoken by two female and two male speakers of two different ages (young and middle-aged). For all four speakers rapid emotional decoding was observed as emotional sentences could be differentiated from neutral sentences already within 200 ms after sentence onset (P200). However, results also imply that individual capacity to encode emotional expressions may have an influence on this early emotion detection as the P200 differentiation pattern (neutral vs. emotion) differed for each individual speaker.

The role of the ventral premotor cortex in sequencing linguistic information

Temporal Interaction of Emotional Prosody and Emotional Semantics: Evidence from ERPs

Emotional prosody carries information about the inner state of a speaker and therefore helps us to understand how other people feel. However, emotions are also transferred verbally. In order to further substantiate the underlying mechanisms of emotional prosodic processing we investigated the interaction of both emotional prosody and emotional semantics with event-related brain potentials (ERPs) utilizing a

ERPs reveal comparable syntactic sentence processing in native and non-native readers of English

L2 syntactic processing has been primarily investigated in the context of syntactic anomaly detection, but only sparsely with syntactic ambiguity. In the field of event-related potentials (ERPs) syntactic anomaly detection and syntactic ambiguity resolution is linked to the P600. The current ERP experiment examined L2 syntactic processing in highly proficient L1 Spanish-L2 English readers who had acquired English informally around

Lateralization of emotional prosody in the brain: an overview and synopsis on the impact of study design

Progress in Brain Research, 2006

Recently, research on the lateralization of linguistic and nonlinguistic (emotional) prosody has experienced a revival. However, both neuroimaging and patient evidence do not draw a coherent picture substantiating right-hemispheric lateralization of prosody and emotional prosody in particular. The current overview summarizes positions and data on the lateralization of emotion and emotional prosodic processing in the brain and proposes that: (1) the realization of emotional prosodic processing in the brain is based on differentially lateralized subprocesses and (2) methodological factors can influence the lateralization of emotional prosody in neuroimaging investigations. The latter evidence reveals that emotional valence effects are strongly right lateralized in studies using compact blocked presentation of emotional stimuli. In contrast, data obtained from event-related studies are indicative of bilateral or left-accented lateralization of emotional prosodic valence. These findings suggest a strong interaction between language and emotional prosodic processing.

Musical rhythm discrimination explains individual differences in grammar skills in children

Developmental Science, 2014

This study considered a relation between rhythm perception skills and individual differences in phonological awareness and grammar abilities, which are two language skills crucial for academic achievement. Twenty-five typically developing 6-year-old children were given standardized assessments of rhythm perception, phonological awareness, morpho-syntactic competence, and non-verbal cognitive ability. Rhythm perception accounted for 48% of the variance in morpho-syntactic competence after controlling for non-verbal IQ, socioeconomic status, and prior musical activities. Children with higher phonological awareness scores were better able to discriminate complex rhythms than children with lower scores, but not after controlling for IQ. This study is the first to show a relation between rhythm perception skills and morpho-syntactic production in children with typical language development. These findings extend the literature showing substantial overlap of neurocognitive resources for processing music and language. A video abstract of this article can be viewed at: http://youtu.be/_lO692qHDNg.
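The "variance accounted for after controlling for covariates" statistic above is typically a ΔR² from hierarchical regression: fit the covariates first, then add the predictor of interest and compare fits. A minimal sketch with synthetic data (the coefficients and noise level are invented for illustration; this is not the study's data or analysis code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 25  # matches the study's sample of 25 children; data here are synthetic

# Hypothetical covariates: non-verbal IQ, SES, prior musical activities
covariates = rng.normal(size=(n, 3))
rhythm = rng.normal(size=n)
# Synthetic outcome standing in for morpho-syntactic competence
grammar = covariates @ np.array([0.3, 0.2, 0.1]) + 0.8 * rhythm \
          + rng.normal(scale=0.5, size=n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_base = r_squared(covariates, grammar)                        # step 1: covariates only
r2_full = r_squared(np.column_stack([covariates, rhythm]), grammar)  # step 2: add rhythm
delta_r2 = r2_full - r2_base  # variance uniquely attributable to rhythm perception
print(f"ΔR² for rhythm perception: {delta_r2:.2f}")
```

Because the two models are nested, adding the rhythm predictor can only increase R², so ΔR² is non-negative; its size is what the 48% figure reports.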

On the role of attention for the processing of emotions in speech: sex differences revisited

Brain research. Cognitive brain research, 2005

In a previous cross-modal priming study [A. Schirmer, A.S. Kotz, A.D. Friederici, Sex differentiates the role of emotional prosody during word processing, Cogn. Brain Res. 14 (2002) 228-233.], we found that women integrated emotional prosody and word valence earlier than men. Both sexes showed a smaller N400 in the event-related potential to emotional words when these words were preceded by a sentence with congruous compared to incongruous emotional prosody. However, women showed this effect with a 200-ms interval between prime sentence and target word whereas men showed the effect with a 750-ms interval. The present study was designed to determine whether these sex differences prevail when attention is directed towards the emotional content of prosody and word meaning. To this end, we presented the same prime sentences and target words as in our previous study. Sentences were spoken with happy or sad prosody and followed by a congruous or incongruous emotional word or pseudoword. T...

Bilateral Speech Comprehension Reflects Differential Sensitivity to Spectral and Temporal Features

Journal of Neuroscience, 2008

Speech comprehension has been shown to be a strikingly bilateral process, but the differential contributions of the subfields of left and right auditory cortices have remained elusive. The hypothesis that left auditory areas engage predominantly in decoding fast temporal perturbations of a signal whereas the right areas are relatively more driven by changes of the frequency spectrum has not been directly tested in speech or music. This brain-imaging study independently manipulated the speech signal itself along the spectral and the temporal domain using noise-band vocoding. In a parametric design with five temporal and five spectral degradation levels in word comprehension, a functional distinction of the left and right auditory association cortices emerged: increases in the temporal detail of the signal were most effective in driving brain activation of the left anterolateral superior temporal sulcus (STS), whereas the right homolog areas exhibited stronger sensitivity to the variations in spectral detail. In accordance with behavioral measures of speech comprehension acquired in parallel, change of spectral detail exhibited a stronger coupling with the STS BOLD signal. The relative pattern of lateralization (quantified using lateralization quotients) proved reliable in a jack-knifed iterative reanalysis of the group functional magnetic resonance imaging model. This study supplies direct evidence to the often implied functional distinction of the two cerebral hemispheres in speech processing. Applying direct manipulations to the speech signal rather than to low-level surrogates, the results lend plausibility to the notion of complementary roles for the left and right superior temporal sulci in comprehending the speech signal.
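The abstract quantifies lateralization with "lateralization quotients" but does not spell out the formula; a widely used convention in neuroimaging is LQ = (L − R) / (L + R), which is assumed here purely for illustration, with invented BOLD parameter estimates:

```python
def lateralization_quotient(left, right):
    """Conventional LQ: +1 = fully left-lateralized, -1 = fully right-lateralized.
    The exact formula used in the study is not given in the abstract; this is
    the standard (L - R) / (L + R) definition, assumed for illustration."""
    return (left - right) / (left + right)

# Hypothetical BOLD parameter estimates (arbitrary units, not from the study)
lq_temporal = lateralization_quotient(left=2.4, right=1.1)  # > 0: left-dominant
lq_spectral = lateralization_quotient(left=1.0, right=2.2)  # < 0: right-dominant
print(round(lq_temporal, 3), round(lq_spectral, 3))
```

With non-negative activation magnitudes the quotient is bounded in [−1, 1], which makes it convenient for comparing lateralization across conditions or in a jack-knife reanalysis.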

Research paper thumbnail of Exercising during learning improves vocabulary acquisition: Behavioral and ERP evidence

Neuroscience Letters, 2010

Research paper thumbnail of Event-related potential responses to metric violations: rules versus meaning

NeuroReport, 2010

In stress-timed languages, the alternation of stressed and unstressed syllables (or &... more In stress-timed languages, the alternation of stressed and unstressed syllables (or 'meter') is an important formal and temporal cue to guide speech processing. Previous electroencephalography studies have shown that metric violations result in an early negative event-related potential. It is unclear whether this 'metric' negativity is an N400 elicited by misplaced stress or whether it responds to error detection. The aim of this study was to investigate the nature of the 'metric' negativity as a function of rule-based, predictive sequencing. Our results show that the negativity occurs independent of the lexical-semantic content. We therefore suggest that the metric negativity reflects a rule-based sequencing mechanism.

Research paper thumbnail of The role of attention in processing jabberwocky sentences: Revisiting the P600

Research paper thumbnail of Did you get the beat? Late proficient French-German learners extract strong–weak patterns in tonal but not in linguistic sequences

NeuroImage, 2011

Event-related potential (ERP) data in French and German have shown that metric violations (i.e. i... more Event-related potential (ERP) data in French and German have shown that metric violations (i.e. incorrectly stressed words) in a sentence elicit a P600. Furthermore, French speakers find it difficult to discriminate stimuli that vary in word stress position and have been labelled as "stress deaf." In the current study we investigated (i) whether French late learners of German can perceive deviations of a regular strong-weak stress pattern (trochee) in German sentences, and (ii) whether the same subjects differ in their electrophysiological response from German monolinguals in a non-linguistic "subjective rhythmization" paradigm. Irrespective of the native language both groups show similar results in the latter paradigm in which isochronous stimulus trains are subjectively converted into a binary strong-weak grouped percept (trochee). However, we report differences between native and non-native speakers of German in the sentence paradigm. In contrast to German native speakers French late learners of German fail to show a P600 component in response to deviations from a regular trochaic stress pattern, although attention was directed to the metric pattern of the sentences. The current data suggest that French stress deafness selectively affects the perception of a strong-weak pattern in sentences while strong-weak grouping of non-linguistic sequences is not language specific. The results imply that linguistic and non-linguistic grouping do not rely on the same neural mechanisms.

Research paper thumbnail of Temporal regularity effects on pre-attentive and attentive processing of deviance

Biological Psychology, 2011

Temporal regularity allows predicting the temporal locus of future information thereby potentiall... more Temporal regularity allows predicting the temporal locus of future information thereby potentially facilitating cognitive processing. We applied event-related brain potentials (ERPs) to investigate how temporal regularity impacts pre-attentive and attentive processing of deviance in the auditory modality. Participants listened to sequences of sinusoidal tones differing exclusively in pitch. The inter-stimulus interval (ISI) in these sequences was manipulated to convey either isochronous or random temporal structure. In the pre-attentive session, deviance processing was unaffected by the regularity manipulation as evidenced in three event-related-potentials (ERPs): mismatch negativity (MMN), P3a, and reorienting negativity (RON). In the attentive session, the P3b was smaller for deviant tones embedded in irregular temporal structure, while the N2b component remained unaffected. These findings confirm that temporal regularity can reinforce cognitive mechanisms associated with the attentive processing of deviance. Furthermore, they provide evidence for the dynamic allocation of attention in time and dissociable pre-attentive and attention-dependent temporal processing mechanisms.

Research paper thumbnail of How relevant is social interaction in second language learning?

Verbal language is the most widespread mode of human communication, and an intrinsically social a... more Verbal language is the most widespread mode of human communication, and an intrinsically social activity. This claim is strengthened by evidence emerging from different fields, which clearly indicates that social interaction influences human communication, and more specifically, language learning. Indeed, research conducted with infants and children shows that interaction with a caregiver is necessary to acquire language. Further evidence on the influence of sociality on language comes from social and linguistic pathologies, in which deficits in social and linguistic abilities are tightly intertwined, as is the case for Autism, for example. However, studies on adult second language (L2) learning have been mostly focused on individualistic approaches, partly because of methodological constraints, especially of imaging methods. The question as to whether social interaction should be considered as a critical factor impacting upon adult language learning still remains underspecified. Here, we review evidence in support of the view that sociality plays a significant role in communication and language learning, in an attempt to emphasize factors that could facilitate this process in adult language learning. We suggest that sociality should be considered as a potentially influential factor in adult language learning and that future studies in this domain should explicitly target this factor.

Research paper thumbnail of Positive emotion impedes emotional but not cognitive conflict processing

Cognitive control enables successful goal-directed behavior by resolving a conflict between oppos... more Cognitive control enables successful goal-directed behavior by resolving a conflict between opposing action ten- dencies, while emotional control arises as a consequence of emotional conflict processing such as in irony. While negative emotion facilitates both cognitive and emotional conflict pro- cessing, it is unclear how emotional conflict processing is affected by positive emotion (e.g., humor). In 2 EEG experi- ments, we investigated the role of positive audiovisual target stimuli in cognitive and emotional conflict processing. Participants categorized either spoken vowels (cognitive task) or their emotional valence (emotional task) and ignored the visu al stim ulus dimension. Behaviora lly, a positive target showed no influence on cognitive conflict processing, but impeded emotional conflict processing. In the emotional task, response time conflict costs were higher for positive than for neutral targets. In the EEG, we observed an interaction of emotion by congruence in the P200 and N200 ERP compo- nents in emotional but not in cognitive conflict processing. In the emotional conflict task, the P200 and N200 conflict effect was larger for emotional than neutral targets. Thus, our results show that emotion affects conflict processing differently as a function of conflict type and emotional valence. This suggests that there are conflict- and valence-specific mechanisms mod- ulating executive control.

Research paper thumbnail of efMRI Evidence for Implicit Emotional Prosodic Processing

The current efMRI experiment investigated the potential right hemisphere dominance of emotional p... more The current efMRI experiment investigated the potential right hemisphere dominance of emotional prosodic processing under implicit task demands. Participants evaluated the relative tonal height (high, medium, low) of intelligible and unintelligible sentences spoken by a trained female speaker of German with three prosodic contours: happy, angry, and neutral. The results confirm the activation of a bilateral fronto- striato-temporal network with

Research paper thumbnail of Distraction by Emotional Sounds: Disentangling Arousal Benefits and Orienting Costs

Emotion (Washington, D.C.), Jan 8, 2015

Unexpectedly occurring task-irrelevant stimuli have been shown to impair performance. They captur... more Unexpectedly occurring task-irrelevant stimuli have been shown to impair performance. They capture attention away from the main task leaving fewer resources for target processing. However, the actual distraction effect depends on various variables; for example, only target-informative distractors have been shown to cause costs of attentional orienting. Furthermore, recent studies have shown that high arousing emotional distractors, as compared with low arousing neutral distractors, can improve performance by increasing alertness. We aimed to separate costs of attentional orienting and benefits of arousal by presenting negative and neutral environmental sounds (novels) as oddballs in an auditory-visual distraction paradigm. Participants categorized pictures while task-irrelevant sounds preceded visual targets in two conditions: (a) informative sounds reliably signaled onset and occurrence of visual targets, and (b) noninformative sounds occurred unrelated to visual targets. Results c...

Research paper thumbnail of Editorial: Current research and emerging directions on the cognitive and neural organization of speech processing

Frontiers in Human Neuroscience, 2015

Research paper thumbnail of Emotional Speech Perception Unfolding in Time: The Role of the Basal Ganglia

PLoS ONE, 2011

The basal ganglia (BG) have repeatedly been linked to emotional speech processing in studies involving patients with neurodegenerative and structural changes of the BG. However, the majority of previous studies did not consider that (i) emotional speech processing entails multiple processing steps, and the possibility that (ii) the BG may engage in one rather than the other of these processing steps. In the present study we investigate three different stages of emotional speech processing (emotional salience detection, meaning-related processing, and identification) in the same patient group to verify whether lesions to the BG affect these stages in a qualitatively different manner. Specifically, we explore early implicit emotional speech processing (probe verification) in an ERP experiment followed by an explicit behavioral emotion recognition task. In both experiments, participants listened to emotional sentences expressing one of four emotions (anger, fear, disgust, happiness) or neutral sentences. In line with previous evidence, patients and healthy controls show differentiation of emotional and neutral sentences in the P200 component (emotional salience detection) and a following negative-going brain wave (meaning-related processing). However, the behavioral recognition (identification stage) of emotional sentences was impaired in BG patients, but not in healthy controls. The current data provide further support that the BG are involved in late, explicit rather than early emotional speech processing stages.

Research paper thumbnail of Pitch modulates lexical identification in spoken word recognition: ERP and behavioral evidence

Cognitive Brain Research, 2004

Event-related potentials (ERPs) were recorded in cross-modal word fragment priming (CMWP) to address the function of pitch for the identification of spoken words. In CMWP, fragments of spoken words (e.g., re taken from Regal [Engl. shelves]) are immediately followed by visual targets. Together with reduced reaction times (RTs), an ERP deflection named P350 has been found to be reduced for targets that match the primes (e.g., in the prime–target pair re–REGAL) as compared to unrelated targets (e.g., re–WIRBEL [Engl. burble]). The P350 has been related to facilitated lexical identification [Friedrich, Kotz, Friederici and Gunter (in press), ERPs reflect lexical identification in word fragment priming, JOCN]. In the present study, we presented syllable primes with different pitch contours. One version of each prime bore a stressed pitch contour (e.g., re_1), the other an unstressed pitch contour (e.g., re_2). Primes were combined with targets that were either stressed on the first syllable (e.g., REgel [Engl. rule]) or on the second syllable (e.g., reGAL [Engl. shelves]). We found a reduced amplitude of the P350 and slightly faster reactions for targets with a stress pattern that matched the pitch of the primes (e.g., re_1–REgel) as compared to targets with a stress pattern that did not match the pitch of the primes (e.g., re_1–reGAL). The present study replicates the P350 effect with different material and indicates that pitch is used for lexical identification in spoken word recognition.

Research paper thumbnail of Rapid processing of emotional and voice information as evidenced by ERPs

Next to linguistic content, the human voice carries speaker identity information (e.g. female/male, young/old) and can also carry emotional information. Although various studies have started to specify the brain regions that underlie the different functions of human voice processing, few studies have aimed to specify the time course underlying these processes. By means of event-related potentials (ERPs) we aimed to determine the time course of neural responses to emotional speech, speaker identification, and their interplay. While engaged in an implicit voice processing task (probe verification), participants listened to emotional sentences spoken by two female and two male speakers of two different ages (young and middle-aged). For all four speakers rapid emotional decoding was observed, as emotional sentences could be differentiated from neutral sentences already within 200 ms after sentence onset (P200). However, results also imply that individual capacity to encode emotional expressions may have an influence on this early emotion detection, as the P200 differentiation pattern (neutral vs. emotion) differed for each individual speaker.

Research paper thumbnail of The role of the ventral premotor cortex in sequencing linguistic information

Research paper thumbnail of Temporal Interaction of Emotional Prosody and Emotional Semantics: Evidence from ERPs

Emotional prosody carries information about the inner state of a speaker and therefore helps us to understand how other people feel. However, emotions are also transferred verbally. In order to further substantiate the underlying mechanisms of emotional prosodic processing, we investigated the interaction of both emotional prosody and emotional semantics with event-related brain potentials (ERPs) utilizing a ...

Research paper thumbnail of ERPs reveal comparable syntactic sentence processing in native and non-native readers of English

L2 syntactic processing has been primarily investigated in the context of syntactic anomaly detection, but only sparsely with syntactic ambiguity. In the field of event-related potentials (ERPs), syntactic anomaly detection and syntactic ambiguity resolution are linked to the P600. The current ERP experiment examined L2 syntactic processing in highly proficient L1 Spanish-L2 English readers who had acquired English informally around ...

Research paper thumbnail of Lateralization of emotional prosody in the brain: an overview and synopsis on the impact of study design

Progress in Brain Research, 2006

Recently, research on the lateralization of linguistic and nonlinguistic (emotional) prosody has experienced a revival. However, neither neuroimaging nor patient evidence draws a coherent picture substantiating right-hemispheric lateralization of prosody, and of emotional prosody in particular. The current overview summarizes positions and data on the lateralization of emotion and emotional prosodic processing in the brain and proposes that: (1) the realization of emotional prosodic processing in the brain is based on differentially lateralized subprocesses, and (2) methodological factors can influence the lateralization of emotional prosody in neuroimaging investigations. The latter evidence reveals that emotional valence effects are strongly right-lateralized in studies using compact blocked presentation of emotional stimuli. In contrast, data obtained from event-related studies are indicative of bilateral or left-accented lateralization of emotional prosodic valence. These findings suggest a strong interaction between language and emotional prosodic processing.

Research paper thumbnail of Musical rhythm discrimination explains individual differences in grammar skills in children

Developmental Science, 2014

This study considered a relation between rhythm perception skills and individual differences in phonological awareness and grammar abilities, which are two language skills crucial for academic achievement. Twenty-five typically developing 6-year-old children were given standardized assessments of rhythm perception, phonological awareness, morpho-syntactic competence, and non-verbal cognitive ability. Rhythm perception accounted for 48% of the variance in morpho-syntactic competence after controlling for non-verbal IQ, socioeconomic status, and prior musical activities. Children with higher phonological awareness scores were better able to discriminate complex rhythms than children with lower scores, but not after controlling for IQ. This study is the first to show a relation between rhythm perception skills and morpho-syntactic production in children with typical language development. These findings extend the literature showing substantial overlap of neurocognitive resources for processing music and language. A video abstract of this article can be viewed at: http://youtu.be/_lO692qHDNg.
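The "accounted for 48% of the variance after controlling for …" claim reflects a hierarchical regression: covariates are entered first, then rhythm perception, and the increment in R² is attributed to rhythm. A minimal sketch of that logic follows; all variable names and simulated values are hypothetical illustrations, not the study's data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 25  # sample size matching the study's 25 children

# Simulated covariates and predictor (hypothetical values)
iq = rng.normal(100, 15, n)      # non-verbal IQ
ses = rng.normal(0, 1, n)        # socioeconomic status
music = rng.normal(0, 1, n)      # prior musical activities
rhythm = rng.normal(0, 1, n)     # rhythm perception score

# Simulated outcome with a genuine rhythm contribution
grammar = 0.2 * iq + 0.5 * ses + 0.8 * rhythm + rng.normal(0, 1, n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Step 1: covariates only; Step 2: covariates plus rhythm perception
r2_base = r_squared(np.column_stack([iq, ses, music]), grammar)
r2_full = r_squared(np.column_stack([iq, ses, music, rhythm]), grammar)
delta_r2 = r2_full - r2_base  # variance uniquely explained by rhythm
```

Because adding a predictor can never decrease R² in OLS, `delta_r2` is non-negative by construction; in the study this increment was the reported 48%.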

Research paper thumbnail of On the role of attention for the processing of emotions in speech: sex differences revisited

Brain research. Cognitive brain research, 2005

In a previous cross-modal priming study [A. Schirmer, A.S. Kotz, A.D. Friederici, Sex differentiates the role of emotional prosody during word processing, Cogn. Brain Res. 14 (2002) 228-233], we found that women integrated emotional prosody and word valence earlier than men. Both sexes showed a smaller N400 in the event-related potential to emotional words when these words were preceded by a sentence with congruous compared to incongruous emotional prosody. However, women showed this effect with a 200-ms interval between prime sentence and target word, whereas men showed the effect with a 750-ms interval. The present study was designed to determine whether these sex differences prevail when attention is directed towards the emotional content of prosody and word meaning. To this end, we presented the same prime sentences and target words as in our previous study. Sentences were spoken with happy or sad prosody and followed by a congruous or incongruous emotional word or pseudoword. T...

Research paper thumbnail of Bilateral Speech Comprehension Reflects Differential Sensitivity to Spectral and Temporal Features

Journal of Neuroscience, 2008

Speech comprehension has been shown to be a strikingly bilateral process, but the differential contributions of the subfields of left and right auditory cortices have remained elusive. The hypothesis that left auditory areas engage predominantly in decoding fast temporal perturbations of a signal whereas the right areas are relatively more driven by changes of the frequency spectrum has not been directly tested in speech or music. This brain-imaging study independently manipulated the speech signal itself along the spectral and the temporal domain using noise-band vocoding. In a parametric design with five temporal and five spectral degradation levels in word comprehension, a functional distinction of the left and right auditory association cortices emerged: increases in the temporal detail of the signal were most effective in driving brain activation of the left anterolateral superior temporal sulcus (STS), whereas the right homolog areas exhibited stronger sensitivity to the variations in spectral detail. In accordance with behavioral measures of speech comprehension acquired in parallel, change of spectral detail exhibited a stronger coupling with the STS BOLD signal. The relative pattern of lateralization (quantified using lateralization quotients) proved reliable in a jack-knifed iterative reanalysis of the group functional magnetic resonance imaging model. This study supplies direct evidence to the often implied functional distinction of the two cerebral hemispheres in speech processing. Applying direct manipulations to the speech signal rather than to low-level surrogates, the results lend plausibility to the notion of complementary roles for the left and right superior temporal sulci in comprehending the speech signal.
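The lateralization quotients used to quantify hemispheric asymmetry are conventionally computed as (L − R)/(L + R) over left- and right-hemisphere activation measures. The abstract does not spell out the exact formula or inputs, so the sketch below is only an illustration of the conventional index, with hypothetical activation values.

```python
def lateralization_quotient(left, right):
    """Conventional laterality index over hemispheric activation measures:
    +1 = fully left-lateralized, -1 = fully right-lateralized, 0 = bilateral."""
    return (left - right) / (left + right)

# Hypothetical BOLD responses (arbitrary units) for left vs. right STS
lq_temporal = lateralization_quotient(3.0, 1.0)  # left-dominant condition
lq_spectral = lateralization_quotient(1.0, 3.0)  # right-dominant condition
```

On this convention, a positive quotient for the temporal-detail manipulation and a negative one for the spectral-detail manipulation would express the complementary left/right sensitivity the study reports.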