Effects of reverberation and noise on speech comprehension by native and non-native English-speaking listeners

High second-language proficiency protects against the effects of reverberation on listening comprehension

Sörqvist, P., Hurtig, A., Ljung, R. & Rönnberg, J. (2014). High second-language proficiency protects against the effects of reverberation on listening comprehension. Scandinavian Journal of Psychology, 55, 91–96.

The purpose of this experiment was to investigate whether classroom reverberation influences second-language (L2) listening comprehension. Moreover, we investigated whether individual differences in baseline L2 proficiency and in working memory capacity (WMC) modulate the effect of reverberation time on L2 listening comprehension. The results showed that L2 listening comprehension decreased as reverberation time increased. Participants with higher baseline L2 proficiency were less susceptible to this effect. WMC was also related to the effect of reverberation (although just barely significant), but the effect of WMC was eliminated when baseline L2 proficiency was statistically controlled. Taken together, the results suggest that top-down cognitive capabilities support listening in adverse conditions. Potential implications for the Swedish national tests in English are discussed.
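
The phrase "eliminated when baseline L2 proficiency was statistically controlled" refers to adding the covariate to the regression model and seeing the WMC coefficient shrink toward zero. A minimal sketch of that analysis pattern, using synthetic (entirely hypothetical) data in which proficiency drives the outcome and WMC merely correlates with proficiency — the variable names and effect sizes are assumptions for illustration, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical data, for illustration only: proficiency drives resistance
# to reverberation; WMC correlates with proficiency but has no independent
# effect on the outcome.
proficiency = rng.normal(0.0, 1.0, n)
wmc = 0.7 * proficiency + 0.7 * rng.normal(0.0, 1.0, n)
reverb_effect = -1.0 + 0.8 * proficiency + 0.5 * rng.normal(0.0, 1.0, n)

def ols_coefs(predictors, y):
    """Least-squares regression coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Model 1: WMC alone looks predictive (it proxies for proficiency).
b_wmc_alone = ols_coefs([wmc], reverb_effect)[1]

# Model 2: with proficiency in the model, the WMC coefficient shrinks
# toward zero -- the sense in which an effect is "eliminated when
# statistically controlled".
b_wmc_controlled = ols_coefs([wmc, proficiency], reverb_effect)[1]

print(round(b_wmc_alone, 2), round(b_wmc_controlled, 2))
```

In this toy setup the WMC coefficient is sizeable on its own but near zero once proficiency is entered, mirroring the pattern the abstract reports.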

Steady-state suppression in reverberation: a comparison of native and nonnative speech perception

This study investigated whether the steady-state suppression method proposed by Arai et al. improved consonant identification for nonnative listeners in reverberation. It also compared the effect of steady-state suppression on consonant identification by native and nonnative listeners in reverberation. We used steady-state suppression as a preprocessing technique that processes speech signals before they are radiated from loudspeakers, in order to reduce the amount of overlap-masking. Participants were 24 native English speakers (native listeners) and 24 native Japanese speakers (nonnative listeners), all with normal hearing. A diotic Modified Rhyme Test was conducted with and without steady-state suppression for reverberation times of 0.4, 0.7 and 1.1 s and a nonreverberant condition. The results showed that native listeners performed better than nonnative listeners, and that the mean percentage of correct answers was higher for initial consonants than for final consonants. The results also showed that performance with processed and unprocessed speech was comparable for word-initial and word-final consonants. These findings indicate that the parameters of steady-state suppression would need adjustment to accommodate different speech materials and reverberant conditions. They also suggest that the difficulties nonnative listeners experience may not be due to the acoustic-phonetic information actually present in the signal.

Children's Recall of Words Spoken in Their First and Second Language: Effects of Signal-to-Noise Ratio and Reverberation Time

Speech perception runs smoothly and automatically when the background is silent, but when the speech signal is degraded by background noise or by reverberation, effortful cognitive processing is needed to compensate for the signal distortion. Previous research has typically investigated the effects of signal-to-noise ratio (SNR) and reverberation time in isolation, whilst few studies have looked at their interaction. In this study, we probed how reverberation time and SNR influence recall of words presented in participants' first (L1) and second (L2) language. A total of 72 children (10 years old) participated. The to-be-recalled wordlists were played back at two reverberation times (0.3 and 1.2 s) crossed with two SNRs (+3 dBA and +12 dBA). Children recalled fewer words when the spoken words were presented in L2 than when they were presented in L1. Words presented at the higher SNR (+12 dBA) were recalled better than words presented at the lower SNR (+3 dBA). Reverberation time interacted with SNR: at +12 dBA the shorter reverberation time improved recall, but at +3 dBA it impaired recall. The effects of the physical sound variables (SNR and reverberation time) did not interact with language.
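
SNR conditions like the +3 and +12 dBA ones above are typically imposed by scaling the noise relative to the speech level before mixing. A minimal sketch of that step — the signals here are random stand-ins, not the study's materials:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise ratio is `snr_db` dB, then mix.

    SNR in dB is 20*log10(rms_speech / rms_noise), so the required noise
    gain is rms_speech / (rms_noise * 10**(snr_db / 20)).
    """
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20))
    return speech + gain * noise

rng = np.random.default_rng(1)
speech = rng.normal(0.0, 1.0, 16000)  # stand-in for 1 s of speech at 16 kHz
noise = rng.normal(0.0, 0.3, 16000)   # stand-in for background noise

mixed_low = mix_at_snr(speech, noise, 3)    # the harder, +3 dB condition
mixed_high = mix_at_snr(speech, noise, 12)  # the easier, +12 dB condition
```

The same helper applies to any target SNR; only the relative level of the two signals matters, not their absolute scale.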

Effects of training, style, and rate of speaking on speech perception of young people in reverberation

Journal of the Acoustical Society of America, 2008

Because of the difficulty of listening to speech in reverberation (e.g., at train stations), we need to identify characteristics of intelligible speech that are appropriate for spoken announcements over loudspeakers in public spaces. This study investigated the effects of training (seven talkers, with or without formal speech training), speaking style (conversational/clear) and speaking rate (normal/slow) on speech perception by young people in simulated reverberant environments. The talkers were instructed to speak nonsense words embedded in a carrier sentence either clearly or normally in an anechoic room, and listening tests were carried out with young listeners in simulated reverberant environments. The results showed that correct rates differed significantly among the talkers, that no difference was found between the two speaking rates, and that conversational speech yielded significantly higher correct rates than clear speech. Informal inspection of the stimuli indicates that clear speech enhances vowels as well as consonants; the resulting increase in reverberant masking, relative to conversational speech, may explain why clear speech yielded lower correct rates.
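
A common way to build simulated reverberant environments like those above (not necessarily this study's exact method) is to convolve dry recordings with a room impulse response. A crude synthetic response is exponentially decaying white noise whose envelope falls 60 dB over the target reverberation time — a sketch, with the sampling rate and seed as arbitrary assumptions:

```python
import numpy as np

def synthetic_ir(rt60, fs=16000):
    """Exponentially decaying noise whose envelope falls 60 dB over rt60 s.

    A crude stand-in for a measured room impulse response: a 60 dB decay
    corresponds to an amplitude factor of 10**(-3) at t = rt60.
    """
    n = int(rt60 * fs)
    t = np.arange(n) / fs
    envelope = 10 ** (-3 * t / rt60)
    rng = np.random.default_rng(0)
    return envelope * rng.normal(0.0, 1.0, n)

def reverberate(dry, ir):
    """Apply the room response by convolution. Overlap-masking arises here:
    the decaying tail of each sound smears into the sounds that follow."""
    wet = np.convolve(dry, ir)
    return wet / np.max(np.abs(wet))  # normalize to avoid clipping

ir = synthetic_ir(rt60=1.2)          # a fairly reverberant room
rng = np.random.default_rng(1)
dry = rng.normal(0.0, 1.0, 16000)    # stand-in for 1 s of dry speech
wet = reverberate(dry, ir)
```

Measured impulse responses (or image-source/ray-tracing models) replace `synthetic_ir` in serious work; the convolution step is the same.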

Contribution of Low-Level Acoustic and Higher-Level Lexical-Semantic Cues to Speech Recognition in Noise and Reverberation

Frontiers in Built Environment, 2021

Masking noise and reverberation strongly influence speech intelligibility and decrease listening comfort. To optimize acoustics for ensuring a comfortable environment, it is crucial to understand the respective contribution of bottom-up signal-driven cues and top-down linguistic-semantic cues to speech recognition in noise and reverberation. Since the relevance of these cues differs across speech test materials and the training status of the listeners, we investigate the influence of speech material type on speech recognition in noise, reverberation, and combinations of noise and reverberation. We also examine the influence of training on performance for a subset of measurement conditions. Speech recognition is measured with an open-set, everyday Plomp-type sentence test and compared to the recognition scores for a closed-set Matrix-type test consisting of syntactically fixed and semantically unpredictable sentences (cf. data by Rennies et al., J. Acoust. Soc. Am., 2014, 136, 26...

The effect of the steady-state suppression on consonant identification by native and non-native listeners in reverberant environments

This study investigated whether the steady-state suppression proposed by Arai et al. (Proc. Autumn Meet. Acoust. Soc. Jpn., 2001; Acoust. Sci. Tech., 2002) improved consonant identification for non-native listeners in reverberation. This study also compared the effect of steady-state suppression on consonant identification by native and non-native listeners in reverberant environments. We used steady-state suppression as a pre-processing technique that processes speech signals before they are radiated from loudspeakers, in order to reduce the amount of overlap-masking. Participants were 24 native English speakers (native listeners) and 24 native Japanese speakers (non-native listeners), all with normal hearing. A diotic Modified Rhyme Test (MRT) was conducted under two processing conditions (with or without steady-state suppression) crossed with three reverberant conditions (reverberation times of 0.4, 0.7 and 1.1 s) and a dry condition. The results showed that native listeners performed better than non-native listeners in all conditions used in this study. Although there were no significant differences between unprocessed and steady-state-suppressed stimuli, and no significant interaction between steady-state suppression and listener group under the reverberant conditions used here, the effect of steady-state suppression differed with consonant position, reverberation time and listener group. These findings imply that a pre-processing technique is still needed that helps non-native listeners identify consonants as well as native listeners do.
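
The abstract does not give Arai et al.'s algorithm or parameters, so the following is only an illustrative sketch of the stated idea: detect slowly changing (steady-state) portions of the signal, e.g. sustained vowels, and attenuate them before playback so they excite less reverberant energy that would overlap-mask the sounds that follow. The frame size, change measure, threshold, and attenuation below are all assumptions:

```python
import numpy as np

def suppress_steady_state(x, frame=256, atten_db=-10.0, thresh=0.2):
    """Illustrative steady-state suppression sketch (parameters and change
    measure are assumptions, not Arai et al.'s published method).

    Frames whose magnitude spectrum changes little from the previous frame
    are treated as steady-state and attenuated by `atten_db`.
    """
    gain = 10 ** (atten_db / 20)
    out = x.astype(float).copy()
    prev = None
    for start in range(0, len(x) - frame + 1, frame):
        # Spectrum of the current frame, before any attenuation.
        spec = np.abs(np.fft.rfft(out[start:start + frame]))
        if prev is not None:
            change = np.linalg.norm(spec - prev) / (np.linalg.norm(prev) + 1e-12)
            if change < thresh:               # slowly varying -> steady state
                out[start:start + frame] *= gain
        prev = spec
    return out

# A sustained tone is maximally steady-state: every frame after the first
# should be attenuated, while transient-rich material would pass through.
tone = np.sin(2 * np.pi * 437.5 * np.arange(4096) / 16000)
processed = suppress_steady_state(tone)
```

A real implementation would use overlapping windows and smoothed gains to avoid frame-boundary artifacts; this sketch only shows the suppression logic.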

Effect of reverberation on speech intelligibility, logatom test “in situ”

2015

The acoustic parameters of a lecture hall are essential to the quality of reception and the understanding of spoken content. One of the basic acoustic parameters is the room's reverberation time. The study demonstrates that a room that is not properly designed acoustically causes difficulty in understanding the delivered text. Selected acoustic parameters were analyzed: reverberation time, acoustic background, and intelligibility of the delivered text. A possible solution was proposed, using reflecting and absorbing surfaces appropriately positioned in the room.
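
The link between the proposed remedy (absorbing surfaces) and reverberation time is captured by Sabine's classic formula, RT60 = 0.161 · V / A, where V is the room volume in m³ and A is the total absorption (sum of surface area times absorption coefficient). A sketch with hypothetical hall numbers, not taken from the study:

```python
def sabine_rt60(volume_m3, surfaces):
    """Sabine's reverberation-time estimate: RT60 = 0.161 * V / A.

    `surfaces` is a list of (area_m2, absorption_coefficient) pairs;
    A is the total absorption in m^2 sabins.
    """
    absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / absorption

# Hypothetical lecture hall (illustrative numbers only):
hall = [
    (200.0, 0.05),   # hard plaster walls
    (120.0, 0.02),   # concrete floor
    (120.0, 0.60),   # absorbing ceiling panels
]
rt60 = sabine_rt60(volume_m3=900.0, surfaces=hall)
```

Raising any surface's absorption coefficient increases A and therefore lowers RT60, which is exactly the lever the abstract's proposed solution pulls.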

The Effect of the Steady-State Suppression on Consonant Identification by Native and Non-Native Listeners in Reverberant Environments (International Workshop on Frontiers in Speech and Hearing Research)

IEICE Technical Report, Speech (SP), 2006
