Björn Lidestam | Linköping University

Papers by Björn Lidestam

Speechreading essentials: Signal, paralinguistic cues, and skill (AVSP 2001: International Conference on Auditory-Visual Speech Processing)

An overview of some results from our own studies on speechreading is presented. Focus is on paralinguistic cues that assist in perception of the spoken signal. We elaborate on how contextual, and especially emotional, cues are utilised in speechreading. We further discuss which skills are important in integrating paralinguistic cues and the linguistic signal in speechreading. As a basis for this overview, empirical data as well as hypotheses under scrutiny in a present design are discussed. Finally, some practical implications regarding synthetic faces as communicative aids are discussed: how the speech should be presented, how paralinguistic cues can be incorporated into the design, and how to match user and aid.

Visual Phonemic Ambiguity and Speechreading

Purpose: To study the role of visual perception of phonemes in visual perception of sentences and words among normal-hearing individuals. Method: Twenty-four normal-hearing adults identified consonants, words, and sentences, spoken by either a human or a synthetic talker. The synthetic talker was programmed with identical parameters within phoneme groups, hypothetically resulting in simplified articulation. Proportions of correctly identified phonemes per participant, condition, and task, as well as sensitivity to single consonants and clusters of consonants, were measured. Groups of mutually exclusive consonants were used for sensitivity analyses and hierarchical cluster analyses. Results: Consonant identification performance did not differ as a function of talker, nor did average sensitivity to single consonants. The bilabial and labiodental clusters were most readily identified and cohesive for both talkers. Word and sentence identification was better for the human talker than th ...

A, V, and AV discrimination of vowel duration

Discrimination of vowel duration was explored with regard to just-noticeable differences (JNDs), error bias, and effects of modality and consonant context. Ninety normal-hearing participants discriminated auditorily, visually, or audiovisually between pairs of stimuli differing in the duration of the vowel /a/. Duration differences varied in 24 steps: 12 with the first token longer and 12 with the second token longer (33–400 ms). Results: accuracy was lower for V than for A and AV; step difference affected performance in all modalities; error bias was affected by modality and consonant context; and JNDs (> 50% correct) could not be established.

Simulatorbaserad träning för utryckningsförare : En förstudie

The aim of the pilot study was to investigate the prerequisites for simulator-based training of emergency-response driving, and to initiate user-centred development of simulators for emergency-response driving. With the help ...

Hearing loss and a supportive tactile signal in a navigation system : effects on driving behavior and eye movements

An on-road study was conducted to evaluate the effects of a complementary tactile navigation signal on driving behaviour and eye movements for drivers with hearing loss (HL) compared to drivers with normal hearing (NH). Thirty-two participants (16 HL and 16 NH) performed two preprogrammed navigation tasks. In one, participants received only visual information; in the other, a vibration in the seat also guided them in the correct direction. SMI glasses were used for eye tracking, recording the point of gaze within the scene. Analysis was performed on predefined regions. A questionnaire examined participants' experience of the navigation systems. Hearing loss was associated with lower speed, higher satisfaction with the tactile signal, and more glances in the rear-view mirror. Additionally, tactile support led to less time spent viewing the navigation display.

Effects of Hearing Impairment and Cognitive Capacity on Gated Auditory Speech Perception in Elderly Hearing Aid Users and Elderly Normal-Hearing Individuals

This study compared elderly hearing aid (EHA) users and elderly normal-hearing (ENH) individuals on identification of auditory speech stimuli (consonants, words, and final words in sentences) that differed in their linguistic properties. We measured the accuracy with which the target speech stimuli were identified, as well as the isolation points (IPs: the shortest duration, from onset, required to correctly identify the speech target). The relationships between working memory capacity, IPs, and speech accuracy were also measured. Twenty-four EHA users (with mild to moderate hearing impairment) and 24 ENH individuals participated in the present study. Despite the use of their regular hearing aids, the EHA users had delayed IPs and were less accurate in identifying consonants and words compared with the ENH individuals. The EHA users also had delayed IPs for final-word identification in sentences with lower predictability; however, no significant between-group ...

The Impact of Noise on the Amount of Time Required for Correct Identification of Audiovisual Speech Tasks

The aim of this study was to investigate the degree to which audiovisual presentation impacts the isolation point (IP, the time required for the correct identification of speech stimuli) in silence and in noise, using a gating paradigm. The audiovisually gated stimuli (consonants, words, and final words in sentences) were presented to 24 (11 men, 33 women) university students. The results showed that noise delayed identification of consonants and words, but not final-word identification in highly and less predictable sentences.

Auditory, signal processing, and cognitive factors influencing speech perception in persons with hearing loss fitted with hearing aids – the N200 study

Objective: The aim of the current study was to assess aided speech-in-noise outcomes and relate those measures to auditory sensitivity and processing, different types of cognitive processing abilit ...

Bullerskydd med hastighetsdämpande egenskaper

The aim was to demonstrate that vertical markings placed at successively shorter intervals on roadside noise barriers can lower mean speed. The concept has the potential to be a cost-effective alternative o ...

Factors in speechreading of natural and synthetic faces

Hörselnedsättning, trafiksäkerhet och mobilitet : En enkätstudie

Objective: To examine how road users with different degrees of hearing loss experience safety and mobility in transport situations, compared to road users without hearing loss. Methods: The participan ...

Comparison of gated audiovisual speech perception between elderly hearing-aid users and elderly normal-hearing listeners

The addition of visual cues to auditory signals amplified by hearing aids resulted in better identification of speech stimuli relative to unaided audiovisual or aided auditory-only conditions (Walden et al., 2001). An important question that remains unexplored is whether hearing-aid users have the same level of audiovisual speech perception ability as their age-matched normal-hearing counterparts. Here we present preliminary findings from data collected from 18 elderly hearing-aid users and 18 normal-hearing listeners in gated audiovisual identification of different types of speech stimuli (consonants, words, and final words in low-predictability and high-predictability sentences). In terms of isolation point (IP; the shortest time from the onset of a speech stimulus required for correct identification of that stimulus), results showed that elderly hearing-aid users had longer IPs for identification of consonants and words than elderly normal-hearing individuals und ...

Trafikanter med hörselnedsättning : En enkätstudie

The importance of auditory input for road users is a fairly unexplored area. The studies that have been conducted have mainly concerned exclusion criteria for driving licences, and have concluded that hearing ...

Drivers with hearing loss and the design of driver support system. A simulator study

Utformning av syntetiska ansikten som kommunikationsstöd

Fonologiska och lexikala färdigheter samt arbetsminneskapacitet hos barn med Ushers syndrom typ 1 och cochleaimplantat

Phonological and lexical skills and working memory capacity in children with Usher syndrome type 1 and cochlear implants.

Effects of displayed emotion on attitude and impression formation in visual speech-reading

Scandinavian Journal of Psychology, 2002

In two experiments on visual speech-reading, with a total of 132 normal-hearing participants, the effects of displayed emotion and task specificity on speech-reading performance, attitude toward the task, and person impression were explored, as well as associations between speech-reading performance, attitude, and person impression. The results show that displayed emotion increased speech-reading performance and attitude ratings, and that the actor was perceived as more extraverted both when displaying emotions and when his speech was high in specificity. The main conclusion was that displayed emotion enhances speech-reading performance by providing information that is useful to the speechreader.

Visual Phonemic Ambiguity and Speechreading

Journal of Speech, Language, and Hearing Research, 2006

Purpose: To study the role of visual perception of phonemes in visual perception of sentences and words among normal-hearing individuals. Method: Twenty-four normal-hearing adults identified consonants, words, and sentences, spoken by either a human or a synthetic talker. The synthetic talker was programmed with identical parameters within phoneme groups, hypothetically resulting in simplified articulation. Proportions of correctly identified phonemes per participant, condition, and task, as well as sensitivity to single consonants and clusters of consonants, were measured. Groups of mutually exclusive consonants were used for sensitivity analyses and hierarchical cluster analyses. Results: Consonant identification performance did not differ as a function of talker, nor did average sensitivity to single consonants. The bilabial and labiodental clusters were most readily identified and cohesive for both talkers. Word and sentence identification was better for the human talker than the s ...

Gated audiovisual speech identification in silence vs. noise: effects on time and accuracy

Frontiers in Psychology, 2013

This study investigated the degree to which audiovisual presentation (compared to auditory-only presentation) affected isolation points (IPs, the amount of time required for the correct identification of speech stimuli using a gating paradigm) in silence and noise conditions. The study expanded on the findings of Moradi et al. (under revision), using the same stimuli, but presented audiovisually instead of auditorily only. The results showed that noise impeded the identification of consonants and words (i.e., delayed IPs and lowered accuracy), but not the identification of final words in sentences. In comparison with the previous study by Moradi et al., it can be concluded that the provision of visual cues expedited IPs and increased the accuracy of speech stimulus identification in both silence and noise. The implications of the results are discussed in terms of models for speech understanding.

The influence of hearing loss on transport safety and mobility

European Transport Research Review, 2012
