Björn Lidestam - Academia.edu
Papers by Björn Lidestam
18th International Conference Road Safety on Five Continents (RS5C 2018), Jeju Island, South Korea, May 16-18, 2018
AVSP, 2008
Discrimination of vowel duration was explored with regard to JNDs, error bias, and effects of modality and consonant context. Ninety normal-hearing participants discriminated either auditorily, visually, or audiovisually between pairs of stimuli differing in the duration of the vowel /a/. Duration differences varied in 24 steps: 12 with the first token longer and 12 with the second token longer (33-400 ms). Results: accuracy was lower for V than for A and AV; step difference affected performance in all modalities; error bias was affected by modality and consonant context; and JNDs (> 50% correct) could not be established.
VTI NOTAT, 2019
The aim was to demonstrate that progressively denser spacing of vertical markings on roadside noise barriers can lower mean driving speed. The concept has the potential to constitute a cost-effective alternative or ...
Transportation Research Part F: Traffic Psychology and Behaviour, Aug 1, 2019
Two experiments were carried out to test how speed perception depends on field of view (FoV), virtual road markings (VRMs), and presentation order. The primary purpose was to examine how the extent of the optic flow (foremost peripherally-vertically) informs the driver about egospeed. A second purpose was to examine different task demands and stimulus characteristics supporting rhythm-based versus energy-based processing. A third purpose was to examine speed changes indicative of changes in motion sensitivity. Participants were tested in a car simulator with an FoV resembling low front-door windows and with VRMs inside the car. Three main results were found. Larger FoV, both horizontally and peripherally-vertically, significantly reduced participants' speed, as did VRMs. Delineator posts and road center lines were used for participants' rhythm-based processing when the task was to drive at target speeds. Rich motion-flow cues presented initially resulted in lower egospeed in subsequent conditions with relatively fewer motion-flow cues. The practical implication is that non-iconic, naturalistic, and intuitive interfaces can effectively instill spontaneous speed adaptation in drivers.
Transportation Research Part F: Traffic Psychology and Behaviour, Nov 1, 2013
The aim was to compare the effect of cognitive workload in individuals with and without hearing loss in driving situations of varying degrees of complexity. Methods: 24 participants with moderate hearing loss (HL) and 24 with normal hearing (NH) experienced three different driving conditions: Baseline driving; Critical events with a need to act fast; and a Parked car event with the possibility to adapt the workload to the situation. Additionally, a Secondary task (observing and recalling 4 visually displayed letters) was present during the drive, with two levels of difficulty in terms of load on the phonological loop. A tactile signal, presented by means of a vibration in the seat, was used to announce the Secondary task and was thereby simultaneously evaluated in terms of effectiveness when calling for driver attention. Objective driver behavior measures (M and SD of driving speed, M and SD of lateral position, time to line crossing) were accompanied by subjective ratings during and after the test drive. Results: HL had no effect on driving behavior at Baseline driving, where no events occurred. Both during the Secondary task and at the Parked car event, HL was associated with decreased mean driving speed compared to baseline driving. The effect of HL on Secondary task performance, both at Baseline driving and at the lower difficulty level at Critical events, was more skipped letters and fewer correctly recalled letters. At Critical events, task difficulty affected participants with HL more. Participants were generally positive toward seat vibrations as a means of announcing the Secondary task. Conclusions: Differences in driving behavior and task performance related to HL appear when the driving complexity exceeds Baseline driving, whether in the driving task, the Secondary task, or a combination of both. This leads to more cautious driving behavior, with decreased mean driving speed and less focus on the Secondary task, which could be a way of compensating for the increased driving complexity. Seat vibration was found to be a feasible way to alert drivers with or without HL.
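One of the objective measures above, time to line crossing (TLC), is often approximated to first order as the remaining lateral distance to the lane boundary divided by the current lateral velocity. The sketch below illustrates only that textbook approximation, not the paper's actual computation; the function name and sample data are invented.

```python
import numpy as np

def time_to_line_crossing(lateral_pos, lane_half_width, dt):
    """First-order TLC: remaining distance to the lane boundary the car
    is drifting toward, divided by current lateral speed. Positions are
    offsets from the lane center (m); returns np.inf when the car is
    holding its lateral position."""
    lateral_pos = np.asarray(lateral_pos, dtype=float)
    lat_vel = np.gradient(lateral_pos, dt)  # lateral velocity, m/s
    dist_to_edge = lane_half_width - np.sign(lat_vel) * lateral_pos
    with np.errstate(divide="ignore", invalid="ignore"):
        tlc = dist_to_edge / np.abs(lat_vel)
    return np.where(np.abs(lat_vel) > 1e-6, tlc, np.inf)

# Hypothetical 10 Hz samples: a slow drift toward the right lane line.
print(time_to_line_crossing([0.00, 0.05, 0.11, 0.18, 0.26],
                            lane_half_width=1.75, dt=0.1))
```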
Ear and Hearing, Apr 1, 2005
This case study tested the threshold hypothesis (Rönnberg et al., 1998), which states that superior speechreading skill is possible only if high-order cognitive functions, such as capacious verbal working memory, enable efficient strategies. A speechreading expert (AA) was tested on a number of speechreading and cognitive tasks and compared with control groups (z scores). Sentence-based speechreading tests, a word-decoding test, and a phoneme identification task were used to assess speechreading skill at different analytical levels. The cognitive test battery included tasks of working memory (e.g., reading span), inference-making, phonological processing (e.g., rhyme judgment), and central-executive functions (verbal fluency, Stroop task). Contrary to previous cases of extreme speechreading skill, AA excels on both low-order (phoneme identification: z = +2.83) and high-order (sentence-based: z = +8.12; word-decoding: z = +4.21) speechreading tasks. AA does not display superior verbal inference-making ability (sentence-completion task: z = -0.36), nor does he possess a superior working memory (reading span: z = +0.80). However, AA outperforms the controls on two measures of executive retrieval functions: the semantic (z = +3.77) and phonological (z = +3.55) verbal fluency tasks. This performance profile is inconsistent with the threshold hypothesis: extreme speechreading accuracy can be obtained in ways other than via well-developed high-order cognitive functions. It is suggested that AA's extreme speechreading skill, which capitalizes on low-order functions in combination with efficient central-executive functions, is due to early onset of hearing impairment.
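The z scores reported for AA standardize his score against each control group's mean and standard deviation. A minimal sketch of that comparison, with invented numbers rather than the study's data:

```python
from statistics import mean, stdev

def case_z(case_score, control_scores):
    """z score of a single case relative to a control sample."""
    return (case_score - mean(control_scores)) / stdev(control_scores)

# Invented sentence-based speechreading scores (% correct).
controls = [31, 28, 35, 40, 26, 33, 29, 37, 30, 34]
print(round(case_z(69, controls), 2))  # a case far above the control mean
```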
Journal of Occupational and Environmental Medicine, Jun 12, 2023
Transportation Research Record, Nov 30, 2022
Many European train drivers face major changes in their work with the introduction of the new train-protection system, the European Rail Traffic Management System (ERTMS), as information retrieval shifts from outside the cab to in-cab and a new rulebook is introduced. Therefore, many train drivers have to be educated in a short time to make the transition safe and efficient. The purpose was to find out how successful ERTMS practice can be designed in a physically low-fidelity but highly functional train-driving simulator. An experimental design was used, with 16 drivers divided into two groups: one group practiced in a simulator, the other in reality. Standard training methodology was used, and the learning outcome was assessed both by measuring driving errors and via instructor evaluation of a simulator test. The drivers also filled in a questionnaire to capture how different factors, such as repeated practice, experience, and self-estimated confidence, correlate with performance. Results show that the simulator group committed significantly fewer driving errors and received significantly higher scores from the instructor. The simulator group's better performance is mostly attributable to the possibility of repeated training on different special cases. The findings also imply that several of the more common special cases on the ERTMS can hardly be provoked in real train driving. Furthermore, this work strengthens the theory that novices can hardly estimate their own ability. We therefore argue that this type of low-fidelity simulator is well suited for research purposes, for practicing special cases, and for train operating companies to assess drivers' skills.
Scandinavian Audiology, 1996
In the present study, the role of facial expressions in visual speechreading (lipreading) was examined. Speechreading was assessed by three different tests: sentence-based speechreading, word decoding, and word discrimination. Twenty-seven individuals participated as subjects in the study. The results revealed no general improvement as a function of expression across all tests. Nevertheless, skilled speechreaders could significantly improve their performance as a function of emotional expression in the word-decoding and word-discrimination conditions. Furthermore, a correlational analysis indicated a significant relationship between the subjects' confidence ratings for their responses to each test item and performance on speechreading tests where lexical analysis is a necessary task demand. The results are discussed with respect to how information from facial expressions is integrated with the information given by the lip movements in visual speechreading, and with respect to general models of face processing (i.e., Bruce & Young, 1986; Young & Bruce, 1991).
Journal of Speech, Language, and Hearing Research, Sep 18, 2017
We sought to examine the contribution of visual cues in audiovisual identification of consonants and vowels, in terms of isolation points (the shortest time required for correct identification of a speech stimulus), accuracy, and cognitive demands, in listeners with hearing impairment using hearing aids. Method: The study comprised 199 participants with hearing impairment (mean age = 61.1 years) with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Gated Swedish consonants and vowels were presented aurally and audiovisually to participants. Linear amplification was adjusted for each participant to assure audibility. The reading span test was used to measure participants' working memory capacity. Results: Audiovisual presentation resulted in shortened isolation points and improved accuracy for consonants and vowels relative to auditory-only presentation. This benefit was more evident for consonants than for vowels. In addition, correlations and subsequent analyses revealed that listeners with higher scores on the reading span test identified both consonants and vowels earlier in auditory-only presentation, but only vowels (not consonants) in audiovisual presentation. Conclusion: Consonants and vowels differed in terms of the benefits afforded by their associative visual cues, as indicated by the degree of audiovisual benefit and the reduction in cognitive demands linked to the identification of consonants and vowels presented audiovisually.
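Isolation points come from the gating paradigm: listeners receive successively longer onset fragments ("gates") of a stimulus until they can identify it. A minimal sketch of one common scoring rule, assuming equally long gates and taking the IP as the first gate at which the response is correct and remains correct thereafter; the study's exact rule and gate sizes are not given here.

```python
def isolation_point(correct_by_gate, gate_ms):
    """IP under a 'correct and stays correct' rule: duration of the first
    gate from which identification is right through the final gate;
    None if the item is never stably identified."""
    for i, ok in enumerate(correct_by_gate):
        if ok and all(correct_by_gate[i:]):
            return (i + 1) * gate_ms
    return None

# Invented trial: wrong, wrong, correct, lapse, then stable from gate 5.
print(isolation_point([False, False, True, False, True, True], gate_ms=40))
# -> 200 ms
```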
Journal of Transport & Health, Jun 1, 2017
Background: Drivers with Attention Deficit Hyperactivity Disorder (ADHD) have been considered to have a 3-4 times higher crash risk than control drivers without ADHD. A core issue that has not been properly dealt with since then is the role of co-morbid diagnoses that frequently appear together with ADHD, especially Oppositional Defiant Disorder (ODD) and Conduct Disorder (CD), sometimes generically referred to as "conduct problems". The increased crash risk associated with an ADHD diagnosis presented in the literature is often based on studies performed with participants with more than one diagnosis. This means that the co-morbidity may be high and, consequently, the effect of ADHD on traffic safety could be overestimated. This has been shown in a meta-analysis presenting a relative risk of 1.30 instead. The existing research on drivers with ADHD is methodologically unsatisfactory, specifically concerning inclusion and exclusion criteria for participants. This has led to a misunderstanding of the driving ability of people with ADHD, which has been cited and spread in the literature for two decades. People with an ADHD diagnosis might suffer from this misinterpretation, and the specific effects of ADHD on driving behavior remain unclear. There is potential for better control of confounding factors, of exposure (mileage), and of co-morbidity, especially CD and ODD. The aim of this project was to examine differences in driving behavior between drivers with ADHD and a control group of drivers without ADHD. Methods: In this study, conducted in a driving simulator at VTI, 40 drivers diagnosed with ADHD and 20 drivers without ADHD participated, both men and women. The route included urban road, rural road, and motorway. No secondary tasks were included, and the data collected were driving speed, attention/reaction time to other road users, and questionnaire responses. Results: Analyses are ongoing and will be presented at the conference. Conclusions: Recruiting participants and performing the study was successful. Further conclusions will be presented at the conference.
Ear and Hearing, Mar 1, 2019
We have previously shown that the gain provided by prior audiovisual (AV) speech exposure for subsequent auditory (A) sentence identification in noise is larger than that provided by prior A speech exposure. We have called this effect "perceptual doping." Specifically, prior AV speech processing dopes (recalibrates) the phonological and lexical maps in the mental lexicon, which facilitates subsequent phonological and lexical access in the A modality, separately from other learning and priming effects. In this article, we use data from the n200 study and aim to replicate and extend the perceptual doping effect using two different A and two different AV speech tasks and a larger sample than in our previous studies. The participants were 200 hearing aid users with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Four speech tasks in the n200 study were presented in both A and AV modalities (gated consonants, gated vowels, vowel duration discrimination, and sentence identification in noise). The modality order of speech presentation was counterbalanced across participants: half of the participants completed the A modality first and the AV modality second (A1-AV2), and the other half completed the AV modality first and then the A modality (AV1-A2). Based on the perceptual doping hypothesis, which assumes that the gain of prior AV exposure is larger than that of prior A exposure for subsequent processing of speech stimuli, we predicted that the mean A scores in the AV1-A2 modality order would be better than the mean A scores in the A1-AV2 modality order. We therefore expected a significant difference in the identification of A speech stimuli between the two modality orders (A1 versus A2). As prior A exposure provides a smaller gain than AV exposure, we also predicted that the difference in AV speech scores between the two modality orders (AV1 versus AV2) might not be statistically significant. In the gated consonant and vowel tasks and the vowel duration discrimination task, there were significant differences in A performance between the two modality orders: the participants' mean A performance was better in the AV1-A2 than in the A1-AV2 order (i.e., after AV processing). In terms of mean AV performance, no significant difference was observed between the two orders. In the sentence identification in noise task, a significant difference in the A identification of speech stimuli between the two orders was observed (A1 versus A2), and a significant difference in the AV identification of speech stimuli between the two orders was also observed (AV1 versus AV2). This latter finding was most likely due to a procedural learning effect arising from the greater complexity of the sentence materials, or a combination of procedural learning and perceptual learning due to the presentation of sentential materials in noisy conditions. The findings of the present study support the perceptual doping hypothesis, as prior AV relative to A speech exposure resulted in a larger gain for the subsequent processing of speech stimuli.
For complex speech stimuli presented in degraded listening conditions, a procedural learning effect (or a combination of procedural learning and perceptual learning effects) also facilitated the identification of speech stimuli, irrespective of whether the prior modality was A or AV.
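The central prediction reduces to a between-groups contrast: auditory scores from the AV1-A2 order should exceed auditory scores from the A1-AV2 order. A minimal sketch of that contrast as an independent-samples Welch t test, with invented scores; the authors' actual statistical model may well differ.

```python
from scipy import stats

# Invented auditory identification scores (% correct) by modality order.
a_after_av = [74, 71, 78, 69, 75, 72, 77, 70]  # AV1-A2: A done second
a_first    = [66, 63, 70, 61, 68, 64, 67, 62]  # A1-AV2: A done first

t, p = stats.ttest_ind(a_after_av, a_first, equal_var=False)  # Welch's t
print(f"t = {t:.2f}, p = {p:.4f}")  # doping predicts a_after_av > a_first
```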
Scandinavian Journal of Psychology, Oct 1, 2009
Discrimination of vowel duration was explored with regard to discrimination threshold, error bias, and effects of modality and consonant context. A total of 122 normal-hearing participants were presented with disyllabic-like items such as /lal-lal/ or /mam-mam/, in which the lengths of the vowels were systematically varied, and were asked to judge whether the first or the second vowel was longer. Presentation was either visual, auditory, or audiovisual. Vowel duration differences varied in 24 steps: 12 with a longer first /a/ and 12 with a longer last /a/ (range: ±33-400 ms). Results: 50% JNDs were smaller than the lowest tested step size (33 ms); 75% JNDs were in the 33-66 ms range for all conditions but visual /lal/, which had a 75% JND at 66-100 ms. Errors were greatest for visual presentation and for /lal-lal/ tokens. There was an error bias towards reporting the first vowel as longer, strongest for /mam-mam/ and when both vowels were short, possibly reflecting a sublinguistic processing strategy.
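A 75% JND of the kind reported here can be read off an empirical psychometric function by interpolating between tested step sizes. A minimal sketch under that linear-interpolation assumption, with invented proportions correct:

```python
import numpy as np

def jnd(step_ms, p_correct, criterion=0.75):
    """Smallest duration difference at which accuracy reaches the
    criterion, linearly interpolated between tested step sizes;
    None if performance never gets there."""
    step_ms = np.asarray(step_ms, dtype=float)
    p = np.asarray(p_correct, dtype=float)
    above = np.nonzero(p >= criterion)[0]
    if above.size == 0:
        return None
    i = above[0]
    if i == 0:
        return step_ms[0]  # criterion already met at the smallest step
    # Interpolate between the two bracketing step sizes.
    frac = (criterion - p[i - 1]) / (p[i] - p[i - 1])
    return step_ms[i - 1] + frac * (step_ms[i] - step_ms[i - 1])

# Invented group accuracies at the 33, 66, 100, and 133 ms steps.
print(jnd([33, 66, 100, 133], [0.62, 0.81, 0.90, 0.95]))  # ~55.6 ms
```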
Frontiers in Psychology, Mar 13, 2017
This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing-aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group × session interaction was observed. Conclusion: Audiovisual training may be considered in the aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or a group × session interaction calls for further research.
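Efficacy and maintenance here come down to two within-group contrasts: post-test vs. pre-test and one-month follow-up vs. pre-test. A minimal sketch of those contrasts as paired t tests over invented Hearing-in-Noise Test thresholds; the study's own analysis is not reproduced here.

```python
from scipy import stats

# Invented HINT speech-reception thresholds (dB SNR; lower is better)
# for the audiovisual-training group at the three sessions.
pre       = [-1.2, -0.8, -1.5, -0.4, -1.0, -0.9, -1.3, -1.4]
post      = [-2.0, -1.6, -2.1, -1.1, -1.8, -1.5, -2.2, -2.3]
follow_up = [-1.9, -1.5, -2.0, -1.0, -1.6, -1.4, -2.0, -2.1]

for label, scores in [("post vs. pre", post), ("follow-up vs. pre", follow_up)]:
    t, p = stats.ttest_rel(scores, pre)
    print(f"{label}: t = {t:.2f}, p = {p:.4f}")
```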
Trends in Hearing, 2016
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy for audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group performed worse than the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context.