Lisa Mendel - Academia.edu
Papers by Lisa Mendel
Purpose: Although the speech intelligibility index (SII) has been widely applied in the field of audiology and other related areas, application of this metric to cochlear implants (CIs) has yet to be investigated. In this study, SIIs for CI users were calculated to investigate whether the SII could be an effective tool for predicting speech perception performance in a population with CIs. Method: Fifteen pre- and postlingually deafened adults with CIs participated. Speech recognition scores were measured using the AzBio sentence lists. CI users also completed questionnaires and performed psychoacoustic (spectral and temporal resolution) and cognitive function (digit span) tests. Obtained SIIs were compared with predicted SIIs using a transfer function curve. Correlation and regression analyses were conducted on perceptual and demographic predictor variables to investigate the association between these factors and speech perception performance. Result: Because of the considerably poor hearing and large individual variability in performance, the SII did not predict speech performance for this CI group using the traditional calculation. However, new SII models were developed incorporating predictive factors, which improved the accuracy of SII predictions in listeners with CIs. Conclusion: Conventional SII models are not appropriate for predicting speech perception scores for CI users. Demographic variables (aided audibility and duration of deafness) and perceptual–cognitive skills (gap detection and auditory digit span outcomes) are needed to improve the use of the SII for listeners with CIs. Future studies are needed to improve our CI-corrected SII model by considering additional predictive factors. Supplemental Material S1: Auditory thresholds and speech perception scores obtained from 15 cochlear implant (CI) participants. Lee, S., Mendel, L. L., & Bidelman, G. M. (2019). Predicting speech recognition using the speech intelligibility index and other variables for cochlear imp [...]
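The SII-plus-predictors approach described above can be sketched in a few lines. This is a minimal illustration, not the authors' model: the band-importance weights, the (1 − 10^(−SII/Q))^N transfer-function constants, and the predictor columns (aided audibility, duration of deafness, gap detection, digit span) are all placeholders.

```python
import numpy as np

def sii(band_audibility, band_importance):
    """ANSI-style SII: importance-weighted audibility summed across frequency bands."""
    return float(np.sum(np.asarray(band_importance) *
                        np.clip(band_audibility, 0.0, 1.0)))

def transfer_function(sii_value, Q=0.5, N=1.0):
    """Map SII to predicted percent correct; Q and N are placeholder fitting constants."""
    return 100.0 * (1.0 - 10.0 ** (-sii_value / Q)) ** N

def fit_corrected_model(sii_values, predictors, observed_scores):
    """CI-corrected model: regress observed scores on SII plus extra predictor columns
    (e.g., aided audibility, duration of deafness, gap detection, digit span)."""
    X = np.column_stack([np.ones(len(sii_values)), sii_values, predictors])
    coeffs, *_ = np.linalg.lstsq(X, observed_scores, rcond=None)
    return coeffs

def predict_corrected(coeffs, sii_values, predictors):
    X = np.column_stack([np.ones(len(sii_values)), sii_values, predictors])
    return X @ coeffs
```

The regression step is one straightforward way to fold demographic and perceptual-cognitive variables into an SII-based prediction; the published model may use a different functional form.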
Cochlear implants (CIs) are an effective intervention for individuals with severe-to-profound sensorineural hearing loss. Currently, no tuning procedure exists that can fully exploit the technology. We propose online unsupervised algorithms to learn features from the speech of a severely-to-profoundly hearing-impaired patient round-the-clock and compare the features to those learned from the normal hearing population using a set of neurophysiological metrics. Experimental results are presented. The information from this comparison can be exploited to modify the signal processing in a patient's CI to enhance the audibility of speech.
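One possible instance of such online, unsupervised feature learning is a streaming k-means over short speech frames. The sketch below assumes 20 ms non-overlapping frames and per-cluster learning rates; the frame length, clustering rule, and names are illustrative, not the authors' algorithm.

```python
import numpy as np

def frame_signal(x, sr, frame_ms=20):
    """Slice a mono signal into non-overlapping frames (20 ms by default)."""
    n = int(sr * frame_ms / 1000)
    n_frames = len(x) // n
    return x[:n_frames * n].reshape(n_frames, n)

class OnlineKMeans:
    """Streaming k-means; the centroids play the role of learned 'acoustic kernels'."""
    def __init__(self, k, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.centroids = rng.standard_normal((k, dim)) * 0.01
        self.counts = np.zeros(k)

    def update(self, frame):
        # Assign the frame to its nearest centroid and nudge that centroid toward it.
        d = np.linalg.norm(self.centroids - frame, axis=1)
        j = int(np.argmin(d))
        self.counts[j] += 1
        self.centroids[j] += (frame - self.centroids[j]) / self.counts[j]
        return j
```

Frames from a patient's running speech would be streamed through update, and the resulting centroids compared against those learned from normal-hearing talkers using the neurophysiological metrics mentioned above.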
American Journal of Audiology
Purpose: The purpose of this study was to construct and validate a recorded word recognition test for monolingual Spanish-speaking children utilizing a picture board and a picture-pointing task. Design: The Spanish Pediatric Picture Identification Test was developed and validated in this study. Test construction steps included (a) producing new digital recordings of word lists created by Comstock and Martin (1984) using a bilingual Spanish–English female, (b) obtaining list equivalency, (c) creating digitally illustrated pictures representing the word lists, (d) validating the pictures using monolingual Spanish-speaking and bilingual Spanish–English children, and (e) re-establishing list equivalency and obtaining performance–intensity functions using a picture-pointing task with monolingual Spanish-speaking children and bilingual Spanish–English adults. Results: Normative data for three Spanish word recognition lists were established. Performance–intensity functions at sensation levels...
The Journal of the Acoustical Society of America
This study investigated whether speech intelligibility in cochlear implant (CI) users is affected by semantic context. Three groups participated in two experiments: two groups of listeners with normal hearing (NH) listened to either full spectrum speech or vocoded speech, and one CI group listened to full spectrum speech. Experiment 1 measured participants' sentence recognition as a function of target-to-masker ratio (four-talker babble masker), and Experiment 2 measured perception of interrupted speech as a function of duty cycle (long/short uninterrupted speech segments). Listeners were presented with both semantically congruent and incongruent targets. Results from the two experiments suggested that NH listeners benefitted more from the semantic cues as the listening conditions became more challenging (lower signal-to-noise ratios and interrupted speech with longer silent intervals). However, the CI group received minimal benefit from context and therefore performed poorly in such conditions. In contrast, in the conditions that were less challenging, CI users benefitted greatly from the semantic context, and NH listeners did not rely on such cues. The results also confirmed that this differential use of semantic cues appears to originate from the spectro-temporal degradations experienced by CI users, which could be a contributing factor in their poor performance in suboptimal environments.
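Setting a target-to-masker ratio amounts to scaling the babble relative to the target before mixing. A small sketch, assuming RMS-based level matching (the study's actual calibration procedure is not described here):

```python
import numpy as np

def mix_at_tmr(target, babble, tmr_db):
    """Add four-talker babble to a target sentence at a given target-to-masker ratio (dB).
    Assumes both signals share the sample rate and the babble is at least as long."""
    rms = lambda s: np.sqrt(np.mean(np.square(s)))
    babble = babble[:len(target)]
    gain = rms(target) / (rms(babble) * 10 ** (tmr_db / 20))
    return target + gain * babble
```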
Journal of Speech, Language, and Hearing Research
Purpose: Although the speech intelligibility index (SII) has been widely applied in the field of audiology and other related areas, application of this metric to cochlear implants (CIs) has yet to be investigated. In this study, SIIs for CI users were calculated to investigate whether the SII could be an effective tool for predicting speech perception performance in a population with CIs. Method: Fifteen pre- and postlingually deafened adults with CIs participated. Speech recognition scores were measured using the AzBio sentence lists. CI users also completed questionnaires and performed psychoacoustic (spectral and temporal resolution) and cognitive function (digit span) tests. Obtained SIIs were compared with predicted SIIs using a transfer function curve. Correlation and regression analyses were conducted on perceptual and demographic predictor variables to investigate the association between these factors and speech perception performance. Result: Because of the considerably poor hea...
Journal of Speech, Language, and Hearing Research (JSLHR), Jan 22, 2018
The main goal of this study was to investigate the minimum amount of sensory information required to recognize spoken words (isolation points [IPs]) in listeners with cochlear implants (CIs) and to investigate facilitative effects of semantic context on the IPs. Listeners with CIs as well as those with normal hearing (NH) participated in the study. In Experiment 1, the CI users listened to unprocessed (full-spectrum) stimuli and individuals with NH listened to full-spectrum or vocoder-processed speech. IPs were determined for both groups, who listened to gated consonant-nucleus-consonant words that were selected based on lexical properties. In Experiment 2, the role of semantic context on IPs was evaluated. Target stimuli were chosen from the Revised Speech Perception in Noise corpus based on the lexical properties of the final words. The results indicated that spectrotemporal degradations impacted IPs for gated words adversely, and CI users as well as participants with NH listening to...
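In a gating task, the listener hears progressively longer onsets of each word until it is identified; the isolation point is the shortest gate from which identification is correct and remains correct. A rough sketch, with illustrative gate sizes (the study's actual gate durations are not given here):

```python
import numpy as np

def gated_versions(word, sr, first_gate_ms=100, step_ms=50):
    """Return progressively longer onset portions of a recorded word."""
    gates, n = [], int(sr * first_gate_ms / 1000)
    step = int(sr * step_ms / 1000)
    while n < len(word):
        gates.append(word[:n])
        n += step
    gates.append(word)                      # final gate = whole word
    return gates

def isolation_point(correct_by_gate, gate_durations_ms):
    """IP: first gate after which every response is correct; None if never reached."""
    for i in range(len(correct_by_gate)):
        if all(correct_by_gate[i:]):
            return gate_durations_ms[i]
    return None
```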
The Journal of the Acoustical Society of America, 2017
Although the AzBio test is well validated, has effective standardization data available, and is highly recommended for cochlear implant (CI) evaluation, no attempt has been made to derive a frequency importance function (FIF) for its stimuli. This study derived FIFs for the AzBio sentence lists using listeners with normal hearing. Traditional procedures described by Studebaker and Sherbecoe [(1991). J. Speech Lang. Hear. Res. 34, 427-438] were applied for this purpose. Participants with normal hearing listened to a large number of AzBio sentences that were high- and low-pass filtered under speech-spectrum shaped noise at various signal-to-noise ratios. Frequency weights for the AzBio sentences were greatest in the 1.5 to 2 kHz frequency regions, as is the case with other speech materials. A cross-procedure comparison was conducted between the traditional procedure [Studebaker and Sherbecoe (1991). J. Speech Lang. Hear. Res. 34, 427-438] and the nonlinear optimization pro...
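The core idea behind a frequency importance function is that the intelligibility lost when a band is filtered out reflects that band's weight. The sketch below inverts an assumed (1 − 10^(−AI/Q))^N transfer function for a series of low-pass cutoffs only; the actual Studebaker and Sherbecoe (1991) procedure fits both low- and high-pass data with crossover analysis, so this is a deliberate simplification.

```python
import numpy as np

def band_importance_from_lowpass(cutoffs_hz, lowpass_scores_pct, Q=0.5, N=1.0):
    """Crude FIF estimate: convert low-pass intelligibility scores back to AI units
    via an assumed transfer function, then difference adjacent cutoffs."""
    p = np.clip(np.asarray(lowpass_scores_pct, float) / 100.0, 1e-6, 1 - 1e-6)
    ai = -Q * np.log10(1.0 - p ** (1.0 / N))    # invert P = (1 - 10^(-AI/Q))^N
    weights = np.clip(np.diff(ai, prepend=0.0), 0.0, None)
    return weights / weights.sum()              # normalized importance per band
```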
American Journal of Audiology, Jan 12, 2018
Compared to photon-based radiotherapy, protons deliver less radiation to healthy tissue resulting in the potential reduction of late complications such as sensorineural hearing loss (SNHL). We report early auditory outcomes in children treated with proton radiotherapy (PRT) for craniopharyngioma. Conventional frequency (CF = 0.25-8.0 kHz) audiometry, extended high-frequency (EHF = 9.0-16.0 kHz) audiometry, distortion product otoacoustic emission (DPOAE) testing, and speech-in-noise (SIN) assessments were prospectively and longitudinally conducted on 74 children with a median of 2 post-PRT evaluations (range, 1-5) per patient. The median age at PRT initiation was 10 years, and median follow-up time was 2 years. Ototoxicity was classified using the Chang Ototoxicity Grading Scale (Chang & Chinosornvatana, 2010) and the American Speech-Language-Hearing Association (ASHA) criteria (ASHA, 1994). Comparisons were made between baseline and most recent DPOAE levels, with evidence of ototoxi...
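As a rough illustration of threshold-shift grading, the function below flags a change between baseline and follow-up audiograms using criteria in the spirit of the ASHA (1994) guidelines; the thresholds are paraphrased and simplified (the loss-of-response rule is omitted), so it should not be read as the study's scoring code.

```python
def flag_threshold_shift(baseline, followup):
    """baseline/followup: dict mapping frequency (kHz) -> threshold (dB HL).
    Flags >=20 dB worsening at one frequency, or >=10 dB at two adjacent frequencies
    (a simplified paraphrase of ASHA-style ototoxicity criteria)."""
    freqs = sorted(set(baseline) & set(followup))
    shifts = [followup[f] - baseline[f] for f in freqs]
    if any(s >= 20 for s in shifts):
        return True
    return any(shifts[i] >= 10 and shifts[i + 1] >= 10
               for i in range(len(shifts) - 1))
```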
The Journal of the Acoustical Society of America
A corpus of recordings of deaf speech is introduced. Adults who were pre- or post-lingually deafened as well as those with normal hearing read standardized speech passages totaling 11 h of .wav recordings. Preliminary acoustic analyses are included to provide a glimpse of the kinds of analyses that can be conducted with this corpus of recordings. Long-term average speech spectra as well as spectral moment analyses provide considerable insight into differences observed in the speech of talkers judged to have low, medium, or high speech intelligibility.
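Both measures mentioned above are simple to compute from a recording. The sketch below assumes non-overlapping Hann-windowed frames; the window length is an arbitrary choice, not the analysis setting used for the corpus.

```python
import numpy as np

def long_term_spectrum(x, sr, nfft=1024):
    """Long-term average power spectrum (linear); dB form is 10*log10(power)."""
    n_frames = len(x) // nfft
    frames = x[:n_frames * nfft].reshape(n_frames, nfft) * np.hanning(nfft)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    return np.fft.rfftfreq(nfft, 1.0 / sr), power.mean(axis=0)

def spectral_moments(freqs, power):
    """First four spectral moments: centroid, standard deviation, skewness, kurtosis."""
    p = power / power.sum()
    m1 = np.sum(freqs * p)                      # centroid
    m2 = np.sum((freqs - m1) ** 2 * p)          # variance
    sd = np.sqrt(m2)
    skew = np.sum((freqs - m1) ** 3 * p) / sd ** 3
    kurt = np.sum((freqs - m1) ** 4 * p) / m2 ** 2
    return m1, sd, skew, kurt
```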
The Journal of the Acoustical Society of America
Understanding speech within an auditory scene is constantly challenged by interfering noise in suboptimal listening environments when noise hinders the continuity of the speech stream. In such instances, a typical auditory-cognitive system perceptually integrates available speech information and "fills in" missing information in the light of semantic context. However, individuals with cochlear implants (CIs) find it difficult and effortful to understand interrupted speech compared to their normal hearing counterparts. This inefficiency in perceptual integration of speech could be attributed to further degradations in the spectral-temporal domain imposed by CIs, making it difficult to utilize contextual evidence effectively. To address these issues, 20 normal hearing adults listened to speech that was spectrally reduced, and speech that was both spectrally reduced and interrupted, in a manner similar to CI processing. The Revised Speech Perception in Noise test, which includes contextually rich and contextually poor sentences, was used to evaluate the influence of semantic context on speech perception. Results indicated that listeners benefited more from semantic context when they listened to spectrally reduced speech alone. For the spectrally reduced interrupted speech, contextual information was not as helpful under significant spectral reductions but became beneficial as the spectral resolution improved. These results suggest that top-down processing facilitates speech perception up to a point, and it fails to facilitate speech understanding when the speech signals are significantly degraded.
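The two manipulations described (CI-like spectral reduction and periodic interruption) are commonly simulated with a noise-excited channel vocoder and an on/off gate. The sketch below is a generic simulation with assumed filter orders, channel edges, and interruption rate, not the exact processing used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, sr, n_channels=8, lo=100.0, hi=7000.0):
    """Band-split the signal, extract Hilbert envelopes, and re-impose them on
    band-limited noise carriers (a standard CI simulation)."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=sr, output="sos")
        band = sosfilt(sos, x)
        env = np.abs(hilbert(band))
        out += env * sosfilt(sos, rng.standard_normal(len(x)))
    return out / (np.max(np.abs(out)) + 1e-12)

def interrupt(x, sr, rate_hz=2.0, duty=0.5):
    """Periodically gate the signal on and off at rate_hz with the given duty cycle."""
    t = np.arange(len(x)) / sr
    return x * (((t * rate_hz) % 1.0) < duty)
```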
Clinical Archives of Communication Disorders
Maximizing speech perception for cochlear implant (CI) users can be achieved by adjusting mapping parameters. The objective of this study was to investigate optimal sets of parameters of stimulation rate and the number of maxima in the CI system. Methods: Listeners' consonant and vowel perception was measured for different combinations of the number of maxima and stimulation rate using cochlear implant simulated stimuli. Twelve sets of speech stimuli were systematically created by changing the number of maxima and stimulation rate and were presented to 18 listeners with normal hearing. Results: The group mean percent correct scores indicated that only two pairs of parameter combinations showed significantly different results. A rate of 1,800 pps and 6 maxima resulted in significantly better consonant performance compared to a rate of 500 pps and 20 maxima. In addition, the 900 pps/8 maxima condition was significantly better compared to 500 pps/20 maxima for the vowel test. Analysis of listeners' confusion patterns revealed they were more likely to make perception errors for the consonants…
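In n-of-m ("maxima") coding, each analysis frame keeps only the channels with the largest envelopes, and the stimulation rate sets how many such frames are delivered per second. A minimal sketch of the channel-selection step, with hypothetical array names:

```python
import numpy as np

def select_maxima(channel_envelopes, n_maxima):
    """Per frame, keep the n_maxima largest channel envelopes and zero the rest.
    channel_envelopes: array of shape (n_frames, n_channels)."""
    env = np.asarray(channel_envelopes, float)
    out = np.zeros_like(env)
    idx = np.argsort(env, axis=1)[:, -n_maxima:]   # n largest channels per frame
    rows = np.arange(env.shape[0])[:, None]
    out[rows, idx] = env[rows, idx]
    return out
```

Raising the number of maxima passes more spectral detail per frame, while raising the rate passes more temporal detail, which is one reason combinations of the two parameters are compared in studies like this one.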
2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), 2016
Does a hearing-impaired individual's speech reflect his hearing loss, and if it does, can the nature of hearing loss be inferred from his speech? To investigate these questions, at least four hours of speech data were recorded from each of 37 adult individuals, both male and female, belonging to four classes: 7 normal, and 30 severely-to-profoundly hearing impaired with high, medium or low speech intelligibility. Acoustic kernels were learned for each individual by capturing the distribution of his speech data points represented as 20 ms duration windows. These kernels were evaluated using a set of neurophysiological metrics, namely, distribution of characteristic frequencies, equal loudness contour, bandwidth and Q10 value of tuning curve. Our experimental results reveal that a hearing-impaired individual's speech does reflect his hearing loss provided his loss of hearing has considerably affected the intelligibility of his speech. For such individuals, the lack of tuning in any frequency range can be inferred from his learned speech kernels.
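The tuning-curve metrics mentioned above can be read off a learned kernel's magnitude spectrum. A simplified sketch (the kernel evaluation in the paper may define these quantities differently):

```python
import numpy as np

def cf_and_q10(kernel, sr):
    """Characteristic frequency = spectral peak of the kernel; Q10 = CF divided by
    the bandwidth measured 10 dB below that peak."""
    spec = np.abs(np.fft.rfft(kernel))
    freqs = np.fft.rfftfreq(len(kernel), 1.0 / sr)
    peak = int(np.argmax(spec))
    cf = freqs[peak]
    thresh = spec[peak] / 10 ** (10 / 20)           # 10 dB down in amplitude
    above = np.where(spec >= thresh)[0]
    bw = freqs[above[-1]] - freqs[above[0]]
    return cf, (cf / bw if bw > 0 else np.inf)
```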
Journal of the American Academy of Audiology, Nov 1, 1991
This study evaluated the effects of stimulus presentation level on 12 adult 3M/House single-channel cochlear implant users' speech perception performance. Dynamic ranges and loudness growth functions were measured for meaningful speech, and performance-intensity functions were plotted for VCV and CVC nonsense stimuli to determine the presentation level(s) that produced maximum speech perception performance for each subject. Considerable variability was found in the subjects' dynamic ranges. Generally, loudness growth functions were steep for subjects having restricted dynamic ranges and more gradual for those having wide dynamic ranges. No single optimal presentation level was determined; instead, a range of levels produced maximum performance for each subject. Mean levels producing peak scores in equivalent dB SPL were 80 for VCVs and 72 for CVCs. Presentation levels producing optimal performance varied with type of speech stimulus.
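A performance-intensity function of the kind plotted in this study can be summarized by fitting a sigmoid to percent-correct scores across presentation levels and reading off where performance saturates. A sketch, assuming a logistic shape and a 95%-of-ceiling criterion (both arbitrary choices, not the authors' analysis):

```python
import numpy as np
from scipy.optimize import curve_fit

def pi_function(level_db, midpoint, slope, ceiling):
    """Logistic performance-intensity function: percent correct vs presentation level."""
    return ceiling / (1.0 + np.exp(-slope * (level_db - midpoint)))

def fit_pi(levels_db, percent_correct):
    """Fit the PI function and return the lowest level reaching ~95% of its ceiling."""
    p0 = [float(np.median(levels_db)), 0.2, float(max(percent_correct))]
    params, _ = curve_fit(pi_function, levels_db, percent_correct, p0=p0, maxfev=10000)
    fine = np.linspace(min(levels_db), max(levels_db), 500)
    pred = pi_function(fine, *params)
    level_at_max = fine[np.argmax(pred >= 0.95 * params[2])]
    return params, level_at_max
```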
American Journal of Audiology
Purpose: The purpose of this study was to construct a recorded speech recognition threshold (SRT) test for Spanish-speaking children utilizing a picture board and a picture-pointing task. Design: The Spanish Pediatric Speech Recognition Threshold (SPSRT) test was developed and validated in this study. Test construction steps included (a) stimulus selection, (b) assessment of familiarity, (c) digital recording, (d) creation of pictures that accurately depicted the target word from the stimulus set, and (e) validation of the test and recordings. SRTs were obtained from 24 Spanish-speaking children whose 1st language was Spanish. Results: Normative data are presented that validate the SPSRT and establish the baseline relationship between the pure-tone average and the SRT obtained with the SPSRT. Results indicated that the SPSRT obtained using this test should be within 2–12 dB of an individual's pure-tone average for Spanish-speaking children with normal hearing and minimal hearing lo...
Journal of the American Academy of Audiology, 2017
It is generally well known that speech perception is often improved with integrated audiovisual input whether in quiet or in noise. In many health-care environments, however, conventional surgical masks block visual access to the mouth and obscure other potential facial cues. In addition, these environments can be noisy. Although these masks may not alter the acoustic properties, the presence of noise in addition to the lack of visual input can have a deleterious effect on speech understanding. A transparent ("see-through") surgical mask may help to overcome this issue. To compare the effect of noise and various visual input conditions on speech understanding for listeners with normal hearing (NH) and hearing impairment using different surgical masks. Participants were assigned to one of three groups based on hearing sensitivity in this quasi-experimental, cross-sectional study. A total of 31 adults participated in this study: one talker, ten listeners with NH, ten listene...
Seminars in Hearing, 1996
Seminars in Hearing, 2006
… of a continuum model (Anderson, J., The Supervisory Process in Speech-Language Pathology and Audiology) … more collaborative; and by the end of the process, the supervisor's role is consultative. … Specific types of effective and constructive feedback are described along with how often …