Visual speech processing: Word-decoding and word-discrimination related to sentence-based speechreading and hearing-impairment
Related papers
Investigating Speechreading and Deafness
Journal of the American Academy of Audiology, 2010
Background: The visual speech signal can provide sufficient information to support successful communication. However, individual differences in the ability to appreciate that information are large, and relatively little is known about their sources. Purpose: Here a body of research is reviewed regarding the development of a theoretical framework in which to study speechreading and individual differences in that ability. Based on the hypothesis that visual speech is processed via the same perceptual-cognitive machinery as auditory speech, a theoretical framework was developed by adapting a theoretical framework originally developed for auditory spoken word recognition. Conclusion: The evidence to date is consistent with the conclusion that visual spoken word recognition is achieved via a process similar to auditory word recognition provided differences in perceptual similarity are taken into account. Words perceptually similar to many other words and that occur infrequently in the in...
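The neighborhood framework sketched in this abstract lends itself to a simple computation: words whose phoneme strings collapse onto the same sequence of visible mouth shapes form one lexical equivalence class, and class size approximates visual neighborhood density. Below is a minimal illustrative sketch, assuming a toy lexicon and a hypothetical phoneme-to-viseme mapping; it is not the authors' implementation.

```python
from collections import defaultdict

# Hypothetical many-to-one phoneme-to-viseme mapping (illustrative only;
# real viseme inventories are talker- and study-specific).
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
    "t": "coronal", "d": "coronal", "n": "coronal",
    "k": "dorsal", "g": "dorsal",
    "a": "open", "e": "mid", "i": "spread", "o": "round", "u": "round",
}

def viseme_string(phonemes):
    """Collapse a phoneme transcription into its visible equivalence class."""
    return tuple(PHONEME_TO_VISEME[p] for p in phonemes)

def equivalence_classes(lexicon):
    """Group words that are visually indistinguishable under the mapping."""
    classes = defaultdict(list)
    for word, phonemes in lexicon.items():
        classes[viseme_string(phonemes)].append(word)
    return classes

# Toy lexicon: word -> phoneme transcription.
lexicon = {
    "pat": ("p", "a", "t"),
    "bat": ("b", "a", "t"),
    "mat": ("m", "a", "t"),
    "fat": ("f", "a", "t"),
    "cat": ("k", "a", "t"),
}

for _, words in equivalence_classes(lexicon).items():
    # Words sharing a class are predicted to compete during visual recognition.
    print(len(words), words)
```

Running this collapses the /p b m/-initial words into a single class of three, the kind of dense visual neighborhood the framework predicts to be hard to recognize.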
Visual input is crucial for understanding speech under noisy conditions, but there are hardly any tools to assess the individual ability to lip-read. With this study, we wanted to (1) investigate how linguistic characteristics of the language, on the one hand, and hearing impairment, on the other, affect lip-reading ability, and (2) provide a tool to assess lip-reading ability in German speakers. 170 participants (22 prelingually deaf) completed the online assessment, which consisted of a subjective hearing-impairment scale and silent videos in which different item categories (numbers, words, and sentences) were spoken; the task for our participants was to recognize the spoken stimuli by visual inspection alone. We used different versions of one test and investigated the impact of item category, word frequency in the spoken language, articulation, sentence frequency in the spoken language, sentence length, and differences between speakers on the recognition score. We...
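To make the analysis concrete, here is a hedged sketch of how item-level predictors like those named above (item category, word frequency, length) could be related to recognition scores with an ordinary least-squares model; the data frame and column names are hypothetical, not the study's materials.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical item-level data: values and column names are illustrative.
df = pd.DataFrame({
    "score":    [0.9, 0.4, 0.6, 0.2, 0.7, 0.3, 0.8, 0.5],
    "category": ["number", "word", "word", "sentence",
                 "number", "sentence", "word", "sentence"],
    "log_freq": [3.1, 2.0, 2.5, 1.2, 3.0, 1.0, 2.8, 1.5],
    "length":   [1, 1, 1, 5, 1, 6, 1, 4],
})

# Recognition score modeled from item category, frequency, and length.
model = smf.ols("score ~ C(category) + log_freq + length", data=df).fit()
print(model.summary())
```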
Review of visual speech perception by hearing and hearing-impaired people: clinical implications
International Journal of Language & Communication Disorders, 2009
Background: Speech perception is often considered specific to the auditory modality, despite convincing evidence that speech processing is bimodal. The theoretical and clinical roles of speech-reading for speech perception, however, have received little attention in speech-language therapy. Aims: The role of speech-read information for speech perception is evaluated by considering evidence from hearing infants and adults, people with speech disorders, and those born profoundly hearing impaired.
Word discrimination and chronological age related to sentence-based speech-reading skill
British Journal of Audiology, 1991
Two aspects of visual speech processing in speechreading (word decoding and word discrimination) were tested in a group of 24 normal-hearing and a group of 20 hearing-impaired subjects. Word decoding and word discrimination performance were independent of factors related to the impairment, in both a quantitative and a qualitative sense. Decoding skill, but not discrimination skill, was associated with sentence-based speechreading. The results were interpreted to mean that, in order to represent a critical component process in sentence-based speechreading, the visual speech perception task must entail lexically induced processing as a task demand. The theoretical status of the word decoding task as one operationalization of a speech decoding module was discussed (Fodor, 1983). An error analysis of performance in the word decoding/discrimination tasks suggested that the perception of heard stimuli, as well as the perception of lipped stimuli, was critically dependent on the same feature: the temporally initial phonetic segment of the word (cf. Marslen-Wilson, 1987). Implications for a theory of visual speech perception were discussed.
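The central result, that decoding but not discrimination predicts sentence-based speechreading, is a pattern of correlations. A minimal sketch, with simulated per-subject scores standing in for the study's data:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-subject scores (illustrative values only).
rng = np.random.default_rng(0)
n = 24
decoding = rng.normal(50, 10, n)
sentence = 0.6 * decoding + rng.normal(0, 8, n)   # built to correlate
discrimination = rng.normal(50, 10, n)            # built to be unrelated

for name, task in [("decoding", decoding), ("discrimination", discrimination)]:
    r, p = pearsonr(task, sentence)
    # The reported pattern: decoding correlates with sentence-based
    # speechreading, discrimination does not.
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```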
Visual Phonemic Ambiguity and Speechreading
Journal of Speech, Language, and Hearing Research, 2006
Purpose To study the role of visual perception of phonemes in visual perception of sentences and words among normal-hearing individuals. Method Twenty-four normal-hearing adults identified consonants, words, and sentences, spoken by either a human or a synthetic talker. The synthetic talker was programmed with identical parameters within phoneme groups, hypothetically resulting in simplified articulation. Proportions of correctly identified phonemes per participant, condition, and task, as well as sensitivity to single consonants and clusters of consonants, were measured. Groups of mutually exclusive consonants were used for sensitivity analyses and hierarchical cluster analyses. Results Consonant identification performance did not differ as a function of talker, nor did average sensitivity to single consonants. The bilabial and labiodental clusters were most readily identified and cohesive for both talkers. Word and sentence identification was better for the human talker than the s...
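The sensitivity and hierarchical cluster analyses mentioned here can be illustrated on a consonant confusion matrix. The sketch below, using an invented symmetric confusion matrix, clusters consonants by visual confusability; under such data the bilabial /p b m/ and labiodental /f v/ groups come out as the tight, cohesive clusters the study reports.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

consonants = ["p", "b", "m", "f", "v", "t", "d"]

# Invented symmetric confusion proportions: entry (i, j) is how often
# consonant i is visually confused with consonant j.
confusion = np.array([
    [1.0, 0.8, 0.7, 0.1, 0.1, 0.0, 0.0],  # p
    [0.8, 1.0, 0.7, 0.1, 0.1, 0.0, 0.0],  # b
    [0.7, 0.7, 1.0, 0.1, 0.1, 0.0, 0.0],  # m
    [0.1, 0.1, 0.1, 1.0, 0.8, 0.1, 0.1],  # f
    [0.1, 0.1, 0.1, 0.8, 1.0, 0.1, 0.1],  # v
    [0.0, 0.0, 0.0, 0.1, 0.1, 1.0, 0.8],  # t
    [0.0, 0.0, 0.0, 0.1, 0.1, 0.8, 1.0],  # d
])

# Turn similarity into distance, then cluster with average linkage.
distance = 1.0 - confusion
np.fill_diagonal(distance, 0.0)
tree = linkage(squareform(distance), method="average")

# Cut the tree: consonants closer than 0.5 land in the same cluster.
cuts = fcluster(tree, t=0.5, criterion="distance")
for consonant, cluster in zip(consonants, cuts):
    print(consonant, cluster)
```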
Impaired Speech Perception in Poor Readers: Evidence from Hearing and Speech Reading
Brain and Language, 1998
The performance of 14 poor readers on an audiovisual speech perception task was compared with 14 normal subjects matched on chronological age (CA) and 14 subjects matched on reading age (RA). The task consisted of identifying synthetic speech varying in place of articulation along an acoustic 9-point continuum between /ba/ and /da/. The acoustic speech events were factorially combined with the visual articulation of /ba/, /da/, or none. In addition, the visual-only articulation of /ba/ or /da/ was presented. The results showed that (1) poor readers were less categorical than CA and RA in the identification of the auditory speech events and (2) they were worse at speech reading. This convergence between the deficits clearly suggests that the auditory speech processing difficulty of poor readers is speech specific and relates to the processing of phonological information.
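How categorical the labeling along the /ba/-/da/ continuum is can be indexed by the slope of a fitted logistic identification function; a shallower slope means less categorical perception, the pattern reported for poor readers. A sketch with made-up identification proportions:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of /da/ responses along the /ba/-/da/ continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 10)  # the 9-point acoustic continuum

# Hypothetical identification proportions (illustrative values only).
controls = np.array([0.02, 0.03, 0.05, 0.15, 0.50, 0.85, 0.95, 0.97, 0.98])
poor_readers = np.array([0.10, 0.15, 0.25, 0.38, 0.50, 0.62, 0.75, 0.85, 0.90])

for name, props in [("controls", controls), ("poor readers", poor_readers)]:
    (x0, k), _ = curve_fit(logistic, steps, props, p0=[5.0, 1.0])
    # A shallower slope k means a less categorical labeling function.
    print(f"{name}: boundary = {x0:.2f}, slope = {k:.2f}")
```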
Phonological Activation During Visual Word Recognition in Deaf and Hearing Children
Journal of Speech, Language, and Hearing Research, 2010
Purpose: Phonological activation during visual word recognition was studied in deaf and hearing children under two circumstances: (a) when the use of phonology was not required for task performance and might even hinder it, and (b) when the use of phonology was critical for task performance. Method: Deaf children mastering written Dutch and Sign Language of the Netherlands were compared with hearing children. Two word-picture verification experiments were conducted, both of which included pseudohomophones. In Experiment 1, the task was to indicate whether the word was spelled correctly and whether it corresponded to the picture. The presence of pseudohomophones was expected to hinder performance only when phonological recoding occurred. In Experiment 2, the task was to indicate whether the word sounded like the picture, which made phonological recoding essential for accepting the pseudohomophones. Results: The hearing children showed automatic activation of phonology during visual word recognition, regardless of whether they were instructed to focus on orthographic information (Experiment 1) or phonological information (Experiment 2). The deaf children showed little automatic phonological activation in either experiment. Conclusion: Deaf children do not use phonological information during word reading.
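The pseudohomophone logic reduces to a simple contrast: if phonology is activated automatically, pseudohomophones should depress performance in the orthographic task. A sketch with invented accuracy data:

```python
import pandas as pd

# Hypothetical trial-level accuracies (illustrative values only).
trials = pd.DataFrame({
    "group":    ["hearing"] * 4 + ["deaf"] * 4,
    "item":     ["control", "pseudohomophone"] * 4,
    "accuracy": [0.95, 0.80, 0.94, 0.78,   # hearing: pseudohomophones hurt
                 0.93, 0.92, 0.94, 0.91],  # deaf: little effect
})

# Interference = accuracy drop on pseudohomophone items relative to controls;
# a sizeable drop indicates automatic phonological recoding.
means = trials.groupby(["group", "item"])["accuracy"].mean().unstack()
means["interference"] = means["control"] - means["pseudohomophone"]
print(means)
```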
Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode
PLoS ONE, 2013
Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also plays a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with little or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers, who responded faster to real words than to consonant strings, showing overreliance on whole-word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. Citation: Barca L, Pezzulo G, Castrataro M, Rinaldi P, Caselli MC (2013) Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode. PLoS ONE 8(3): e59080.
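The lexicality effect reported here is simply the reaction-time difference between consonant strings and real words, computed per group. A sketch with hypothetical mean RTs:

```python
import pandas as pd

# Hypothetical mean reaction times in ms (illustrative values only).
rt = pd.DataFrame({
    "group": ["hearing", "hearing", "deaf_oral", "deaf_oral",
              "deaf_signer", "deaf_signer"],
    "stimulus": ["word", "consonant_string"] * 3,
    "rt_ms": [620, 625, 640, 645, 600, 680],
})

# Lexicality effect = RT(consonant strings) - RT(words); per the study,
# only deaf signers show a reliable advantage for real words.
wide = rt.pivot(index="group", columns="stimulus", values="rt_ms")
wide["lexicality_effect"] = wide["consonant_string"] - wide["word"]
print(wide)
```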