Speech training aids for hearing-impaired individuals: I. Overview and aims
Related papers
Computer‐based speech training aids
The Journal of the Acoustical Society of America, 1988
Sensory aids can be subdivided in two important ways: by modality (auditory, visual, tactile, or direct electrical stimulation) and by degree of signal processing (nonspeech, speech-specific, feature-extraction, and speech-recognition). Nonspeech processing aids are designed to make maximum use of the impaired sensory system regardless of whether communication is by speech or other means. Speech-specific processing is designed to match the average spectral and temporal characteristics of the speech signal to the characteristics of the impaired auditory system. Feature-extraction systems involve the automatic extraction of phonetic or articulatory features from the speech signal. Speech-recognition processing makes use of automatic speech recognition techniques to facilitate the communication process. The application of each of these different forms of signal processing to sensory aids of various kinds will be described. [Work supported by NINCDS.]
Computerized system to aid deaf children in speech learning
2001 Conference Proceedings of the 23rd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2001
This paper describes a voice analyzer intended to assist deaf children in the process of learning to speak. The user's speech signal is processed in real time so that the result of each speech-training attempt yields instantaneous feedback. The aim of the analyzer is not to distinguish among spoken words, which is the main objective of a speech recognizer, but to compute a measure of how correctly a specific word was uttered. The voice-signal analysis was developed on a digital signal processor (DSP) and comprises spectral analysis, extraction of the voice's formants, adaptation of the formants to standard levels in the time and frequency domains, and statistical matching of the acquired speech signal against the standard one obtained from training. After the correctness coefficient is computed, the system presents visual feedback to the user in the form of a graphical animation keyed to the matching ratio, which determines progression through the speech training. The conference was held in Istanbul, Turkey.
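The processing chain this abstract describes (spectral analysis, formant extraction, and a statistical match against a trained reference) can be sketched in outline. The sketch below uses LPC-style formant estimation and a simple distance-based score; the LPC order, the tolerance, and the scoring formula are illustrative assumptions, not the paper's actual DSP implementation.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """Estimate linear-prediction coefficients via the autocorrelation method."""
    n = len(frame)
    autocorr = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    # Toeplitz normal equations R a = r for the predictor a
    R = np.array([[autocorr[abs(i - j)] for j in range(order)] for i in range(order)])
    r = autocorr[1:order + 1]
    a = np.linalg.solve(R, r)
    return np.concatenate(([1.0], -a))  # prediction-error polynomial A(z)

def formants(frame, sample_rate, order=8, count=3):
    """Take the lowest `count` resonance frequencies from the LPC polynomial roots."""
    coeffs = lpc_coefficients(frame * np.hamming(len(frame)), order)
    roots = [z for z in np.roots(coeffs) if z.imag > 0]  # one root per conjugate pair
    freqs = sorted(np.angle(roots) * sample_rate / (2 * np.pi))
    return freqs[:count]

def correctness(candidate, reference, tolerance_hz=300.0):
    """Map the mean formant deviation onto a 0..1 correctness coefficient
    (an assumed scoring rule, standing in for the paper's statistical match)."""
    deviation = np.mean([abs(c - r) for c, r in zip(candidate, reference)])
    return float(max(0.0, 1.0 - deviation / tolerance_hz))
```

In a real-time aid, `correctness` would be evaluated frame by frame against the stored reference formants of the target word, and its value would drive the graphical animation shown to the child.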
Speech training for deaf and hearing-impaired people
Sixth European …, 1999
Ramón García Gómez (1), Ricardo López Barquilla (1), José Ignacio Puertas Tera (1), José Parera Bermudez (1), Marie-Christine Haton (2), Jean-Paul Haton (2), Pierre Alinat (4), Sofia Moreno (3), Wolfgang Hess (5), Ma Araceli Sanchez Raya (6), Eduardo Alberto Martínez ...
Principles of electronic speech processing with applications for people with disabilities
Technology and Disability, 2008
In the first part of this paper the principles and the state of the art of speech processing, especially speech synthesis and recognition, are explained. Then, a speech-based human-computer dialogue system is discussed. The next section gives a brief overview of the available recommendations, guidelines and standards that are directly related to the application of speech technologies. The last part of the paper is dedicated to applications of speech technology for the disabled. The main focus is on blind and partially sighted people and those with hearing loss. Concerning the blind, many multilingual, and some polyglot, text-to-speech synthesis systems exist that can convert printed and electronic documents to audio, but further research is needed before structured text, tables and above all graphics can be efficiently transformed into speech. For deaf persons, there are still big challenges in the development of adequate communication aids. Although a high-speed transformation of speech into text is possible with state-of-the-art speech recognizers (and thus a quasi real-time information transfer from a hearing to a deaf person), the automatic gesture recognition needed for the reverse transfer is still at the research stage. Other applications discussed in this paper include speech-based cursor control for those with physical disabilities, transformation of dysarthric speech into intelligible speech, voice output communication aids for the language impaired and those without speech, and accessibility options for public terminals and Automated Teller Machines through the incorporation of speech technologies. The paper concludes with an outlook and recommendations for research areas that need further study.
The Design of Speech Recoding Devices for the Deaf
1974
The paper reviews the present status of speech recoding (frequency transposition) devices and concludes that convincing evidence for the superiority of recoding devices over, for example, selective amplification, does not yet exist. A number of fundamental questions requiring answers are then outlined, upon which the design of some ‘ideal’ recoding device appears to be contingent. Some interim design principles are, however, proposed and a description is given of a recoding device designed with these principles in mind. Finally, some initial, encouraging results with the device are reported, and various questions relating to the utility of the device, requiring further investigation, are indicated.
Automatic Generation of Cued Speech for the Deaf: Status and Outlook
1998
Manual Cued Speech is a system of hand gestures designed to help deaf speechreaders distinguish among ambiguous speech elements. We have developed a computerized cueing system that uses automatic speech recognition to determine and display cues to the cue receiver. Keyword scores of 66% in low-context sentences have been obtained with this system, almost double the speechreading-alone score.
Advanced Speech Communication System for Deaf People
2010
This paper describes the development of an Advanced Speech Communication System for Deaf People and its field evaluation in a real application domain: the renewal of a Driver's License. The system is composed of two modules. The first one is a Spanish into Spanish Sign Language (LSE: Lengua de Signos Española) translation module made up of a speech recognizer, a natural language translator (for converting a word sequence into a sequence of signs), and a 3D avatar animation module (for playing back the signs). The second module is a Spoken Spanish generator from sign-writing, composed of a visual interface (for specifying a sequence of signs), a language translator (for generating the sequence of words in Spanish), and finally, a text-to-speech converter. For language translation, the system integrates three technologies: an example-based strategy, a rule-based translation method and a statistical translator. This paper also includes a detailed description of the evaluation carried out in the Local Traffic Office in the city of Toledo (Spain) involving real government employees and deaf people. This evaluation includes objective measurements from the system and subjective information from questionnaires.
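One way to combine the three translation technologies the abstract names is as a cascade, trying the most precise strategy first and falling back to broader ones. The sketch below illustrates that idea only; the vocabulary, the rule table, the fallback behavior, and the cascade order are all illustrative assumptions, not the paper's actual design or data.

```python
# Hypothetical example store: full sentences with known sign translations.
EXAMPLES = {
    "buenos dias": ["BUENOS-DIAS"],
}

# Hypothetical word-to-sign lexicon used by the rule-based stage.
LEXICON = {
    "quiero": "QUERER",
    "renovar": "RENOVAR",
    "mi": "MI",
    "carnet": "CARNET",
}

def example_based(sentence):
    """Exact-match lookup against a parallel example corpus."""
    return EXAMPLES.get(sentence)

def rule_based(sentence):
    """Translate word by word when every word is covered by the lexicon."""
    words = sentence.split()
    if all(w in LEXICON for w in words):
        return [LEXICON[w] for w in words]
    return None  # outside lexical coverage; defer to the next stage

def statistical_fallback(sentence):
    """Stand-in for a trained statistical translator: gloss words as-is."""
    return [w.upper() for w in sentence.split()]

def translate_to_signs(sentence):
    """Cascade the three strategies in order of expected precision."""
    for strategy in (example_based, rule_based, statistical_fallback):
        signs = strategy(sentence)
        if signs is not None:
            return signs
```

In the full system, the resulting sign sequence would be handed to the 3D avatar module for playback; the reverse path (visual sign interface to spoken Spanish) mirrors this pipeline in the other direction.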