Vibrotactile Captioning of Musical Effects in Audio-Visual Media as an Alternative for Deaf and Hard of Hearing People: An EEG Study

Limitations of Standard Accessible Captioning of Sounds and Music for Deaf and Hard of Hearing People: An EEG Study

Frontiers in Integrative Neuroscience

Captioning is the process of transcribing speech and acoustic information into text to help deaf and hard of hearing people access the audio track of audiovisual media. In addition to the verbal transcription, it includes information such as sound effects, speaker identification, or music tagging. However, it covers only a limited portion of the acoustic information available in the soundtrack, so a substantial amount of emotional information is lost when relying solely on standard-compliant captions. In this article, we show, by means of behavioral and EEG measurements, how emotional information conveyed through the sounds and music chosen by the creator of an audiovisual work is perceived differently by a normal-hearing group and a hearing-impaired group when standard captioning is applied. Audio and captions activate similar processing areas in their respective groups, although not with the same intensity. Moreover, captions require greater activation of voluntary attentional circuits, as well as language-related areas, and captions transcribing musical information increase attentional activity rather than emotional processing.

Auris System: Providing Vibrotactile Feedback for Hearing Impaired Population

BioMed Research International

Deafness, an impairment that affects millions of people around the globe, manifests at different intensities and stems from many causes. It negatively affects several aspects of deaf people's social lives, and music-centered situations (concerts, religious events, etc.) are often unwelcoming for them. The Auris System was conceived to provide a musical experience for people with some type of hearing loss. The system extracts musical information from audio and creates a representation of music pieces using different stimuli: a new media format to be interpreted by senses other than hearing. In addition, the system defines a testing methodology based on noninvasive brain-activity recording with an electroencephalographic (EEG) device. The test results are being used to better understand human musical cognition and thereby improve the accuracy of the Auris musical representation.
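To make the kind of musical-information extraction described for the Auris System concrete, the sketch below computes a short-time amplitude envelope from a mono signal and maps it to integer drive levels for a vibration motor. This is a minimal illustration, not the Auris pipeline itself: the function names, the window size, and the 8-bit motor scale are assumptions introduced here.

```python
import math

def rms_envelope(samples, window=1024):
    """Short-time RMS envelope of a mono audio signal."""
    env = []
    for start in range(0, len(samples), window):
        frame = samples[start:start + window]
        env.append(math.sqrt(sum(s * s for s in frame) / len(frame)))
    return env

def to_motor_levels(envelope, max_level=255):
    """Map an envelope to integer drive levels for a vibration motor,
    normalized so the loudest frame reaches full intensity."""
    peak = max(envelope) or 1.0
    return [round(max_level * e / peak) for e in envelope]

# A 440 Hz tone that fades out over one second, sampled at 8 kHz.
sr = 8000
tone = [math.sin(2 * math.pi * 440 * n / sr) * (1 - n / sr) for n in range(sr)]
levels = to_motor_levels(rms_envelope(tone))
```

Because the tone decays, the drive levels start at full intensity and fall off, so a wearer would feel the fade-out as a weakening vibration.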

Emotion Elicitation through Vibrotactile Stimulation as an Alternative for Deaf and Hard of Hearing People: An EEG Study

Electronics

Despite technological and accessibility advances, the performing arts and their cultural offerings remain inaccessible to many people. Using vibrotactile stimulation as an alternative channel, we explored a different way to enhance the emotional processes produced while watching audiovisual media and thus elicit a greater emotional reaction in hearing-impaired people. We recorded the brain activity of 35 participants with normal hearing and 8 participants with severe or total hearing loss. The results showed activation of the same areas both in participants with normal hearing watching a video and in hearing-impaired participants watching the same video with synchronized soft vibrotactile stimulation delivered to both hands through a proprietary stimulation glove. These brain areas (bilateral middle frontal gyrus, orbital part; bilateral superior frontal gyrus; and left cingulum) have been reported as emotional and attentional areas. We conclude that vibrotactile stimulation can elicit emotional reactions in hearing-impaired viewers.

Interactive Performance for Musicians with a Hearing Impairment

How can we perceive music if we cannot hear it properly? The achievements of deaf musicians suggest it is possible not only to perceive music, but to perform with other musicians. Yet very little research exists to explain how this is possible. This thesis addresses this problem and explores the premise that vibrations felt on the skin may facilitate interactive music making. An initial interview study found that, while vibrations are sometimes perceived, it is predominantly visual and physical cues that players rely on in group performance to stay in time and in tune with one another. The findings informed the design of two observation studies exploring the effects of i) artificial attenuation of auditory information and ii) natural deafness on performance behaviours. It was shown that profound congenital deafness affected the players' movements and their gazes/glances towards each other, while mild or moderate levels of attenuation or deafness did not. Nonetheless, all players, regardless of hearing level, reciprocated the behaviours of co-performers, suggesting the influence of social factors benefitting verbal and non-verbal communication between players. Finally, a series of three psychophysical experiments was designed to explore the perception of pitch on the skin using vibrations. The first study found that vibrotactile detection thresholds were not affected by hearing impairments. The second established that the relative pitches of intervals larger than a major 6th were easy to discriminate, but this was not possible for semitones. The third showed that tones an octave apart could be memorised and identified accurately, but were confused when less than a perfect 4th apart. The thesis concludes by evaluating the potential of vibrotactile technology to facilitate interactive performance for musicians with hearing impairments.
By considering the psychophysical, behavioural and qualitative data together, it is suggested that signal processing strategies in vibrotactile technology should take social, cognitive and perceptual factors into account.
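The interval findings above can be grounded in the equal-temperament relation, where a tone n semitones above a base frequency has frequency f = f0 · 2^(n/12). The sketch below computes the frequencies a vibrotactile display would need to render the intervals the thesis tested; the 110 Hz base is an assumption chosen for illustration because skin sensitivity to vibration is strongest at low frequencies, and the thesis does not prescribe these values.

```python
def interval_freq(base_hz, semitones):
    """Equal-tempered frequency a given number of semitones above base_hz."""
    return base_hz * 2 ** (semitones / 12)

base = 110.0                          # illustrative base tone (A2)
semitone = interval_freq(base, 1)     # hard to discriminate on the skin
perfect_fourth = interval_freq(base, 5)
major_sixth = interval_freq(base, 9)  # intervals this wide were discriminable
octave = interval_freq(base, 12)      # reliably memorised and identified
```

Note how small the physical difference is for a semitone (about 116.5 Hz vs. 110 Hz here), which is consistent with the finding that semitone discrimination through vibration alone was not possible.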

Music to My Eyes... Conveying Music in Subtitling for the Deaf and the Hard of Hearing

Perspectives on Audiovisual Translation, 2010

Anybody watching the opening and closing ceremonies of the 2008 Beijing Paralympic games or a concert by the percussionist Evelyn Glennie, the Japanese pop star Ayumi Hamasaki or by the opera singer Janine Roebuck will agree that music can be and is part of the lives of many deaf people around the world. A rapid incursion into the Deaf world will show that music is far more important in the lives of deaf people than it is given credit for. Music can be "heard" in numerous ways and medical and sociological reports prove that the world of the deaf is all but silent. It is far more vibrant than that of many hearing people because sound is perceived in intensity through all the senses in amplified versions of what hearers take in mainly through their auditory apparatus. Laborit (1998: 17) describes her kinaesthetic view of music in the following way: "Music is a rainbow of vibrant colours. It's a language beyond words. It's universal. The most beautiful form of art that exists. It's capable of making the human body physically vibrate." The "auditory nature" of such vibrations has been scientifically proven by assistant professor of radiology at the University of Washington, Dr. Dean Shibata, who testifies (in University of Washington 2001) that:

Composing vibrotactile music: A multi-sensory experience with the emoti-chair

2012 IEEE Haptics Symposium (HAPTICS), 2012

The Emoti-Chair is a novel technology to enhance entertainment through vibrotactile stimulation. We assessed the experience of this technology in two workshops. In the first workshop, deaf film-makers experimented with creating vibetracks for a movie clip using professional movie-editing software. In the second workshop, trained opera singers sang and 'felt' their voices through the Emoti-Chair. Participants in both workshops generally found the overall experience exciting and were motivated to use the chair for upcoming projects.

Designing the model human cochlea: An ambient crossmodal audio-tactile display

2009

We present a Model Human Cochlea (MHC), a sensory substitution technique and system that translates auditory information into vibrotactile stimuli using an ambient, tactile display. The model is used in the current study to translate music into discrete vibration signals displayed along the back of the body using a chair form factor. Voice coils facilitate the direct translation of auditory information onto the multiple discrete vibrotactile channels, which increases the potential to identify sections of the music that would otherwise be masked by the combined signal. One of the central goals of this work has been to improve accessibility to the emotional information expressed in music for users who are Deaf or hard of hearing. To this end, we present our prototype of the MHC, two models of sensory substitution to support the translation of existing and new music, and some of the design challenges encountered throughout the development process. Results of a series of experiments conducted to assess the effectiveness of the MHC are discussed, followed by an overview of future directions for this research.
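The MHC's central idea of splitting the audio spectrum across discrete vibrotactile channels can be sketched as follows: each output channel (one voice coil) receives the energy of one frequency band from a short audio frame. This is a minimal sketch under stated assumptions, not the published implementation; the log-spaced 50-1000 Hz band layout, the channel count, and the names `band_energies`, `f_lo`, and `f_hi` are all illustrative.

```python
import cmath
import math

def band_energies(samples, sr, n_channels=4, f_lo=50.0, f_hi=1000.0):
    """Split a short mono frame into per-channel band energies.

    Each channel covers one log-spaced frequency band, mimicking the
    mapping of the audio spectrum onto discrete coils along the back.
    """
    n = len(samples)
    # Log-spaced band edges between f_lo and f_hi.
    edges = [f_lo * (f_hi / f_lo) ** (i / n_channels) for i in range(n_channels + 1)]
    energies = [0.0] * n_channels
    for k in range(1, n // 2):
        freq = k * sr / n
        for c in range(n_channels):
            if edges[c] <= freq < edges[c + 1]:
                # DFT magnitude at bin k, accumulated into the band's channel.
                x = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
                energies[c] += abs(x) ** 2
                break
    return energies

# A pure 200 Hz tone should drive mainly one channel.
frame = [math.sin(2 * math.pi * 200 * t / 2000) for t in range(200)]
energies = band_energies(frame, sr=2000)
```

Because a single tone concentrates its energy in one band, a listener feeling the chair would perceive that instrument at one location on the back, which is the masking-reduction effect the abstract describes.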

Enhancing Musical Experience for the Hearing-Impaired Using Visual and Haptic Displays

Human-computer Interaction, 2012

This article addresses the broad question of understanding whether and how a combination of tactile and visual information could be used to enhance the experience of music by the hearing impaired. Initially, a background survey was conducted with hearing-impaired people to find out the techniques they used to "listen" to music and how their listening experience might be enhanced. Information obtained from this survey and feedback received from two profoundly deaf musicians were used to guide the initial concept of exploring haptic and visual channels to augment a musical experience.