Optimal Audiovisual Integration in People with One Eye
Related papers
Sensory compensation in sound localization in people with one eye
Experimental Brain Research, 2012
Some blind people are better at localizing sounds than people with normal vision, indicating cross-modal plasticity. People who have lost one eye have a unique form of visual deprivation that halves visual afferent signals and can potentially also lead to cross-modal (as well as intra-modal) plasticity. To look for evidence of auditory-visual cross-modal compensation, we measured binaural and monaural sound localization in one-eyed people and compared them with normally sighted controls. One-eyed people showed significantly better binaural sound localization than controls in the central region of space (±78° from straight ahead), but they mislocalized sounds in the far periphery (on both the blind and intact side) by up to 15° towards the centre. One-eyed people also showed significantly better monaural sound localization than controls. Controls' performance became asymmetric when one eye was patched: patching improved accuracy in the viewing field but decreased accuracy in the occluded field. These results are discussed in terms of cross-modal sensory compensation and the possible contribution of visual depth to the interpretation of sound localization cues.
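To make the reported pattern concrete, here is a minimal sketch (not the authors' analysis pipeline; data and variable names are hypothetical) of how signed localization error can be split into accuracy and a towards-the-centre bias:

```python
import numpy as np

# Hypothetical data: true speaker azimuths and one listener's pointing responses (degrees).
true_az = np.array([-78, -60, -30, 0, 30, 60, 78], dtype=float)
responses = np.array([-64, -52, -28, 1, 27, 50, 63], dtype=float)

error = responses - true_az            # signed error per location
abs_error = np.abs(error)              # accuracy (unsigned error)

# A "towards-the-centre" bias has sign opposite to the target's side:
# positive error for left (-) targets, negative error for right (+) targets.
central_bias = -np.sign(true_az) * error
for az, e, b in zip(true_az, error, central_bias):
    print(f"target {az:+5.0f} deg: error {e:+5.1f} deg, centripetal shift {b:+5.1f} deg")
```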
No Colavita effect: equal auditory and visual processing in people with one eye
Experimental Brain Research, 2012
Previous research has shown that people with one eye have enhanced spatial vision, implying intra-modal compensation for their loss of binocularity. The current experiments investigate whether monocular blindness from unilateral eye enucleation may also lead to cross-modal sensory compensation for the loss of one eye. We measured speeded detection and discrimination of audiovisual targets, presented as a stream of paired objects and familiar sounds, in a group of individuals with monocular enucleation compared to controls viewing binocularly or monocularly. In Experiment 1, participants detected the presence of auditory, visual or audiovisual targets. All participant groups were equally able to detect the targets. In Experiment 2, participants discriminated between the visual, auditory or bimodal (audiovisual) targets. Both control groups showed the Colavita effect, that is, preferential processing of visual over auditory information for the bimodal stimuli. The monocular enucleation group, however, showed no Colavita effect and instead processed visual and auditory stimuli equally. This finding suggests a lack of visual dominance and equivalent auditory and visual processing in people with one eye. This may be an adaptive form of sensory compensation for the loss of one eye and could result from recruitment of deafferented visual cortical areas by inputs from other senses.
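As context for how a Colavita effect is typically quantified, here is a minimal sketch under hypothetical response data: the effect appears as more visual-only than auditory-only errors on bimodal trials, and its absence as roughly equal error rates.

```python
# Hypothetical response labels for bimodal (audiovisual) trials only.
# Correct response is "AV"; "V" and "A" are unimodal errors.
bimodal_responses = ["AV", "V", "AV", "V", "A", "AV", "V", "AV", "AV", "V"]

v_errors = bimodal_responses.count("V")   # visual-only responses to AV targets
a_errors = bimodal_responses.count("A")   # auditory-only responses to AV targets

# Visual dominance (the Colavita effect) shows up as v_errors > a_errors;
# equal error rates, as reported for the enucleation group, imply no dominance.
print(f"visual-only errors: {v_errors}, auditory-only errors: {a_errors}")
print("Colavita-style visual dominance" if v_errors > a_errors else "no visual dominance")
```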
How Does Experience Modulate Auditory Spatial Processing in Individuals with Blindness?
Brain Topography, 2013
Comparing early- and late-onset blindness offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience modulates auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while they performed a sound localization task. The task required participants to listen to novel "Bat-ears" sounds, analyze the spatial information embedded in the sounds, and specify from which of 15 locations the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. BOLD responses in the left superior frontal gyrus were significantly correlated with sound localization accuracy and visuospatial working memory abilities among the late-onset blind participants. In contrast, sound localization accuracy correlated only with BOLD responses in the right middle occipital gyrus among their early-onset counterparts. The findings support the notion that early-onset blind individuals rely more on the occipital areas, as a result of cross-modal plasticity, for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas, which subserve visuospatial working memory.
Late development of audio-visual integration in the vertical plane
Current Research in Behavioral Sciences, 2021
It is not clear how multisensory skills develop and how visual experience impacts multisensory spatial development. Conflicting results show that visual calibration precedes multisensory integration in the audiovisual spatial bisection task (Gori et al., 2012a, 2012b), while in other tasks, such as spatial localization, visual calibration occurs after multisensory development (Rohlf et al., 2020). Results in blind individuals can shed light on the role of vision in perceptual development. Scientific evidence shows that blind individuals are impaired at bisecting auditory space (Gori et al., 2014) but not at localizing auditory sources (Lessard et al., 1998). Such results suggest that sensory calibration and impairment are linked. To address this hypothesis, we studied the development of audiovisual multisensory localization in the vertical plane in sighted individuals from 5 years of age to adulthood. We hypothesized that typical children would show late audiovisual integration in the vertical plane, preceded by visual dominance. Unimodal and bimodal audiovisual thresholds and points of subjective equality (PSEs) were measured and compared with the predictions of the Bayesian optimal-integration model (maximum likelihood estimation). Results show that multisensory integration in the vertical plane is not yet evident at 5 years, suggesting visual dominance for vertical audiovisual localization. These results support the idea that multisensory perception in the vertical domain depends on sensory calibration. We discuss these results, proposing that the process of cross-sensory calibration is task-specific and highlighting the importance of linking impairment and development to better determine how the brain works.
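The maximum-likelihood (optimal-integration) model mentioned here makes two standard predictions from the unimodal thresholds: each cue is weighted by its reliability, and the bimodal threshold falls below both unimodal thresholds. A minimal sketch of those textbook equations (the thresholds below are hypothetical, not the study's data):

```python
import math

def mle_predictions(sigma_a: float, sigma_v: float):
    """Standard MLE cue-combination predictions from unimodal thresholds.

    sigma_a, sigma_v: auditory and visual discrimination thresholds (e.g. degrees).
    Returns the predicted visual weight and bimodal (audiovisual) threshold.
    """
    w_v = sigma_a**2 / (sigma_a**2 + sigma_v**2)          # reliability-based visual weight
    sigma_av = math.sqrt((sigma_a**2 * sigma_v**2) /
                         (sigma_a**2 + sigma_v**2))       # predicted bimodal threshold
    return w_v, sigma_av

# Hypothetical unimodal thresholds for vertical localization:
w_v, sigma_av = mle_predictions(sigma_a=8.0, sigma_v=3.0)
print(f"visual weight = {w_v:.2f}, predicted AV threshold = {sigma_av:.2f} deg")
# Visual dominance (as in the 5-year-olds) would appear as a measured AV
# threshold tracking sigma_v rather than this lower optimal prediction.
```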
Development of multisensory spatial integration and perception in humans
Developmental Science, 2006
Previous studies have shown that adults respond faster and more reliably to bimodal than to unimodal localization cues. The current study investigated for the first time the development of audiovisual (A-V) integration in spatial localization behavior in infants between 1 and 10 months of age. We observed infants' head and eye movements in response to auditory, visual, or combined stimuli presented either 25° or 45° to the right or left of midline. Infants under 8 months of age intermittently showed response latencies significantly faster toward audiovisual targets than toward either auditory or visual targets alone. They did so, however, without exhibiting a reliable violation of the Race Model, suggesting that probability summation alone could explain the faster bimodal responses. In contrast, infants between 8 and 10 months of age exhibited bimodal response latencies significantly faster than unimodal latencies at both eccentricities, and their latencies violated the Race Model at 25° eccentricity. In addition to this main finding, we found age-dependent eccentricity and modality effects on response latencies. Together, these findings suggest that audiovisual integration emerges late in the first year of life and are consistent with neurophysiological findings from multisensory sites in the superior colliculus of infant monkeys showing that multisensory enhancement of responsiveness is not present at birth but emerges later in life.
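The Race Model test referred to here is Miller's (1982) inequality: under probability summation alone, the bimodal cumulative RT distribution can never exceed the sum of the two unimodal ones, P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t). A minimal sketch of the check, on hypothetical latencies:

```python
import numpy as np

def ecdf(samples, t_grid):
    """Empirical CDF of reaction times evaluated on a grid."""
    samples = np.sort(np.asarray(samples, dtype=float))
    return np.searchsorted(samples, t_grid, side="right") / len(samples)

# Hypothetical orienting latencies (ms) for each condition.
rt_a  = [520, 560, 600, 640, 700, 750]
rt_v  = [500, 540, 580, 630, 690, 740]
rt_av = [380, 410, 430, 460, 500, 560]

t = np.linspace(350, 800, 200)
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)   # race-model (Miller) bound
violation = ecdf(rt_av, t) - bound                        # positive -> violation

print(f"max violation: {violation.max():.3f} at t = {t[violation.argmax()]:.0f} ms")
# A positive maximum indicates bimodal responses faster than any race of
# independent unimodal processes could produce, i.e. genuine integration.
```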
Spatial auditory representation in the case of the visually impaired people
Over the years, it has been widely believed that blind individuals possess enhanced sound localization abilities that help them navigate and orient in space in the absence of visual stimuli. It has also been argued that visually impaired people develop increased capacities in the remaining senses (auditory skills, in particular) that exceed those of normally sighted individuals. This paper presents and compares the most notable sound localization experiments involving both blind participants and sighted controls. As these studies yielded differing outcomes, they are classified into experiments that show better localization performance for the blind participants and, on the other hand, experiments that found equal or worse localization accuracy for the visually impaired subjects. The underlying purpose of our research is to understand the manner and degree to which the presence or absence of visual stimuli affects spatial auditory resolution for each of the two target groups.
Audiovisual integration in depth: multisensory binding and gain as a function of distance
Experimental brain research, 2018
The integration of information across sensory modalities is dependent on the spatiotemporal characteristics of the stimuli that are paired. Despite large variation in the distance over which events occur in our environment, relatively little is known regarding how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we sought to examine the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and redundant target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies). In line with the previous findings, temporal acuity was poorer in near relative to far space. Furth...
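As background on the simultaneity judgment measure used in such studies, temporal acuity is commonly summarized by fitting a Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies and taking the fitted width as the temporal binding window; poorer acuity in near space would show up as a wider fit. A minimal sketch with hypothetical data, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_gaussian(soa, amp, mu, sigma):
    """Proportion of 'simultaneous' responses modeled as a Gaussian over SOA."""
    return amp * np.exp(-(soa - mu) ** 2 / (2 * sigma ** 2))

# Hypothetical SOAs (ms; negative = auditory first) and response proportions.
soas = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_simult_near = np.array([0.10, 0.30, 0.70, 0.90, 0.95, 0.88, 0.72, 0.35, 0.12])
p_simult_far  = np.array([0.05, 0.15, 0.55, 0.85, 0.95, 0.82, 0.50, 0.18, 0.06])

for label, p in [("near", p_simult_near), ("far", p_simult_far)]:
    (amp, mu, sigma), _ = curve_fit(sj_gaussian, soas, p, p0=[1.0, 0.0, 100.0])
    print(f"{label} space: window width (sigma) = {sigma:.0f} ms, centre = {mu:+.0f} ms")
# Poorer temporal acuity in near space appears as a larger fitted sigma.
```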
Audiovisual plasticity following early abnormal visual experience: Reduced McGurk effect in people with one eye
Neuroscience Letters, 2018
Highlights
- The McGurk effect is a popular tool for studying multisensory integration.
- People with one eye process audiovisual stimuli differently from binocular controls.
- People with one eye do not perceive the McGurk effect, unlike controls.
- Sensory systems of people with one eye adaptively accommodate perception.
- Evidence of neural plasticity after the loss of an eye early in life.

Abstract
Previously, we have shown that people who have had one eye surgically removed early in life during visual development have enhanced sound localization [1] and lack the visual dominance commonly observed in binocular and monocular (eye-patched) viewing controls [2]. Despite these changes, people with one eye integrate auditory and visual...
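For context on how the McGurk effect is typically measured: an auditory /ba/ dubbed onto a visual /ga/ is reported as a fused percept (/da/ or /tha/) when audiovisual speech integration occurs, so the fusion rate on such trials indexes the illusion. A minimal sketch with hypothetical response data:

```python
# Hypothetical per-trial percepts on McGurk trials (auditory /ba/ + visual /ga/).
controls = ["da", "da", "ba", "da", "da", "tha", "da", "ba", "da", "da"]
one_eyed = ["ba", "ba", "ba", "da", "ba", "ba", "ba", "ba", "da", "ba"]

def fusion_rate(percepts):
    """Proportion of fused/illusory percepts (/da/ or /tha/) on McGurk trials."""
    return sum(p in ("da", "tha") for p in percepts) / len(percepts)

# A reduced fusion rate indicates weaker reliance on the visual stream;
# reporting the auditory syllable (/ba/) means the illusion did not occur.
print(f"controls: {fusion_rate(controls):.0%} fused, one-eyed: {fusion_rate(one_eyed):.0%} fused")
```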