Claudio Campus | Istituto Italiano di Tecnologia / Italian Institute of Technology
Papers by Claudio Campus
2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Nov 1, 2021
The present work introduces a novel robotic platform for investigating perception in multisensory motion tasks in individuals with and without sensory and motor disabilities. The system, called RoMAT, allows the study of how multisensory signals are integrated, taking into account the speed and direction of the stimuli. It is a robotic platform composed of a visual and a tactile wheel mounted on two rotatable plates, positioned under the participant's finger and in their line of sight. We validated the system by implementing a rotation discrimination task under three conditions: vision alone, touch alone, and combined visual-tactile stimulation. Four healthy subjects were asked to report the extent of the rotation after perceiving a moving stimulus generated by the visual, the tactile, or both stimuli. Results suggest that precision improves when both sensory stimulations are presented. The new system can therefore provide fundamental input for determining the perceptual principles of motion processing, and this device is a potential tool for designing screening and rehabilitation protocols, based on neuroscientific findings, for individuals with visual and motor impairments. Clinical relevance: this research presents a novel robotic motion simulator that delivers combined or independent stimulation of the visual and tactile sensory signals.
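The precision benefit reported above is usually interpreted within the maximum-likelihood (MLE) cue-combination framework, in which the visuo-tactile estimate is a reliability-weighted average of the two single-cue estimates. The abstract does not state that this specific analysis was performed, so the following Python sketch only illustrates the standard prediction, using made-up single-cue thresholds rather than values from the paper.

    import numpy as np

    # Hypothetical single-cue discrimination thresholds (standard deviations),
    # not values from the paper.
    sigma_v = 8.0   # visual threshold (deg of rotation)
    sigma_t = 10.0  # tactile threshold (deg of rotation)

    # MLE prediction: the combined visuo-tactile threshold is lower than either
    # single-cue threshold: sigma_vt^2 = sigma_v^2 * sigma_t^2 / (sigma_v^2 + sigma_t^2).
    sigma_vt = np.sqrt((sigma_v**2 * sigma_t**2) / (sigma_v**2 + sigma_t**2))

    # Optimal weights assigned to each cue in the combined estimate.
    w_v = sigma_t**2 / (sigma_v**2 + sigma_t**2)
    w_t = 1.0 - w_v

    print(f"predicted visuo-tactile threshold: {sigma_vt:.2f} deg")
    print(f"visual weight: {w_v:.2f}, tactile weight: {w_t:.2f}")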
Neuropsychologia, Nov 1, 2022
Zenodo (CERN European Organization for Nuclear Research), Sep 5, 2020
Echolocation is a perceptual and navigational skill that can be acquired by some individuals. For blind people, this skill can help them "see" the surrounding environment through a new form of auditory information based on echoes. Expert human echolocators benefit from this technique not only in controlled environments but also in their everyday lives. In the current study, we investigated the effect of echolocation on blind people's auditory spatial abilities at the cortical level. Using an auditory spatial bisection task, we tested early blind individuals and early blind expert echolocators, along with sighted people. Our results showed similar early activation (50-90 ms) in the posterior area of the scalp for both early blind expert echolocators and sighted participants, but not in the early blind group. This activation was related to the sound stimulation and was contralateral to the position of the sound in space. These findings indicate that echolocation is a good substitute for the visual modality, enabling the development of auditory spatial representations when vision is not available.
Vestibular input is required for accurate locomotion in the dark, yet vestibular function in blind subjects is unexplored. Such an investigation may also identify visually dependent aspects of vestibular function. We assessed vestibular function perceptually in six congenitally blind (and 12 sighted) subjects. Cupula deflection by a transient angular, horizontal acceleration generates a related vestibular nerve signal that declines exponentially with a time constant of approximately 4-7 s, which is prolonged to about 15 s in the evoked vestibulo-ocular reflex by the brain stem "velocity storage". We measured perceptual velocity storage in blind subjects following velocity steps (overall perceptual vestibular time constant, experiment 1) and found it to be significantly shorter (5.34 s; range: 2.39-8.58 s) than in control, sighted subjects (15.8 s; P < 0.001). Vestibular navigation was assessed by having subjects steer a motorized Bárány chair in response to imposed angular displacements in a path-reversal task, "go-back-to-start" (GBS; experiment 2), and a path-completion task, "complete-the-circle" (CTC; experiment 3). GBS performance (comparing response vs. stimulus displacement regression slopes and r²) was equal between groups (P > 0.05), but the blind showed worse CTC performance (P < 0.05). Two blind individuals showed ultrashort perceptual time constants, high lifetime physical activity scores, and superior CTC performance; we speculate that these factors may be interrelated. In summary, vestibular velocity storage as measured perceptually is visually dependent. Early blindness does not affect path-reversal performance but is associated with worse path completion, a task requiring an absolute spatial strategy. Although congenitally blind subjects are overall less able to utilize spatial mechanisms during vestibular navigation, prior extensive physical spatial activity may enhance vestibular navigation.
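The perceptual time constant discussed above describes a vestibular signal that decays exponentially after a velocity step; estimating it amounts to fitting v(t) = v0 * exp(-t / tau) to the perceptual responses. The snippet below is only a generic illustration of such a fit on simulated data, not a reconstruction of the study's analysis.

    import numpy as np
    from scipy.optimize import curve_fit

    # Simulated perceived-velocity decay after a velocity step (not study data).
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 40.0, 200)     # time after the step (s)
    true_tau = 5.3                      # illustrative time constant (s)
    v = 60.0 * np.exp(-t / true_tau) + rng.normal(0.0, 1.0, t.size)

    def decay(t, v0, tau):
        # Exponential decay of the perceptual velocity signal.
        return v0 * np.exp(-t / tau)

    (v0_hat, tau_hat), _ = curve_fit(decay, t, v, p0=(50.0, 10.0))
    print(f"estimated perceptual time constant: {tau_hat:.2f} s")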
Frontiers in Human Neuroscience, Mar 15, 2023
Conclusion: This study supports the existence of a specific link between the magnitude of activation of motor and sensorimotor areas related to upper-limb proprioceptive processing and proprioceptive acuity at the joints.
Frontiers in Neuroscience, Sep 29, 2020
The human brain uses perceptual information to create a correct representation of the external world. Converging data indicate that the perceptual processing of space and quantities is frequently based on a shared mental magnitude system, in which low and high quantities are represented in the left and right space, respectively. The present study explores how magnitude affects spatial representation in the tactile modality. We investigated these processes using stimulus-response (S-R) compatibility tasks, i.e., sensorimotor tasks in which an association (or dissociation) between the perceived stimulus and the required action generally increases (or decreases) accuracy and decreases (or increases) the subject's reaction times. In our study, participants performed a discrimination task between high- and low-frequency vibrotactile stimuli, regardless of the spatial position of the stimulation. When the response code was incompatible with the mental magnitude line (i.e., left button for high-frequency and right button for low-frequency responses), we found that participants bypassed spatial congruence, showing a magnitude S-R compatibility effect. We call this phenomenon the Spatial-Tactile Association of Response Codes (STARC) effect. Moreover, we observed that the STARC effect is embodied in an internal frame of reference. Indeed, participants' performance reversed between the uncrossed- and crossed-hands postures, suggesting that spatial reference frames play a role in the process of expressing mental magnitude, at least within the tactile modality.
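Compatibility effects of this kind are typically quantified as the difference in reaction time (and accuracy) between compatible and incompatible stimulus-response mappings. The short sketch below shows that computation on made-up reaction times; it is not the study's dataset or analysis pipeline.

    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up reaction times (ms): responses tend to be faster when the response
    # side matches the stimulus magnitude (low frequency-left, high frequency-right).
    rt_compatible = rng.normal(480.0, 40.0, 200)
    rt_incompatible = rng.normal(520.0, 40.0, 200)

    # A positive difference indicates a magnitude-space compatibility (STARC-like) effect.
    # With crossed hands, the same computation would be run after relabelling the
    # buttons according to their position in external space.
    effect_ms = rt_incompatible.mean() - rt_compatible.mean()
    print(f"compatibility effect: {effect_ms:.1f} ms")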
Journal of Experimental Child Psychology, Nov 1, 2021
The ability to encode space is a crucial aspect of interacting with the external world and appears to be fundamental for the correct development of the capacity to integrate different spatial reference frames. Spatial reference frames seem to be present in all the sensory modalities. However, it has been demonstrated that different sensory modalities follow different developmental courses. Nevertheless, to date these courses have been investigated only in people with sensory impairments, where a possible bias arises from compensatory strategies and it is complicated to assess the exact age at which these skills emerge. For these reasons, we investigated the development of the allocentric frame in the auditory domain in a group of typically developing children aged 6-10 years. To do so, we used an auditory Simon task, a paradigm that involves implicit spatial processing, and we asked children to perform the task in both the uncrossed and crossed hands postures. We demonstrated that the crossed hands posture affected performance only in younger children (6-7 years), whereas at 10 years of age children performed as adults and were not affected by the posture. Moreover, we found that performance on this task correlated with age and with developmental differences in spatial abilities. Our results support the hypothesis that the developmental course of auditory spatial cognition is similar to that of the visual modality reported in the literature.
Innovative research in the fields of prosthetics, neurorehabilitation, motor control, and human physiology has been focusing on the study of proprioception, the sense through which we perceive the position and movement of our body, and great achievements have been obtained regarding its assessment and characterization. However, how proprioceptive signals are combined with other sensory modalities and processed by the central nervous system to form a conscious body image is still unknown. This crucial question was addressed in this study, which involved 23 healthy subjects, by combining a robot-based proprioceptive test with a specific analysis of electroencephalographic (EEG) activity in the μ frequency band (8-12 Hz). We observed marked activation in the motor area contralateral to the moving hand and, in addition, a substantial bias in brain activation and proprioceptive acuity when visual feedback was provided alongside the proprioceptive information during movement execution. In detail, brain activation and proprioceptive acuity were both higher for movements performed with visual feedback. Remarkably, we also found a correlation between the level of activation in the brain motor area contralateral to the moving hand and the value of proprioceptive acuity.
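The μ-band analysis mentioned above boils down to estimating EEG power in the 8-12 Hz range and relating it to a behavioural measure of proprioceptive acuity. The sketch below illustrates that generic computation on synthetic signals; the sampling rate, signal lengths, and scores are assumptions, not the authors' pipeline.

    import numpy as np
    from scipy.signal import welch
    from scipy.stats import pearsonr

    fs = 256  # assumed EEG sampling rate (Hz)
    rng = np.random.default_rng(2)

    def mu_band_power(eeg, fs, band=(8.0, 12.0)):
        # Welch power spectral density, then mean power inside the mu band.
        freqs, pxx = welch(eeg, fs=fs, nperseg=2 * fs)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return pxx[mask].mean()

    # Synthetic data: one EEG trace and one proprioceptive acuity score per subject.
    n_subjects = 23
    power = np.array([mu_band_power(rng.normal(size=30 * fs), fs) for _ in range(n_subjects)])
    acuity = rng.normal(size=n_subjects)

    r, p = pearsonr(power, acuity)
    print(f"mu-band power vs proprioceptive acuity: r = {r:.2f}, p = {p:.3f}")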
PLOS ONE, Mar 8, 2023
Our brain constantly combines sensory information into unitary percepts to build coherent representations of the environment. Even though this process may appear smooth, integrating sensory inputs from different sensory modalities must overcome several computational issues, such as recoding and statistical inference problems. Following these assumptions, we developed a neural architecture replicating humans' ability to use audiovisual spatial representations. We considered the well-known ventriloquist illusion as a benchmark to evaluate its phenomenological plausibility. Our model closely replicated human perceptual behavior, providing a faithful approximation of the brain's ability to develop audiovisual spatial representations. Given its ability to model audiovisual performance in a spatial localization task, we release our model together with the dataset we recorded for its validation. We believe it will be a powerful tool to model and better understand multisensory integration processes in experimental and rehabilitation settings.
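A common baseline account of the ventriloquist effect is reliability-weighted audiovisual fusion, in which the more precise visual estimate "captures" the auditory one. The authors' neural architecture is not reproduced here; the lines below only sketch this textbook weighting with hypothetical noise levels.

    # Hypothetical localization noise (deg): vision is more reliable than audition.
    sigma_v, sigma_a = 2.0, 10.0
    x_v, x_a = 0.0, 8.0  # visual and auditory source positions (deg)

    # Reliability-weighted fusion: the perceived sound is pulled toward the visual stimulus.
    w_v = (1.0 / sigma_v**2) / (1.0 / sigma_v**2 + 1.0 / sigma_a**2)
    x_hat = w_v * x_v + (1.0 - w_v) * x_a
    print(f"perceived auditory position: {x_hat:.2f} deg (shifted toward vision)")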
Perception, 2016
Translational Psychiatry, Aug 12, 2022
It has been widely demonstrated that time processing is altered in patients with schizophrenia. This perspective review delves into this temporal deficit and highlights its link to low-level sensory alterations, which are often overlooked in rehabilitation protocols for psychosis. If temporal impairment at the sensory level is inherent to the disease, however, new interventions should focus on this dimension. Beyond more traditional types of intervention, here we review the most recent digital technologies for rehabilitation and the most promising ones for sensory training. The overall aim is to synthesise the existing literature on time in schizophrenia, linking psychopathology, psychophysics, and technology to guide future developments.
Scientific Reports, Nov 9, 2022
It is evident that the brain is capable of large-scale reorganization following sensory deprivation, but the extent of such reorganization is, to date, not clear. The auditory modality is the most accurate for representing temporal information, and deafness is an ideal clinical condition in which to study the reorganization of temporal representation when the auditory signal is not available. Here we show that hearing, but not deaf, individuals show a strong ERP response to visual stimuli in temporal areas during a time-bisection task. This ERP response appears 50-90 ms after the flash and recalls some aspects of the N1 ERP component usually elicited by auditory stimuli. The same ERP is not evident for a visual space-bisection task, suggesting that the early recruitment of temporal cortex is specific to building a highly resolved temporal representation within the visual modality. These findings provide evidence that the lack of auditory input can interfere with the typical development of complex visual temporal representations. It is not clear to date which principles underlie the cortical organization of our brain. If we consider blindness, for example, in some cases visual regions of the brain reorganize, being recruited by auditory and tactile sensory inputs (e.g. 1-3). This cross-sensory recruitment has been associated with the improvement of some auditory and tactile skills of blind individuals. However, we have recently shown that this reorganization does not occur for the auditory space-bisection task, for which the visual cortex processes auditory spatial signals in sighted but not in blind individuals (4). A possible explanation for this result is that visual experience is crucial for developing some spatial properties, and when it is not available visuospatial cortical processing cannot develop properly. Some previous studies supported a sensory-independent, supramodal organization of the visual cortex (see 5,6), suggesting that the supramodal principle might extend to other sensory regions. Although this kind of research is much more limited in deafness compared to blindness, several studies have shown sensory-independent, task-selective recruitment of the auditory brain. For instance, the auditory language network mostly maintains its distinctive properties in the brain independently of the sensory modality used as input. In deaf adults, researchers have repeatedly reported that the auditory regions typically recruited by spoken language processing can be recruited during sign production (e.g. 7,8) and sign comprehension (e.g. 9,10). Apart from activations related to language, studies have clearly documented task-selective recruitment in auditory cortices only for the perception of visual rhythm (11). Specifically, regardless of the sensory modality involved, perception of rhythms peaked in the same anatomic auditory regions, that is, the posterior and lateral parts of the high-level auditory cortex. Similarly, there is evidence that face processing recruits the cortical territory associated with voice processing (i.e., the temporal voice area, TVA) in the early deaf (12). Interestingly, other results showed that the large-scale topography of the auditory cortex does not differ between hearing and deaf individuals: tonotopic-like, large-scale functional connectivity patterns can emerge and be retained through life in prelingually deaf humans without auditory experience (13). In addition, studies in deaf cats revealed that the auditory cortex mostly preserves anatomic
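The ERP measure described above is, at its core, a trial average followed by a mean amplitude in the 50-90 ms window after the flash. The following sketch shows that generic computation on synthetic epochs; the sampling rate, epoch limits, and trial count are assumptions for illustration only.

    import numpy as np

    fs = 512  # assumed sampling rate (Hz)
    t = np.arange(-0.1, 0.4, 1.0 / fs)  # epoch from -100 to +400 ms around the flash
    rng = np.random.default_rng(3)

    # Synthetic single-trial epochs (trials x samples) for one temporal-scalp electrode.
    epochs = rng.normal(0.0, 5.0, (120, t.size))

    # Trial-averaged ERP, then mean amplitude in the 50-90 ms window of interest.
    erp = epochs.mean(axis=0)
    window = (t >= 0.050) & (t <= 0.090)
    mean_amplitude = erp[window].mean()
    print(f"mean ERP amplitude in 50-90 ms: {mean_amplitude:.2f} (arbitrary units)")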
Frontiers in Human Neuroscience
Introduction: Position sense, which belongs to the sensory stream called proprioception, is pivotal for proper movement execution. Its comprehensive understanding is needed to fill existing knowledge gaps in human physiology, motor control, neurorehabilitation, and prosthetics. Although numerous studies have focused on different aspects of proprioception in humans, what has not been fully investigated so far are the neural correlates of proprioceptive acuity at the joints. Methods: Here, we implemented a robot-based position sense test to elucidate the correlation between patterns of neural activity and the degree of accuracy and precision exhibited by the subjects. Eighteen healthy participants performed the test, and their electroencephalographic (EEG) activity was analyzed in its μ band (8-12 Hz), the frequency band related to voluntary movement and somatosensory stimulation. Results: We observed a significant positive correlation between the matching error, representing proprioceptiv...
Human Brain Mapping
Clear evidence has demonstrated a supramodal organization of sensory cortices, with multisensory processing occurring even at early stages of information encoding. Within this context, early recruitment of sensory areas is necessary for the development of fine domain-specific (i.e., spatial or temporal) skills regardless of the sensory modality involved, with auditory areas playing a crucial role in temporal processing and visual areas in spatial processing. Given the domain specificity and the multisensory nature of sensory areas, in this study we hypothesized that the preferential domains of representation (i.e., space and time) of visual and auditory cortices are also evident in the early processing of multisensory information. Thus, we measured the event-related potential (ERP) responses of 16 participants performing multisensory spatial and temporal bisection tasks. Audiovisual stimuli occurred at three different spatial positions and time lags, and participants had to evaluate whether the second stimulus was spatially (spatial bisection task) or temporally (temporal bisection task) farther from the first or the third audiovisual stimulus. As predicted, the second audiovisual stimulus of both spatial and temporal bisection tasks elicited an early ERP response (time window 50-90 ms) in visual and auditory regions. However, this early ERP component was larger in the occipital areas during the spatial bisection task, and in the temporal regions during the temporal bisection task. Overall, these results confirm the domain specificity of visual and auditory cortices and reveal that this aspect also selectively modulates cortical activity in response to multisensory stimuli.
Journal of Clinical Sleep Medicine
The mechanisms involved in the origin of dreams remain one of the great unknowns in science. In the 21st century, studies in the field have focused on 3 main topics: the functional networks that underlie dreaming, the neural correlates of dream contents, and signal propagation. We review neuroscientific studies of dreaming processes, focusing on their cortical correlates. The involvement of frontoparietal regions in the dream-retrieval process allows us to discuss it in light of the Global Workspace theory of consciousness. However, dreaming in distinct sleep stages maintains relevant differences, suggesting that multiple generators are implicated. Then, given the strong influence of light perception on sleep regulation and the mostly visual content of dreams, we investigate the effect of blindness on the organization of dreams. Blind individuals represent a worthwhile population for clarifying the role of perceptual systems in dream generation and for making inferences about its top-down and/or bottom-up origin. Indeed, congenitally blind people maintain the ability to produce visual dreams, suggesting that bottom-up mechanisms could be associated with innate body schemas or multisensory integration processes. Finally, we propose the new dream-engineering technique as a tool to clarify the mechanisms of multisensory integration during sleep and related mental activity, presenting possible implications for rehabilitation in sensory-impaired individuals. The Theory of Proto-consciousness suggests that the interaction of the brain states underlying waking and dreaming ensures the optimal functioning of both. Therefore, by understanding the origin of dreams and the capabilities of our brain during a dreamlike state, we could introduce dreaming as a rehabilitative tool.
2019 IEEE International Symposium on Medical Measurements and Applications (MeMeA)
We live in a multisensory world where all our sensory systems are constantly stimulated and different sensory signals need to be integrated. Many works show that the brain is able to integrate redundant signals about a particular property. However, to date, there are no solutions for investigating how auditory, visual, and tactile information are integrated when spatial, temporal, and body representations are all taken into account. The goal of this work is to present a new system developed for assessing multisensory integration that also considers the body and its movements in space. The system, called MSI Caterpillar, provides the opportunity to study how multisensory signals are integrated, taking into account the spatial and temporal features of the stimulus and considering both external space and body space. It is a set of several audio-visual and tactile elements arranged in the form of an array, to be positioned on the body (e.g., on the arm) or in the external space. The aim of this innovative technology is to design novel approaches and clinical procedures to train and study how multisensory signals in different parts of the body and space are processed and can be rehabilitated in typical and sensory-impaired (e.g., visually impaired or deaf) children and adults.
Background: The ability to process sensory information is an essential adaptive function, and hyper- or hypo-sensitive maladaptive profiles of response to environmental stimuli generate sensory processing disorders linked to cognitive, affective, and behavioural alterations. Research on the neuroradiological correlates of sensory processing profiles is still in its infancy and is mainly limited to young populations or neurodevelopmental disorders, so knowledge concerning the impact of different sensory profiles on the structural and functional characteristics of the typically developed adult brain remains largely obscure. In this framework, this study aims to examine the structural and functional MRI correlates of sensory profiles in a sample of healthy adults. Method: We investigated structural T1, Diffusion Tensor Imaging (DTI), and resting-state functional MRI (rs-fMRI) correlates of the Adolescent/Adult Sensory Profile (AASP) questionnaire subscales in 57 typical ...
It is not clear how multisensory skills develop and how visual experience impacts multisensory spatial development. Conflicting results show that visual calibration precedes multisensory integration in the audio-visual spatial bisection task (Gori et al., 2012a, 2012b), while in other tasks, such as spatial localization, visual calibration occurs after multisensory development (Rohlf et al., 2020). Results in blind individuals can say something about the role of vision in perceptual development. Scientific evidence shows that blind individuals have impairments in bisecting auditory space (Gori et al., 2014) but not in localizing auditory sources (Lessard et al., 1998). Such results suggest that sensory calibration and impairment are linked. To address this hypothesis, we studied the development of audio-visual multisensory localization in the vertical plane in sighted individuals from 5 years of age to adulthood. We hypothesize that typical children would show late audio-visual integr...