The Magnitude Effect on Tactile Spatial Representation: The Spatial–Tactile Association for Response Code (STARC) Effect
Related papers
Different mechanisms of magnitude and spatial representation for tactile and auditory modalities
Experimental Brain Research
The human brain creates a representation of the external world based on magnitude judgments, by estimating distance, numerosity, or size. Magnitude and spatial representations are hypothesized to rely on common mechanisms shared across different sensory modalities. We explored the relationship between magnitude and spatial representation using two different sensory systems. We hypothesized that space and magnitude are combined differently depending on the sensory modality. Furthermore, we aimed to understand the role of the spatial reference frame in magnitude representation. We used stimulus–response compatibility (SRC) to investigate these processes, on the assumption that performance improves when stimulus and response share common features. We designed an auditory and tactile SRC task with conflicting spatial and magnitude mapping. Our results showed that sensory modality modulates the relationship between space and magnitude. A larger effect of magnitude over spatial congrue...
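To make the conflicting-mapping logic concrete, each trial of such a task can be scored separately for spatial congruency (response on the stimulated side) and magnitude congruency (response side matching an assumed small-left/large-right mapping), and the two effects compared. The following is a minimal sketch of that scoring, not the authors' analysis code; the trial data, column layout, and mapping rule are all invented for illustration.

```python
from statistics import mean

# One row per trial: (stimulus side, stimulus magnitude, response side, RT in ms).
# Values are invented for illustration.
trials = [
    ("left",  "small", "left",  410.0),
    ("left",  "small", "right", 472.0),
    ("left",  "large", "left",  455.0),
    ("left",  "large", "right", 440.0),
    ("right", "small", "left",  448.0),
    ("right", "small", "right", 462.0),
    ("right", "large", "left",  495.0),
    ("right", "large", "right", 418.0),
]

def congruency_effect(is_congruent):
    """Mean RT on incongruent minus congruent trials (positive = cost)."""
    cong = [rt for side, mag, resp, rt in trials if is_congruent(side, mag, resp)]
    incong = [rt for side, mag, resp, rt in trials if not is_congruent(side, mag, resp)]
    return mean(incong) - mean(cong)

# Spatial congruency: respond on the same side as the stimulus.
spatial = congruency_effect(lambda side, mag, resp: side == resp)
# Magnitude congruency under an assumed small->left / large->right mapping.
magnitude = congruency_effect(lambda side, mag, resp: (mag == "small") == (resp == "left"))

print(f"spatial SRC effect:   {spatial:+.1f} ms")
print(f"magnitude SRC effect: {magnitude:+.1f} ms")
```

Scoring the two dimensions independently like this is what lets a conflicting-mapping design reveal which dimension dominates in each modality.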
Response requirements modulate tactile spatial congruency effects
Experimental Brain Research, 2008
Several recent studies have provided support for the view that tactile stimuli/events are remapped into an abstract spatial frame of reference beyond the initial somatotopic representation present in the primary somatosensory cortex. Here, we demonstrate for the first time that the extent to which this remapping of tactile stimuli takes place is dependent upon the particular demands imposed by the task that participants have to perform. Participants in the present study responded to either the elevation (up vs. down) or to the anatomical location (finger vs. thumb) of vibrotactile targets presented to one hand, while trying to ignore distractors presented simultaneously to the other hand. The magnitude and direction of the target-distractor congruency effect was measured as participants adopted one of two different postures with each hand (palm-up or palm-down). When the participants used footpedal responses (toe vs. heel; Experiment 1), congruency effects were determined by the relative elevation of the stimuli in external coordinates (same vs. different elevation), regardless of whether the relevant response feature was defined externally or anatomically. Even when participants responded verbally (Experiment 2), the influence of the relative elevation of the stimuli in external space, albeit attenuated, was still observed. However, when the task involved responding with the stimulated finger (four-alternative forced choice; Experiment 3), congruency effects were virtually eliminated. These findings support the view that tactile events can be remapped according to an abstract frame of reference resulting from multisensory integration, but that the frame of reference that is used while performing a particular task may depend to a large extent on the nature of the task demands.
Changing Reference Frames during the Encoding of Tactile Events
Current Biology, 2008
The mindless act of swatting a mosquito on the hand poses a remarkable challenge for the brain. Given that the primary somatosensory cortex maps skin location independently of arm posture, the brain must realign tactile coordinates in order to locate the origin of the stimuli in extrapersonal space. Previous studies have highlighted the behavioral relevance of such an external mapping of touch, which results from combining somatosensory input with proprioceptive and visual cues about body posture. However, despite the widely held assumption about the existence of this remapping process from somatotopic to external space, and various findings indirectly suggesting its consequences, a demonstration of its changing time course and nature was lacking. We examined the temporal course of this multisensory interaction and its implications for tactile awareness in humans using a crossmodal cueing paradigm. What we show is that before tactile events are referred to external locations, a fleeting, unconscious image of the tactile sensation abiding by a somatotopic frame of reference rules performance. We propose that this early somatotopic "glimpse" arises from the initial feed-forward sweep of neural activity to the primary somatosensory cortex, whereas the later externally based, conscious experience reflects the activity of a somatosensory network involving recurrent connections from association areas.
Multi-sensory feedback improves spatially compatible sensorimotor responses
HAL (Le Centre pour la Communication Scientifique Directe), 2022
To interact with machines, from computers to cars, we need to monitor multiple sensory stimuli and respond to them with specific motor actions. It has been shown that our ability to react to a sensory stimulus depends both on the stimulus modality and on the spatial compatibility of the stimulus and the required response. However, these compatibility effects have been examined for sensory modalities individually, and rarely for scenarios requiring individuals to choose from multiple actions. Here, we compared participants' response times when they had to choose one of several spatially distinct, but compatible, responses to visual, tactile, or simultaneous visual and tactile stimuli. We observed that the presence of both tactile and visual stimuli consistently improved response times relative to when either stimulus was presented alone. While we did not observe a difference in response times to visual and tactile stimuli, spatial stimulus localization was faster for visual stimuli than for tactile stimuli.

Human interactions with the environment are modulated by the various stimuli that we receive in multiple sensory modalities. This is particularly so in the case of human-machine interfaces, which commonly use visual, tactile, and auditory stimuli to transmit information to the user 1-4. An example is the parking assistance in a car, which uses visual and audio stimuli to indicate the proximity of an obstacle to a particular side of the car. Another example found in the literature is that of a tactile flight envelope "display", where an airplane's flight parameters (which are usually available visually in the cockpit) are fed back to the pilot using tactile feedback 5. Previous studies have shown that the speed and accuracy with which we react to a stimulus depend not only on the physical properties of the stimulus and the nature of the information that is extracted from it 9, but also on the action that is required as the response, with certain 'spatially compatible' stimulus-response couples enabling better performance than non-compatible ones. In this study, we are interested specifically in compatible stimulus-response couples, and in the performance differences induced by the stimulus modality (specifically visual and tactile) when one has to choose from multiple compatible stimulus-response couples. A 1953 study by Fitts and Seeger 15, in which participants had to react to a visual stimulus by moving a stylus towards it, showed that participants reacted faster and more accurately to visual stimuli presented in the same visual hemispace as the responding hand. This effect was termed "stimulus-response compatibility" (SRC). This was followed by J. R. Simon, who showed in 1963 10 that SRC effects are induced even if the position of the stimulus is irrelevant to the task. Since then, spatial SRC effects have also been demonstrated with auditory as well as tactile stimuli. In the case of visuo-motor tasks, SRC is determined by the position of the stimulus and response in relation to a point of reference in visual space. For example, in the Simon task 10, it was initially shown that one can respond faster using a hand in the same (compatible) hemispace as the visual stimulus than with a hand in the opposite (incompatible) hemispace. On the other hand, it is known that for tactile stimuli, compatibility is determined by the relative distribution of stimuli and responses in the somatosensory hemispace (in relation to the body midline).
However, these previous studies have used monomodal stimuli, in which only one sensory modality contains task-relevant information, while a secondary sensory modality, if present, is used only as a distractor. Furthermore, these studies traditionally compare performance between compatible and incompatible stimulus-response couples. Here we are interested in understanding how multimodal stimuli affect the reaction time of participants in a scenario where the participant has to react to one of multiple compatible stimulus-response couples.
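The comparison this study describes reduces to pooling reaction times by stimulus condition and asking how much the bimodal condition gains over the faster unimodal one. Below is a minimal sketch of that computation; the condition labels and RT values are invented, and this is not the authors' analysis code.

```python
from collections import defaultdict
from statistics import mean

# (stimulus condition, RT in ms); labels and values are illustrative only.
rts = [
    ("visual", 345.0), ("visual", 362.0), ("visual", 338.0),
    ("tactile", 351.0), ("tactile", 340.0), ("tactile", 366.0),
    ("visuotactile", 312.0), ("visuotactile", 305.0), ("visuotactile", 298.0),
]

by_cond = defaultdict(list)
for cond, rt in rts:
    by_cond[cond].append(rt)

means = {cond: mean(vals) for cond, vals in by_cond.items()}
# Redundancy gain: how much faster the bimodal condition is than the
# faster of the two unimodal conditions.
gain = min(means["visual"], means["tactile"]) - means["visuotactile"]

for cond, m in means.items():
    print(f"{cond:>12s}: {m:6.1f} ms")
print(f"redundancy gain: {gain:+.1f} ms")
```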
Stimulus-response compatibility in representational space
Neuropsychologia, 1998
Spatial stimulus-response (S-R) compatibility designates the observation that speeded reactions to unilateral stimuli are faster for the hand ipsilateral than for the hand contralateral to the sensory hemifield containing the stimulus. In two experiments involving presentation of the numbers 1 to 11 in the center of the visual field we show (1) a left-hand reaction time (RT) advantage for numerals <6 and a right-hand advantage for those >6 for subjects who conceive of the numbers as distances on a ruler, and (2) a reversal of this RT advantage for subjects who conceive of them as hours on a clock face. While the results in the first task (RULER) replicate a robust finding from the neuropsychology of number processing (the "SNARC effect"), those in the second task (CLOCK) show that extension of the number scale from left to right in representational space cannot be the decisive factor for the observed interaction between hand and number size. Taken together, the findings in the two tasks are best accounted for in terms of an interaction between lateralized mental representations and lateralized motor outputs (i.e., an analog of traditional spatial S-R compatibility effects in representational space). We discuss potential clinical applications of the two tasks in patients with neglect of representational space. © 1998 Elsevier Science Ltd. All rights reserved. Key words: spatial S-R compatibility; choice reaction time task; number processing; SNARC effect; representational neglect.
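The SNARC effect mentioned here is conventionally quantified by computing, for each number, the right-hand minus left-hand RT difference and regressing it on the number's magnitude; a negative slope indicates faster right-hand responses to larger numbers. The sketch below illustrates that standard analysis, not the paper's own procedure, and its dRT values are invented.

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    return sxy / sxx

numbers = [1, 2, 3, 4, 5, 7, 8, 9, 10, 11]  # midpoint 6 left out in this example
# Per-number dRT = mean right-hand RT minus mean left-hand RT (ms);
# positive values mean a left-hand advantage. Data are invented.
drt = [28.0, 21.0, 15.0, 9.0, 4.0, -6.0, -12.0, -18.0, -24.0, -30.0]

print(f"SNARC slope: {ols_slope(numbers, drt):.2f} ms per unit")  # negative => SNARC
```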
Multisensory Research
Exploring the world through touch requires the integration of internal (e.g., anatomical) and external (e.g., spatial) reference frames — you only know what you touch when you know where your hands are in space. The deficit observed in tactile temporal-order judgements when the hands are crossed over the midline provides one tool to explore this integration. We used foot pedals and required participants to focus on either the hand that was stimulated first (an anatomical bias condition) or the location of the hand that was stimulated first (a spatiotopic bias condition). Spatiotopic-based responses produce a larger crossed-hands deficit, presumably by focusing observers on the external reference frame. In contrast, anatomical-based responses focus the observer on the internal reference frame and produce a smaller deficit. This manipulation thus provides evidence that observers can change the relative weight given to each reference frame. We quantify this effect using a probabilistic...
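The abstract is truncated, but reference-frame weighting accounts of the crossed-hands deficit are often formalized as a weighted mixture of an anatomical and an external location code. The sketch below illustrates only that general idea and makes no claim about the authors' actual model; the weight values, code reliabilities, and decision rule are all assumptions.

```python
def p_correct(w_external, hands_crossed):
    """P(report the correct first-stimulated hand) under a two-code mixture."""
    p_anatomical = 0.95                           # assumed anatomical-code reliability
    p_external = 0.40 if hands_crossed else 0.95  # codes conflict when hands are crossed
    return (1 - w_external) * p_anatomical + w_external * p_external

# A larger external weight (spatiotopic task set) predicts a larger
# crossed-hands deficit, consistent with the manipulation described above.
for w in (0.2, 0.5, 0.8):
    deficit = p_correct(w, hands_crossed=False) - p_correct(w, hands_crossed=True)
    print(f"w_external={w:.1f}: crossed-hands deficit = {deficit:.2f}")
```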
Journal of Physiology-Paris, 2004
In order to determine precisely the location of a tactile stimulus presented to the hand it is necessary to know not only which part of the body has been stimulated, but also where that part of the body lies in space. This involves the multisensory integration of visual, tactile, proprioceptive, and even auditory cues regarding limb position. In recent years, researchers have become increasingly interested in the question of how these various sensory cues are weighted and integrated in order to enable people to localize tactile stimuli, as well as to give rise to the 'felt' position of our limbs, and ultimately the multisensory representation of 3-D peripersonal space. We highlight recent research on this topic using the crossmodal congruency task, in which participants make speeded elevation discrimination responses to vibrotactile targets presented to the thumb or index finger, while simultaneously trying to ignore irrelevant visual distractors presented from either the same (i.e., congruent) or a different (i.e., incongruent) elevation. Crossmodal congruency effects (calculated as performance on incongruent − congruent trials) are greatest when visual and vibrotactile stimuli are presented from the same azimuthal location, thus providing an index of common position across different sensory modalities. The crossmodal congruency task has been used to investigate a number of questions related to the representation of space in both normal participants and brain-damaged patients. In this review, we detail the major findings from this research, and highlight areas of convergence with other cognitive neuroscience disciplines.
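The congruency score defined in this abstract (incongruent minus congruent performance) is simple to compute; in this literature it is often calculated on inverse efficiency (mean correct RT divided by proportion correct) to combine speed and accuracy. The sketch below shows that calculation under assumed field names and invented data.

```python
from statistics import mean

# One row per trial: (condition, RT in ms, response correct?). Invented data.
trials = [
    ("congruent", 455.0, True), ("congruent", 462.0, True),
    ("congruent", 470.0, False), ("congruent", 449.0, True),
    ("incongruent", 521.0, True), ("incongruent", 534.0, False),
    ("incongruent", 512.0, True), ("incongruent", 540.0, True),
]

def inverse_efficiency(cond):
    """Mean RT on correct trials divided by accuracy, for one condition."""
    correct_rts = [rt for c, rt, ok in trials if c == cond and ok]
    accuracy = mean(1.0 if ok else 0.0 for c, rt, ok in trials if c == cond)
    return mean(correct_rts) / accuracy

cce = inverse_efficiency("incongruent") - inverse_efficiency("congruent")
print(f"CCE (inverse efficiency): {cce:+.1f} ms")  # larger = stronger interference
```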
Tactile form and location processing in the human brain
Proceedings of the National Academy of Sciences, 2005
To elucidate the neural basis of the recognition of tactile form and location, we used functional MRI while subjects discriminated gratings delivered to the fingertip of either the right or left hand. Subjects were required to selectively attend to either grating orientation or grating location under identical stimulus conditions. Independent of the hand that was stimulated, grating orientation discrimination selectively activated the left intraparietal sulcus, whereas grating location discrimination selectively activated the right temporoparietal junction. Hence, hemispheric dominance appears to be an organizing principle for cortical processing of tactile form and location.
Spatial constraints on visual-tactile cross-modal distractor congruency effects
Cognitive Affective & Behavioral Neuroscience, 2004
Across three experiments, participants made speeded elevation discrimination responses to vibrotactile targets presented to the thumb (held in a lower position) or the index finger (upper position) of either hand, while simultaneously trying to ignore visual distractors presented independently from either the same or a different elevation. Performance on the vibrotactile elevation discrimination task was slower and less accurate when the visual distractor was incongruent with the elevation of the vibrotactile target (e.g., a lower light during the presentation of an upper vibrotactile target to the index finger) than when they were congruent, showing that people cannot completely ignore vision when selectively attending to vibrotactile information. We investigated the attentional, temporal, and spatial modulation of these cross-modal congruency effects by manipulating the direction of endogenous tactile spatial attention, the stimulus onset asynchrony between target and distractor, and the spatial separation between the vibrotactile target, any visual distractors, and the participant’s two hands within and across hemifields. Our results provide new insights into the spatiotemporal modulation of crossmodal congruency effects and highlight the utility of this paradigm for investigating the contributions of visual, tactile, and proprioceptive inputs to the multisensory representation of peripersonal space.
Spatial organisation in passive tactile perception: Is there a tactile field?
Acta Psychologica, 2008
The perceptual field is a cardinal concept of sensory psychology. 'Field' refers to a representation in which perceptual contents have spatial properties and relations which derive from the spatial properties and relations of corresponding stimuli. It is a matter of debate whether a perceptual field exists in touch analogous to the visual field. To study this issue, we investigated whether tactile stimuli on the palm can be perceived as complex stimulus patterns, according to basic spatial principles.