Using eye movements to explore mental representations of space

Visual mental imagery: what the head's eye tells the mind's eye

Brain research, 2011

The demonstration that attentional/eye-gaze systems are involved in visual mental imagery might help to explain why some patients with visual neglect, who suffer from severe attentional deficits, also show neglect for mental images. When normal participants generate mental images of previously explored visual scenes, their oculomotor behavior resembles that used during visual exploration. However, this could be a case of encoding specificity, whereby the probability of retrieving an event increases if some information encoded with the event (in this case its spatial location) is present at retrieval. In the present study, normal participants were invited to conjure up a mental image of the map of France and to say whether auditorily presented towns or regions were situated left or right of Paris.

The Role of Vision in Spatial Representation

Cortex, 2004

A complex link exists between vision and unilateral spatial neglect (USN). Firstly, USN is not a perceptual deficit; secondly, USN is not necessarily accompanied by a visual deficit; and finally, USN can be observed in non-visual modalities as well as in mental spatial imagery. This apparent supramodality of USN stands in sharp contrast to the fact that neglect signs are often more severe and more durable in the visual than in other sensory modalities.

Interaction of cognitive and sensorimotor maps of visual space

Attention Perception & Psychophysics, 1997

Studies of saccadic suppression and induced motion have suggested separate representations of visual space for perception and visually guided behavior. Because these methods required stimulus motion, subjects might have confounded motion and position. We separated cognitive and sensorimotor maps without motion of target, background, or eye, with an “induced Roelofs effect”: a target inside an off-center frame appears biased opposite the direction of the frame. A frame displayed to the left of a subject’s center line, for example, will make a target inside the frame appear farther to the right than its actual position. The effect always influences perception, but in half of our subjects it did not influence pointing. Cognitive and sensorimotor maps interacted when the motor response was delayed; all subjects now showed a Roelofs effect for pointing, suggesting that the motor system was being fed from the biased cognitive map. A second experiment showed similar results when subjects made an open-ended cognitive response instead of a five-alternative forced choice. Experiment 3 showed that the results were not due to shifts in subjects’ perception of the felt straight-ahead position. In Experiment 4, subjects pointed to the target and judged its location on the same trial. Both measures showed a Roelofs effect, indicating that each trial was treated as a single event and that the cognitive representation was accessed to localize this event in both response modes.

Visual Space-Perception and Visually Directed Action

1992

The results of two types of experiments are reported. In one type, subjects matched depth intervals on the ground plane that appeared equal to frontal intervals at the same distance. The depth intervals had to be made considerably larger than the frontal intervals to appear equal in length, and this physical inequality of equal-appearing intervals increased with the egocentric distance of the intervals (4-12 m). In the other type of experiment, subjects viewed targets lying on the ground plane and then, with eyes closed, attempted either to walk directly to their locations or to point continuously toward them while walking along paths that passed off to the side. Performance was quite accurate in both motoric tasks, indicating that the distortion in the mapping from physical to visual space evident in the visual matching task does not manifest itself in the visually open-loop motoric tasks.

Illusions in the spatial sense of the eye: Geometrical–optical illusions and the neural representation of space

Vision Research, 2008


Investigation of visual space using an exocentric pointing task

Perception & Psychophysics, 2000

Classically, it has been assumed that visual space can be represented by a metric. This means that the distance between points and the angle between lines can be uniquely defined. However, this assumption has never been tested. Also, measurements outdoors, where monocular cues are abundant, conflict with this model. This paper reports on two experiments in which the structure of visual space was investigated, using an exocentric pointing task. In the first experiment, we measured the influence of the separation between pointer and target and of the orientation of the stimuli with respect to the observer. This was done both monocularly and binocularly. It was found that the deviation of the pointer settings depended linearly on the orientation, indicating that visual space is anisotropic. The deviations for configurations that were symmetrical in the median plane were approximately the same, indicating that left/right symmetry was maintained. The results for monocular and binocular conditions were very different, which indicates that stereopsis was an important cue. In both conditions, there were large deviations from the veridical. In the second experiment, the relative distance of the pointer and the target with respect to the observer was varied in both the monocular and the binocular conditions. The relative distance turned out to be the main parameter for the ranges used (1-5 m). Any distance function must have an expanding and a compressing part in order to describe the data. In the binocular case, the results were much more consistent than in the monocular case and had a smaller standard deviation. Nevertheless, the systematic mispointings remained large. It can therefore be concluded that stereopsis improves space perception but does not improve veridicality.

Spatial Cognition Through A Nonvisual Experience

ICONARCH III, 2017

Perception, the concept that examines the interaction between the individual and the physical setting, is evaluated by Hall (1966) as the main competence that living organisms possess for survival. In this sense, the perceptual product can be defined as the result of perceptual processes through which stimuli from the environment are converted into cognitive data by the receptor cells of the sense organs, mainly the eye. According to Pallasmaa (2005), the eye became the centre of the perceptual world through the invention of perspectival representation, which turned into a symbolic form both describing and conditioning perception. Concepts such as Merleau-Ponty's (2005) bodily experience, a classification of perceptual modalities, have been partly replaced by more holistic approaches that treat experience as the most essential factor of the physical setting, defined as the collection of cognitive data formed by individuals through various information-processing circumstances (Downs and Stea, 2011). Cognitive mapping is the process of mental representation by which people acquire, code, store, recall and decode information about the relative locations and attributes of the physical setting (Downs and Stea, 2011). This imaged information includes impressions about the structure or appearance of a place, its relative location, its use and its values. On the other hand, a specific place's structure, value and relative relations can be analysed in a more analytical way. Space syntax is such a method: it describes and analyses the relationships between spaces and provides a set of techniques for the representation, quantification and interpretation of spatial relations in buildings and settlements.
Contributing to this debate, this paper explores the cognitive data generated by sighted people in a non-visual bodily experience, as they are guided through "Dialogue in the Dark", a thematic environment consisting of completely dark rooms equipped with scent, sound, wind and tactile simulations of a specific urban setting and the syntactic relations of that space. A two-step methodology is applied: the first step comprises cognitive data from the cognitive maps drawn by participants just after their experience, while the second comprises existing spatial data revealed by syntactic analyses. Finally, the correlation between the cognitive frequencies of the experienced nodes in each cognitive map and the syntactic values of the setting is statistically analysed. Statistical outcomes show that without vision no correlation is found between the syntactic values and the frequency of spaces, but the auditory and tactile characteristics of the spaces are significantly correlated with the frequencies of the spaces. In conclusion, the results show that spatial cognition without vision depends mainly on the bodily experience of the self, stimulated mostly by the auditory and tactile senses, and that the influence of the syntactic characteristics of the space, which derive from visual parameters, weakens in spatial cognition without vision.
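The correlation step described above can be sketched in a few lines of Python. This is only an illustrative reconstruction: the node names, cognitive-map frequencies, syntactic values and salience ratings below are all invented, and the paper's actual analysis (including its choice of correlation statistic) may differ.

```python
# Hypothetical sketch of the abstract's analysis: correlating how often each
# spatial node appears in participants' cognitive maps with (a) that node's
# space-syntax value and (b) its non-visual (auditory/tactile) salience.
# All data below are invented for illustration.

import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example data (hypothetical node names and values):
nodes            = ["entrance", "bridge", "market", "harbour", "cafe"]
map_frequency    = [18, 9, 14, 11, 16]          # participants who drew the node
integration      = [0.9, 0.7, 0.8, 0.6, 0.5]    # syntactic (visually derived) value
salience_nonvis  = [0.8, 0.3, 0.6, 0.4, 0.7]    # sound/scent/touch salience rating

r_syntax = pearson_r(map_frequency, integration)
r_salience = pearson_r(map_frequency, salience_nonvis)
print(f"frequency vs. syntactic value:     r = {r_syntax:.2f}")
print(f"frequency vs. non-visual salience: r = {r_salience:.2f}")
```

With these made-up numbers the non-visual salience correlates far more strongly with map frequency than the syntactic value does, mirroring the pattern the abstract reports for the dark environment.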