Comparable Human Spatial Memory Distortions in Physical, Desktop Virtual and Immersive Virtual Environments
Related papers
Frontiers in Virtual Reality, 2020
Virtual environments are commonly used to assess spatial cognition in humans. For the past few decades, researchers have used virtual environments to investigate how people navigate, learn, and remember their surrounding environment. In combination with tools such as electroencephalography, neuroimaging, and electrophysiology, these virtual environments have proven invaluable in their ability to help elucidate the underlying neural mechanisms of spatial learning and memory in humans. However, a critical assumption made whenever virtual experiences are used is that the spatial abilities engaged in navigating these virtual environments accurately represent the spatial abilities used in the real world. The aim of the current study is to investigate the spatial relationships between real and virtual environments to better understand how well virtual experiences parallel the same experiences in the real world. Here, we performed three independent experiments to examine whether spatial information about object location, environment layout, and navigation strategy transfers between parallel real-world and virtual-world experiences. We show that while general spatial information does transfer between real and virtual environments, the virtual experience has several limitations. Compared to the real world, the use of information in the virtual world is less flexible, especially when spatial memory is tested from a novel location, and the way in which we navigate these experiences differs, as the perceptual and proprioceptive feedback gained from real-world experience can influence navigation strategy.
2007
We present a summary of the development of a new virtual reality setup for behavioural experiments in the area of spatial cognition. Most previous virtual reality setups either cannot provide accurate body-motion cues while participants move through a virtual environment, or they hinder participants with cables as they walk through virtual environments wearing a head-mounted display (HMD). Our new setup solves these issues by providing a large, fully trackable walking space in which a participant wearing an HMD can walk freely without being tethered by cables. Two experiments on spatial memory, which tested this setup, are described. The results suggest that environmental spaces traversed during wayfinding are memorised in a view-dependent way, i.e., in the local orientation in which they were experienced, and not with respect to a global reference direction.
Studying Spatial Memory in Augmented and Virtual Reality
Spatial memory is a crucial part of our lives. Spatial memory research and rehabilitation in humans is typically performed either in real environments, which is challenging practically, or in Virtual Reality (VR), which has limited realism. Here we explored the use of Augmented Reality (AR) for studying spatial cognition. AR combines the best features of real and VR paradigms by allowing subjects to learn spatial information in a flexible fashion while walking through a real-world environment. To compare these methods, we had subjects perform the same spatial memory task in VR and AR settings. Although subjects showed good performance in both, subjects reported that the AR task version was significantly easier, more immersive, and more fun than VR. Importantly, memory performance was significantly better in AR compared to VR. Our findings validate that integrating AR can lead to improved techniques for spatial memory research and suggest their potential for rehabilitation.
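The within-subject comparison described above is the kind of result a paired test captures. As a minimal illustrative sketch only (the scores, sample size, and test below are assumptions, not the study's actual data or analysis), one might compare the same subjects' recall scores across the AR and VR conditions like this:

```python
# Hypothetical sketch: paired comparison of memory performance in AR vs. VR.
# All values are made up for illustration; the real study's data and analysis may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subjects = 20
vr_score = rng.normal(0.70, 0.10, n_subjects)             # fraction of items recalled in VR
ar_score = vr_score + rng.normal(0.08, 0.05, n_subjects)  # same subjects in AR

# Paired t-test: is recall reliably better in AR than in VR for the same subjects?
t, p = stats.ttest_rel(ar_score, vr_score)
print(f"t({n_subjects - 1}) = {t:.2f}, p = {p:.4f}")
```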
Psychonomic Bulletin & Review, 2008
Previous research has uncovered three primary cues that influence spatial memory organization: egocentric experience, intrinsic structure (object defined), and extrinsic structure (environment defined). In the present experiments, we assessed the relative importance of these cues when all three were available during learning. Participants learned layouts from two perspectives in immersive virtual reality. In Experiment 1, axes defined by intrinsic and extrinsic structures were in conflict, and learning occurred from two perspectives, each aligned with either the intrinsic or the extrinsic structure. Spatial memories were organized around a reference direction selected from the first perspective, regardless of its alignment with intrinsic or extrinsic structures. In Experiment 2, axes defined by intrinsic and extrinsic structures were congruent, and spatial memories were organized around reference axes defined by those congruent structures, rather than by the initially experienced view. The findings are discussed in the context of spatial memory theory as it relates to real and virtual environments.
The perception of spatial layout in real and virtual worlds
1997
As human-machine interfaces grow more immersive and graphically oriented, virtual environment systems become more prominent as the medium for human-machine communication. Often, virtual environments (VEs) are built to provide exact metrical representations of existing or proposed physical spaces. However, it is not known how individuals develop representational models of the spaces in which they are immersed, or how those models may be distorted with respect to both the virtual and real-world equivalents. To evaluate the process of model development, the present experiment examined participants' ability to reproduce a complex spatial layout of objects having experienced them previously under different viewing conditions. The layout consisted of nine common objects arranged on a flat plane. These objects could be viewed in a free binocular virtual condition, a free binocular real-world condition, and in a static monocular view of the real world. The first two allowed active exploration of the environment, while the latter condition allowed the participant only a passive opportunity to observe from a single viewpoint. Viewing condition was a between-subjects variable, with 10 participants randomly assigned to each condition. Performance was assessed using mapping accuracy and triadic comparisons of relative inter-object distances. Mapping results showed a significant effect of viewing condition where, interestingly, the static monocular condition was superior to both the active virtual and real binocular conditions. Results for the triadic comparisons showed a significant gender-by-viewing-condition interaction in which males were more accurate than females. These results suggest that the situation model resulting from interaction with a virtual environment was indistinguishable from that resulting from interaction with real objects, at least within the constraints of the present procedure.
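To make the two performance measures named above concrete, here is a minimal sketch, using invented coordinates for the nine objects, of how a reproduced layout could be scored for mapping error and for agreement with the true relative inter-object distances across triads; the variable names and scoring choices are assumptions, not the original study's procedure:

```python
# Hypothetical scoring sketch for a nine-object layout reproduction task.
import itertools
import numpy as np

rng = np.random.default_rng(0)
true_xy = rng.uniform(0, 1, size=(9, 2))                    # actual layout on the flat plane
reproduced_xy = true_xy + rng.normal(0, 0.05, (9, 2))        # participant's reconstruction

# Mapping accuracy: mean placement error between reproduced and true object positions.
mapping_error = np.mean(np.linalg.norm(reproduced_xy - true_xy, axis=1))

# Triadic comparisons: for each triad of objects, does the reproduced layout preserve
# the true rank ordering of the three pairwise inter-object distances?
def dist(xy, i, j):
    return np.linalg.norm(xy[i] - xy[j])

triads = list(itertools.combinations(range(9), 3))
agree = 0
for a, b, c in triads:
    true_order = np.argsort([dist(true_xy, a, b), dist(true_xy, a, c), dist(true_xy, b, c)])
    repro_order = np.argsort([dist(reproduced_xy, a, b), dist(reproduced_xy, a, c), dist(reproduced_xy, b, c)])
    agree += np.array_equal(true_order, repro_order)

triadic_accuracy = agree / len(triads)
print(f"mean placement error: {mapping_error:.3f}, triadic agreement: {triadic_accuracy:.2%}")
```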
Cognitive transfer of spatial awareness states from immersive virtual environments to reality
ACM Transactions on Applied Perception, 2010
An individual's prior experience will influence how new visual information in a scene is perceived and remembered. Accuracy of memory performance per se is an imperfect reflection of the cognitive activity (awareness states) that underlies performance in memory tasks. The aim of this research is to investigate the effect of varied visual fidelity of training environments on the transfer of training to the real world after exposure to immersive simulations representing a real-world scene. A between-groups experiment was carried out to explore the effect of rendering quality on measurements of location-based recognition memory for objects and associated states of awareness. The immersive simulation consisted of one room that was rendered either flat-shaded or with radiosity rendering. The simulation was displayed on a stereo, head-tracked head-mounted display. After exposure to the synthetic simulation, participants completed a memory recognition task conducted in a real-world scene by physically arranging objects in their physical form in a real-world room. Participants also reported one of four states of awareness following object recognition. They were given several options of awareness states that reflected the level of visual mental imagery involved during retrieval, the familiarity of the recollection, and related guesses. The scene incorporated objects that 'fitted' into the specific context of the real-world scene, referred to as consistent objects, and objects that were not related to the specific context of the real-world scene, referred to as inconsistent objects. A follow-up study was conducted a week after the initial test. Interestingly, results revealed a higher proportion of correct object recognitions associated with mental imagery when participants were exposed to low-fidelity flat-shaded training scenes rather than the radiosity-rendered ones. Memory psychology indicates that awareness states based on visual imagery require stronger attentional processing in the first instance than those based on familiarity. A tentative claim would therefore be that immersive environments that are distinctive because of their variation from 'real', such as flat-shaded environments, recruit stronger attentional resources. This additional attentional processing may bring about a change in participants' subjective experiences of 'remembering' when they later transfer the training from that environment into a real-world situation.
Three dimensional spatial memory and learning in real and virtual environments
Spatial cognition and computation, 2002
Human orientation and spatial cognition depend partly on our ability to remember sets of visual landmarks and imagine their relationship to us from a different viewpoint. We normally make large body rotations only about a single axis, which is aligned with gravity. However, astronauts who try to recognize environments rotated in three dimensions report that their terrestrial ability to imagine the relative orientation of remembered landmarks does not easily generalize. The ability of human subjects to learn to mentally rotate a simple array of six objects around them was studied in 1-G laboratory experiments. Subjects were tested in a cubic chamber (n = 73) and an equivalent virtual environment (n = 24), analogous to the interior of a space station node module. A picture of an object was presented at the center of each wall. Subjects had to memorize the spatial relationships among the six objects and learn to predict the direction to a specific object if their body were in a specified 3D...
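As an illustration of the judgement this task requires, the sketch below takes an assumed six-object layout (one object per wall of a cubic chamber) and a specified 3D body orientation, and computes the egocentric direction to a named object; the object names and the particular rotation are hypothetical, not taken from the study:

```python
# Hypothetical sketch: direction to a remembered object after an imagined 3D body rotation.
import numpy as np

# Remembered layout: direction from the chamber centre to each object, in room axes.
objects = {
    "clock":  np.array([ 1, 0, 0]),   # +x wall
    "map":    np.array([-1, 0, 0]),   # -x wall
    "lamp":   np.array([ 0, 1, 0]),   # +y wall
    "phone":  np.array([ 0,-1, 0]),   # -y wall
    "hatch":  np.array([ 0, 0, 1]),   # ceiling
    "grille": np.array([ 0, 0,-1]),   # floor
}

def rot_x(deg):
    r = np.radians(deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(r), -np.sin(r)],
                     [0, np.sin(r),  np.cos(r)]])

def rot_z(deg):
    r = np.radians(deg)
    return np.array([[np.cos(r), -np.sin(r), 0],
                     [np.sin(r),  np.cos(r), 0],
                     [0, 0, 1]])

# Imagined body orientation: a 90-degree yaw about z followed by a 90-degree roll about x.
body_from_room = rot_x(90) @ rot_z(90)

# Direction to the target object expressed in the subject's (rotated) body frame.
target = "lamp"
egocentric = body_from_room @ objects[target]
print(f"direction to {target} in body coordinates: {egocentric.round(3)}")
```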
Virtual Reality as a Valuable Research Tool for Investigating Different Aspects of Spatial Cognition
Lecture Notes in Computer Science, 2008
The interdisciplinary research field of spatial cognition has benefited greatly from the use of advanced Virtual Reality (VR) technologies. Such tools have provided the ability to explicitly control specific experimental conditions, manipulate variables not possible in the real world, and provide a convincing, multimodal experience. Here we will first describe several of the VR facilities at the Max Planck Institute (MPI) for Biological Cybernetics that have been developed to optimize scientific investigations related to multi-modal self-motion perception and spatial cognition. Subsequently, we will present some recent empirical work contributing to these research areas.