Bayesian models of binocular 3-D motion perception
Related papers
Testing probabilistic models of binocular 3D motion perception
2014
Geometric constraints for the perception of three-dimensional (3D) binocular motion are discussed in a probabilistic framework. Two alternative Bayesian models of binocular integration are put forward to explain perceptual bias under uncertainty. The models exploit biologically plausible constraints of local motion and disparity processing in a binocular viewing geometry. Results from computer simulations and model selection support the idea that disparity processing rather than motion processing introduces perceptual bias in 3D motion. This suggests that the binocular motion system combines motion and disparity constraints relatively late when establishing a 3D motion percept.
A Bayesian approach to the aperture problem of 3D motion perception
2012 International Conference on 3D Imaging (IC3D), 2012
We suggest a geometric-statistical approach that can be applied to the 3D aperture problem of motion perception. In simulations and psychophysical experiments, we study perceived 3D motion direction in a binocular viewing geometry by systematically varying the 3D orientation of a line stimulus moving behind a circular aperture. Although motion direction is inherently ambiguous, perceived directions show systematic trends, and a Bayesian model with a prior for small depth followed by slow motion in 3D gives reasonable fits to individual data. We conclude that the visual system tries to minimize velocity in 3D, but that earlier disparity processing strongly influences perceived 3D motion direction. We discuss implications for the integration of disparity and motion cues in the human visual system.
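The core of the aperture problem can be illustrated with a minimal 2D sketch (our own illustration under simplifying assumptions, not the authors' 3D model): through an aperture, only the velocity component normal to a moving line is measurable, so all velocities on a "constraint line" are consistent with the stimulus, and a slow-motion prior selects the slowest of them.

```python
import numpy as np

def perceived_direction(line_dir, true_vel):
    """For a line seen through an aperture, only the velocity
    component normal to the line is measurable. Among all
    velocities consistent with that measurement (the constraint
    line), return the slowest one -- the minimum-norm solution
    that a slow-motion prior would favour."""
    line_dir = np.asarray(line_dir, float)
    line_dir /= np.linalg.norm(line_dir)
    normal = np.array([-line_dir[1], line_dir[0]])
    # Measured normal speed; any velocity v with v @ normal equal
    # to this value is consistent with the aperture view.
    s = np.asarray(true_vel, float) @ normal
    return s * normal  # shortest vector on the constraint line

# A 45-degree line translating horizontally is perceived as moving
# along the line's normal, and more slowly than its true velocity.
v_perceived = perceived_direction([1, 1], [1, 0])
```

The 3D version studied in the paper adds a depth dimension and disparity constraints, but the same logic applies: the prior resolves the ambiguity left by the local measurement.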
Systematic misperceptions of 3-D motion explained by Bayesian inference
People make surprising but reliable perceptual errors. Here, we provide a unified explanation for systematic errors in the perception of three-dimensional (3-D) motion. To do so, we characterized the binocular retinal motion signals produced by objects moving through arbitrary locations in 3-D. Next, we developed a Bayesian model, treating 3-D motion perception as optimal inference given sensory noise in the measurement of retinal motion. The model predicts a set of systematic perceptual errors, which depend on stimulus distance, contrast, and eccentricity. We then used a virtual-reality headset as well as a standard 3-D desktop stereoscopic display to test these predictions in a series of perceptual experiments. As predicted, we found evidence that errors in 3-D motion perception depend on the contrast, viewing distance, and eccentricity of a stimulus. These errors include a lateral bias in perceived motion direction and a surprising tendency to misreport approaching motion as receding and vice versa. In sum, we present a Bayesian model that provides a parsimonious account for a range of systematic misperceptions of motion in naturalistic environments.
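The optimal-inference account sketched above can be illustrated with a minimal one-dimensional example (an illustrative sketch in the spirit of the model, not the authors' implementation): combining a Gaussian likelihood around a noisy velocity measurement with a zero-mean "slow motion" prior yields a posterior mean that is shrunk toward zero, and the shrinkage grows with measurement noise, e.g. at low contrast.

```python
def map_velocity(v_measured, sigma_meas, sigma_prior=1.0):
    """MAP estimate of velocity under a Gaussian likelihood
    N(v_measured, sigma_meas^2) and a zero-mean slow-motion
    prior N(0, sigma_prior^2): the standard shrinkage formula."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_meas**2)
    return w * v_measured

v = 2.0  # measured lateral velocity (arbitrary units)
high_contrast = map_velocity(v, sigma_meas=0.2)  # low sensory noise
low_contrast = map_velocity(v, sigma_meas=2.0)   # high sensory noise
# The low-contrast (noisier) estimate is biased more strongly
# toward zero, mirroring the contrast-dependent errors reported.
```

This shrinkage is the qualitative mechanism behind the predicted lateral bias: components of 3D motion that are measured less reliably are pulled harder toward the prior.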
On the Aperture Problem of Binocular 3D Motion Perception
Vision
Like many predators, humans have forward-facing eyes that are set a short distance apart so that an extensive region of the visual field is seen from two different points of view. The human visual system can establish a three-dimensional (3D) percept from the projection of images into the left and right eye. How the visual system integrates local motion and binocular depth in order to accomplish 3D motion perception is still under investigation. Here, we propose a geometric-statistical model that combines noisy velocity constraints with a spherical motion prior to solve the aperture problem in 3D. In two psychophysical experiments, it is shown that instantiations of this model can explain how human observers disambiguate 3D line motion direction behind a circular aperture. We discuss the implications of our results for the processing of motion and dynamic depth in the visual system.
Binocular processing of motion: Some unresolved questions
Spatial vision, 2009
The unresolved questions relating to binocular processing of motion include: Is the perceived speed of the motion in depth (MID) of an approaching object inversely proportional to the time to collision? What visual information supports judgements of the direction of MID? What is the relation between binocular and monocular processing in the perception of MID? We review whether the perception of stereomotion in depth of a monocularly visible object is caused entirely by a rate of change of disparity, and conclude that the difference between the horizontal velocities of the object's left and right retinal images makes at most only a small contribution to speed discrimination, although conclusions may differ for detection, perceived speed, and directional discrimination. We review laboratory evidence on the relative importance of binocular and monocular information for interceptive action and collision avoidance and conclude that, beyond considerable intersubject variability, the relative importance depends on the physical size of the approaching object, its distance, and, if the object is nonspherical, its direction of motion and whether it is rotating. Finally, we compare attempts to determine whether the human visual system contains a mechanism specialized for the speed of cyclopean motion within a frontoparallel plane, and find the question ill-posed.
Vision Research, 1996
We examine how various sources of information contribute to the percept of motion in depth. Subjects were presented with targets moving in depth and were asked to judge their velocities and final positions. On each presentation, the target's position relative to the two eyes (target vergence), the size of the target's retinal image, and the difference in this image's position relative to that of the background in the two eyes (relative disparity) each either changed as they normally would for a target moving at a fixed speed towards the observer, or did not change at all. Subjects' judgements for various such combinations show that all three sources of information influence both the perceived velocity of motion in depth and the final perceived position, but in different ways. This is not too surprising, because the assumptions on which the use of each source of information rests differ between the two tasks. We propose that the way the different sources are combined is governed by the likelihood that the assumptions required to use each source hold under the given circumstances.
Prior depth information can bias motion perception
Journal of Vision, 2004
Previous studies of the motion aperture problem have shown that the direction of grating motion can be biased by using binocular disparity to designate borders of the aperture as intrinsic (belonging to the grating) or extrinsic (resulting from occlusion of the grating). Observers report motion in the direction of the extrinsic border, as if the grating were extended and moving underneath an occluding surface. Here we investigate whether prior information about depth ordering, given by structure-from-motion, can bias the perceived motion direction of a subsequent moving grating in a similar manner. We presented an aperture stimulus that rotated about its vertical and horizontal axes, revealing the depth relationships (intrinsic and extrinsic) of the aperture borders. The grating then translated within the aperture and observers reported the direction of perceived motion. The test stimulus contained no information about the depth ordering of the scene. We found that observers' reported motion direction shifted toward the direction of the occluding edges, consistent with the intrinsic-extrinsic border predictions. These results indicate that prior scene information, not just depth information explicitly defined in the test stimulus, is used to help solve the motion aperture problem.