Discovery and representation of human strategies for visual search
Related papers
Orientation anisotropies in visual search revealed by noise
Journal of Vision, 2007
The human visual system is remarkably adept at finding objects of interest in cluttered visual environments, a task termed visual search. Because the human eye is highly foveated, it accomplishes this by making many discrete fixations linked by rapid eye movements called saccades. In such naturalistic tasks, we know very little about how the brain selects saccadic targets (the fixation loci). In this paper, we use a novel technique akin to psychophysical reverse correlation and stimuli that emulate the natural visual environment to measure observers' ability to locate a low-contrast target of unknown orientation. We present three main discoveries. First, we provide strong evidence for saccadic selectivity for spatial frequencies close to the target's central frequency. Second, we demonstrate that observers have distinct, idiosyncratic biases to certain orientations in saccadic programming, although there were no priors imposed on the target's orientation. These orientation biases cover a subset of the near-cardinal (horizontal/vertical) and near-oblique orientations, with orientations near vertical being the most common across observers. Further, these idiosyncratic biases were stable across time. Third, within observers, very similar biases exist for foveal target detection accuracy. These results suggest that saccadic targeting is tuned for known stimulus dimensions (here, spatial frequency) and also has some preference or default tuning for uncertain stimulus dimensions (here, orientation).
Human search for a target on a textured background is consistent with a stochastic model
Journal of Vision, 2016
Previous work has demonstrated that search for a target in noise is consistent with the predictions of the optimal search strategy, both in the spatial distribution of fixation locations and in the number of fixations observers require to find the target. In this study we describe a challenging visual-search task and compare the number of fixations required by human observers to find the target to predictions made by a stochastic search model. This model relies on a target-visibility map based on human performance in a separate detection task. If the model does not detect the target, then it selects the next saccade by randomly sampling from the distribution of saccades that human observers made. We find that a memoryless stochastic model matches human performance in this task. Furthermore, we find that the similarity in the distribution of fixation locations between human observers and the ideal observer does not replicate: Rather than making the signature doughnut-shaped distribution predicted by the ideal search strategy, the fixations made by observers are best described by a central bias. We conclude that, when searching for a target in noise, humans use an essentially random strategy, which achieves near optimal behavior due to biases in the distributions of saccades we have a tendency to make. The findings reconcile the existence of highly efficient human search performance with recent studies demonstrating clear failures of optimality in single and multiple saccade tasks.
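The memoryless stochastic model described in this abstract, detect the target with some fixation-dependent probability, otherwise draw the next saccade at random from an empirical pool, can be sketched roughly as follows. All names, the Gaussian visibility map, and the synthetic saccade pool are illustrative assumptions, not the authors' implementation (which uses a visibility map measured from human detection data):

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_search(target_pos, visibility, saccade_pool, max_fixations=100):
    """Memoryless stochastic searcher (illustrative sketch).

    visibility(fix, target) -> probability of detecting the target
    from the current fixation; saccade_pool is a set of saccade
    vectors to sample from (standing in for human saccade data).
    """
    fix = np.zeros(2)  # start at the display centre
    for n in range(1, max_fixations + 1):
        if rng.random() < visibility(fix, target_pos):
            return n  # target detected on fixation n
        # memoryless: next saccade drawn at random, no memory of past fixations
        fix = fix + saccade_pool[rng.integers(len(saccade_pool))]
    return max_fixations

# toy visibility map: detectability falls off with eccentricity (assumed form)
def visibility(fix, target, d0=0.9, sigma=2.0):
    return d0 * np.exp(-np.sum((fix - target) ** 2) / (2 * sigma ** 2))

saccades = rng.normal(0.0, 2.0, size=(500, 2))  # stand-in saccade distribution
trials = [stochastic_search(np.array([3.0, -1.0]), visibility, saccades)
          for _ in range(200)]
print(np.mean(trials))  # mean number of fixations needed to find the target
```

Because the searcher keeps no record of where it has looked, any match to human fixation counts comes entirely from the shapes of the visibility map and the saccade distribution, which is the paper's central point.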
Journal of Vision, 2006
Visual search experiments have usually involved the detection of a salient target in the presence of distracters against a blank background. In such high signal-to-noise scenarios, observers have been shown to use visual cues such as color, size, and shape of the target to program their saccades during visual search. The degree to which these features affect search performance is usually measured using reaction times and detection accuracy. We asked whether human observers are able to use target features to succeed in visual search tasks in stimuli with very low signal-to-noise ratios. Using the classification image analysis technique, we investigated whether observers used structural cues to direct their fixations as they searched for simple geometric targets embedded at very low signal-to-noise ratios in noise stimuli that had the spectral characteristics of natural images. By analyzing properties of the noise stimulus at observers' fixations, we were able to reveal idiosyncratic, target-dependent features used by observers in our visual search task. We demonstrate that even in very noisy displays, observers do not search randomly, but in many cases they deploy their fixations to regions in the stimulus that resemble some aspect of the target in their local image features.
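The classification-image analysis this abstract relies on, averaging the noise around fixated locations so that any target-like structure the observer was drawn to survives while unrelated noise cancels, can be illustrated with a minimal sketch. The function name, patch size, and the synthetic demo below are assumptions for illustration, not the authors' code:

```python
import numpy as np

def classification_image(noise_stimuli, fixations, patch=8):
    """Average the noise patch around each fixation.

    Features that attract fixations survive the averaging;
    uncorrelated noise averages toward zero.
    """
    half = patch // 2
    acc = np.zeros((patch, patch))
    n = 0
    for stim, (r, c) in zip(noise_stimuli, fixations):
        # keep only fixations far enough from the border for a full patch
        if half <= r <= stim.shape[0] - half and half <= c <= stim.shape[1] - half:
            acc += stim[r - half:r + half, c - half:c + half]
            n += 1
    return acc / max(n, 1)

# demo: the observer's fixations land on a faint vertical bar hidden in noise
rng = np.random.default_rng(1)
stims, fixes = [], []
for _ in range(300):
    s = rng.normal(0.0, 1.0, (32, 32))
    s[12:20, 16] += 0.5      # weak vertical bar centred at (16, 16)
    stims.append(s)
    fixes.append((16, 16))   # fixation on the bar
ci = classification_image(stims, fixes)
# the bar's column emerges in the average; the surround stays near zero
print(ci[:, 4].mean(), ci[:, 0].mean())
```

In the paper's actual analysis the averaged patches come from observers' recorded fixations on naturalistic noise, which is what reveals the idiosyncratic, target-dependent features reported.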
Psychological Research-psychologische Forschung, 1983
In research on visual search within a single eye fixation, a number of different tasks are used and referred to interchangeably. In a previous study, we showed that there are differences between "go-no go" tasks and "yes-no" tasks, and we introduced a tentative model in order to explain these differences. In the present study a "go-no go" task and a "detection" task are compared under conditions which are as equal as possible. Traditional views of the visual search process predict no essential differences between the two tasks. The tentative model predicts a steeper slope of the array-size function in the "detection" task than in the "go-no go" task and predicts that this difference in slopes is stable with practice. The results obtained appeared in accordance with the predictions of the tentative model. This result supports the point of view that response-related factors strongly contribute to the slope of the array-size function. The data are not in accord with predictions following from Estes' (1972) interactive channels model.
Attentional Cues in Real Scenes, Saccadic Targeting, and Bayesian
2016
ABSTRACT—Performance finding a target improves when artificial cues direct covert attention to the target's probable location or locations, but how do predictive cues help observers search for objects in real scenes? Controlling for target detectability and retinal eccentricity, we recorded observers' first saccades during search for objects that appeared in expected and unexpected locations within real scenes. As has been found with synthetic images and cues, accuracy of first saccades was significantly higher when the target appeared at an expected location rather than an unexpected location. Observers' saccades with target-absent images make it possible to distinguish two mechanisms that might mediate this effect: limited attentional resources versus differential weighting of information (Bayesian priors). Endpoints of first saccades in target-absent images were significantly closer to the expected than the unexpected locations, a result consistent with the differential-weighting ...
Philosophical transactions of the Royal Society of London. Series B, Biological sciences, 2017
We explored the influence of early scene analysis and visible object characteristics on eye movements when searching for objects in photographs of scenes. On each trial, participants were shown sequentially either a scene preview or a uniform grey screen (250 ms), a visual mask, the name of the target, and then the scene, now including the target at a likely location. During the participant's first saccade of the search, the target location was changed to: (i) a different likely location, (ii) an unlikely but possible location or (iii) a very implausible location. The results showed that the first saccade landed more often on the likely location in which the target re-appeared than on unlikely or implausible locations, and overall the first saccade landed nearer the first target location with a preview than without. Hence, rapid scene analysis influenced initial eye movement planning, but availability of the target rapidly modified that plan. After the target moved, it was found more ...