Inhibition and anticipation in visual search: evidence from preview search for colour-defined static items

Prioritization in visual search: Visual marking is not dependent on a mnemonic search

Perception & Psychophysics, 2002

Visual marking (VM) refers to our ability to completely exclude old items from search when new stimuli are presented in our visual field. We examined whether this ability reflects an attentional scan of the old items, possibly allowing observers to apply inhibition of return or maintain a memory representation of already seen locations. In four experiments, we compared performance in two search conditions. In the double-search (DS) condition, we required participants to pay attention to a first set of items by having them search for a target within the set. Subsequently, they had to search a second set while the old items remained in the field. In the VM condition, the participants expected the target only to be in the second (new) set. Selection of new items in the DS condition was relatively poor and was always worse than would be expected if only the new stimuli had been searched. In contrast, selection of the new items in the VM condition was good and was equal to what would be expected if there had been an exclusive search of the new stimuli. These results were not altered when differences in Set 1 difficulty, task switching, and response generation were controlled for. We conclude that the mechanism of VM is distinct from mnemonic and/or serial inhibition-of-return processes as involved in search, although we also discuss possible links to more global and flexible inhibition-of-return processes not necessarily related to search.

Object-based selection: The role of attentional shifts

Perception & Psychophysics, 2002

The objective of this paper was to investigate under what conditions object-based effects are observed. Recently, Watson and Kramer (1999) used a divided-attention task and showed that unless top-down factors induce a bias toward selection at a higher level, object-based effects are obtained when same-object targets belong to the same uniformly connected (single-UC) region, but not when they belong to different single-UC regions grouped into a higher order object (grouped-UC regions). We refine this claim by proposing that a critical factor in determining whether or not object-based effects with grouped-UC regions are observed is the need to shift attention. The results of four experiments support this hypothesis. Stimuli and displays were similar to those used by Watson and Kramer (1999). Subjects had to make size judgments. Using different paradigms, we obtained object-based effects when the task required shifts of attention (spatial cuing, same vs. different judgment with asynchronous target onsets), but not when attention remained either broadly distributed (same vs. different judgment with simultaneous targets) or tightly focused (response competition paradigm).

The psychophysics of visual search

Vision Research, 2000

Most theories of visual search emphasize issues of limited versus unlimited capacity and serial versus parallel processing. In the present article, we suggest a broader framework based on two principles, one empirical and one theoretical. The empirical principle is to focus on conditions at the intersection of visual search and the simple detection and discrimination paradigms of spatial vision. Such simple search conditions avoid artifacts and phenomena specific to more complex stimuli and tasks. The theoretical principle is to focus on the distinction between high and low threshold theory. While high threshold theory is largely discredited for simple detection and discrimination, it persists in the search literature. Furthermore, a low threshold theory such as signal detection theory can account for some of the phenomena attributed to limited capacity or serial processing. In the body of this article, we compare the predictions of high threshold theory and three versions of signal detection theory to the observed effects of manipulating set size, discriminability, number of targets, response bias, external noise, and distractor heterogeneity. For almost all cases, the results are inconsistent with high threshold theory and are consistent with all three versions of signal detection theory. In the Discussion, these simple theories are generalized to a larger domain that includes search asymmetry, multidimensional judgements including conjunction search, response time, search with multiple eye fixations and more general stimulus conditions. We conclude that low threshold theories can account for simple visual search without invoking mechanisms such as limited capacity or serial processing.
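The abstract's central claim, that a low threshold theory such as signal detection theory can reproduce set-size effects without limited capacity or serial processing, can be illustrated with a minimal simulation. The sketch below is not the authors' model; it is a standard max-rule SDT observer with illustrative parameter values (d' = 2.0, a fixed criterion of 1.5). On target-present trials one item is drawn from a shifted noise distribution and the rest from unit-variance noise; the observer reports "present" whenever the maximum item strength exceeds the criterion. Accuracy declines with set size purely because the maximum of more noise samples is more likely to exceed the criterion, with no capacity limit anywhere in the model.

```python
import random

def simulate_accuracy(set_size, d_prime=2.0, criterion=1.5,
                      n_trials=20000, seed=1):
    """Proportion correct for a max-rule SDT observer in yes/no search.

    Half the trials contain a target (one item ~ N(d', 1), the rest
    ~ N(0, 1)); the other half contain only distractors. The observer
    responds "present" iff the maximum item strength exceeds the criterion.
    """
    rng = random.Random(seed)
    correct = 0
    for t in range(n_trials):
        target_present = (t % 2 == 0)
        strengths = [rng.gauss(0.0, 1.0) for _ in range(set_size)]
        if target_present:
            strengths[0] = rng.gauss(d_prime, 1.0)  # target item
        said_present = max(strengths) > criterion   # max-rule decision
        if said_present == target_present:
            correct += 1
    return correct / n_trials

for n in (2, 4, 8, 16):
    print(n, round(simulate_accuracy(n), 3))
```

Running this shows accuracy falling monotonically as set size grows from 2 to 16, even though every item is processed in parallel with unchanged fidelity: the set-size cost comes entirely from noise and the decision rule, which is the point the abstract argues against high threshold accounts.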

Top-down guidance of visual search: A computational account

Visual Cognition, 2006

We present a revised version of the Selective Attention for Identification model (SAIM), using an initial feature detection process to code edge orientations. We show that the revised SAIM can simulate both efficient and inefficient human search, that it shows search asymmetries, and that top-down expectancies for targets play a major role in the model's selection. Predictions of the model for top-down effects are tested with human participants, and important similarities and dissimilarities are discussed.

Object-based selection under focused attention: A failure to replicate

Perception & Psychophysics, 2000

In a recent study, Lavie and Driver (1996) reported that object-based effects found with distributed attention disappear when attention is focused on a narrow area of the display. This finding stands in contrast with previous reports of object-based effects under conditions of focused attention (e.g., Atchley & Kramer, 1998; Egly, Driver, & Rafal, 1994). The present study was an attempt to replicate Lavie and Driver's finding, using a similar task and stimuli. While Lavie and Driver's object-based effect in the distributed attention condition was replicated, its absence in the focused attention condition was not. In the two experiments reported in this paper, object-based effects were found under conditions of both distributed and focused attention, with no difference in the magnitude of the object-based effects in the two conditions. It is concluded that, in contrast with Lavie and Driver's claim, the initial spatial setting of attention does not influence object-based constraints on the distribution of attention. A central issue in the study of visual selective attention concerns the representational format in which selection takes place. In the last 15 years, numerous studies have investigated whether attentional selection operates within space-based or within object-based representations (see Egeth & Yantis, 1997, for a review). Evidence coming from a wide range of paradigms shows that the distribution of attentional resources is constrained by grouping factors other than proximity, thus providing strong support for the object-based view. Using the Eriksen response competition paradigm or flanker task (Eriksen & Hoffman, 1973), several experiments showed that distractors slow response to a target more when they are grouped with it (e.g., by common color or contour) than when they are not.

Visual search is modulated by action intentions

2002

The influence of action intentions on visual selection processes was investigated in a visual search paradigm. A predefined target object with a certain orientation and color was presented among distractors, and subjects had to either look and point at the target or look at and grasp the target. Target selection processes prior to the first saccadic eye movement were modulated by the different action intentions. Specifically, fewer saccades to objects with the wrong orientation were made in the grasping condition than in the pointing condition, whereas the number of saccades to an object with the wrong color was the same in the two conditions. Saccadic latencies were similar under the different task conditions, so the results cannot be explained by a speed-accuracy trade-off. The results suggest that a specific action intention, such as grasping, can enhance visual processing of action-relevant features, such as orientation. Together, the findings support the view that visual attention can be best understood as a selection-for-action mechanism.