Active vision using an analog VLSI model of selective attention
Related papers
A software-hardware selective attention system
2003
Selective attention is a biological mechanism for processing salient subregions of the sensory input space while suppressing non-salient inputs. We present a hardware selective attention system implemented with a neuromorphic VLSI chip interfaced to a workstation via a custom PCI board, based on an address-event (spike-based) representation of signals. The chip selects salient inputs and sequentially shifts from one salient input to the next. The PCI board acts as an interface between the chip and an algorithm that generates saliency maps. We present experimental data showing the system's response to saliency maps generated from natural scenes.
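The address-event representation mentioned above can be illustrated with a small software sketch: active pixels are transmitted as a stream of (x, y) addresses rather than as a full frame. The function names and the simple threshold encoding here are illustrative assumptions, not the chip's actual protocol.

```python
# Minimal sketch of an address-event representation (AER) link, assuming a
# plain (x, y) address encoding; names are illustrative, not the chip's API.

def encode_events(frame, threshold):
    """Emit one address event per pixel whose intensity exceeds threshold."""
    events = []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if value > threshold:
                events.append((x, y))  # the 'address' of the spiking pixel
    return events

def decode_events(events, width, height):
    """Rebuild an activity map from a stream of address events."""
    activity = [[0] * width for _ in range(height)]
    for x, y in events:
        activity[y][x] += 1  # event counts approximate firing rate
    return activity

frame = [[0, 5, 0],
         [9, 0, 0],
         [0, 0, 7]]
events = encode_events(frame, threshold=4)
print(events)  # [(1, 0), (0, 1), (2, 2)]
```

Only the three active pixels generate traffic, which is the bandwidth advantage that makes AER attractive for chip-to-workstation links.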
A Computationally Efficient Visual Saliency Algorithm Suitable for an Analog CMOS Implementation
Neural Computation, 2018
Computer vision algorithms are often limited in their application by the large amount of data that must be processed. Mammalian vision systems mitigate this high bandwidth requirement by prioritizing certain regions of the visual field with neural circuits that select the most salient regions. This work introduces a novel and computationally efficient visual saliency algorithm for performing this neuromorphic attention-based data reduction. The proposed algorithm has the added advantage that it is compatible with an analog CMOS design while still achieving comparable performance to existing state-of-the-art saliency algorithms. This compatibility allows for direct integration with the analog-to-digital conversion circuitry present in CMOS image sensors. This integration leads to power savings in the converter by quantizing only the salient pixels. Further system-level power savings are gained by reducing the amount of data that must be transmitted and processed in the digital domain...
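The power-saving idea described above, quantizing only the salient pixels, can be sketched in software as a saliency-gated converter. The 8-bit quantizer, the threshold, and all names below are assumptions for illustration, not the paper's circuit.

```python
# Illustrative sketch of saliency-gated quantization: only pixels flagged as
# salient pass through the (software-modelled) ADC, reducing the number of
# conversions. The 8-bit quantizer and the 0.5 threshold are assumptions.

def quantize(value, bits=8, full_scale=1.0):
    levels = (1 << bits) - 1
    return round(min(max(value, 0.0), full_scale) / full_scale * levels)

def convert_salient(frame, saliency, threshold=0.5):
    """Quantize only pixels whose saliency exceeds the threshold."""
    out, conversions = [], 0
    for row_v, row_s in zip(frame, saliency):
        out_row = []
        for v, s in zip(row_v, row_s):
            if s > threshold:
                out_row.append(quantize(v))
                conversions += 1
            else:
                out_row.append(None)  # pixel never converted: power saved
        out.append(out_row)
    return out, conversions
```

In a typical scene only a small fraction of pixels are salient, so the conversion count, and hence converter power and downstream data volume, drops accordingly.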
Saliency-driven image acuity modulation on a reconfigurable silicon array of spiking neurons
2005
We have constructed a system that uses an array of 9,600 spiking silicon neurons, a fast microcontroller, and digital memory, to implement a reconfigurable network of integrate-and-fire neurons. The system is designed for rapid prototyping of spiking neural networks that require high-throughput communication with external address-event hardware. Arbitrary network topologies can be implemented by selectively routing address-events to specific internal or external targets according to a memory-based projective field mapping. The utility and versatility of the system is demonstrated by configuring it as a three-stage network that accepts input from an address-event imager, detects salient regions of the image, and performs spatial acuity modulation around a high-resolution fovea that is centered on the location of highest salience.
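The memory-based projective field mapping described above can be sketched as a lookup table that fans each incoming address-event out to (target, weight) pairs feeding integrate-and-fire neurons. The table layout, neuron parameters, and names below are illustrative assumptions, not the system's actual firmware format.

```python
# Sketch of memory-based projective-field routing onto integrate-and-fire
# neurons; table layout and parameters are illustrative assumptions.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.v, self.threshold, self.leak = 0.0, threshold, leak

    def receive(self, weight):
        """Integrate an incoming event; return True if the neuron fires."""
        self.v = self.v * self.leak + weight
        if self.v >= self.threshold:
            self.v = 0.0  # reset after spike
            return True
        return False

# Projective field: each source address fans out to (target, weight) pairs.
routing_table = {0: [(1, 0.6), (2, 0.6)],
                 1: [(2, 0.6)]}

neurons = {i: LIFNeuron() for i in range(3)}
spikes_out = []
for src in [0, 0, 1]:            # incoming address-event stream
    for tgt, w in routing_table.get(src, []):
        if neurons[tgt].receive(w):
            spikes_out.append(tgt)
```

Because connectivity lives in memory rather than in wiring, changing the network topology is just a matter of rewriting the routing table, which is what makes rapid prototyping possible.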
Saliency-Driven Image Acuity Modulation on a Reconfigurable Array of Spiking Silicon Neurons
Neural Information Processing Systems, 2004
Sensory Attention: Computational Sensor Paradigm for Low-Latency Adaptive Vision
The need for robust, self-contained, low-latency vision systems is growing in applications such as high-speed visual servoing and vision-based human-computer interfaces. Conventional vision systems can hardly meet this need because (1) latency is incurred by data-transfer and computational bottlenecks, and (2) there is no top-down feedback to adapt sensor performance for improved robustness. In this paper we present a tracking computational sensor, a VLSI implementation of sensory attention. The tracking sensor focuses attention on a salient feature in its receptive field and maintains this attention in world coordinates. Using both low-latency massively parallel processing and top-down sensory adaptation, the sensor reliably tracks features of interest while suppressing other irrelevant features that could interfere with the task at hand.
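The attend-and-track behaviour described above can be sketched as a winner-take-all over the saliency values combined with a top-down bias toward the previous winner, so the sensor follows one feature instead of hopping to a distractor. The Gaussian bias, its width, and the 1-D layout are illustrative assumptions.

```python
# Toy sketch of attend-and-track: a winner-take-all picks the most salient
# location, and a top-down bias keeps attention near the previous winner.
# The Gaussian bias and its width (sigma) are illustrative assumptions.

import math

def winner_take_all(saliency, prev=None, sigma=1.5):
    best, best_score = None, float('-inf')
    for i, s in enumerate(saliency):
        bias = 1.0
        if prev is not None:
            bias = math.exp(-((i - prev) ** 2) / (2 * sigma ** 2))
        score = s * bias  # top-down bias multiplies bottom-up salience
        if score > best_score:
            best, best_score = i, score
    return best

saliency_t0 = [0.2, 0.9, 0.3, 0.8, 0.1]
w0 = winner_take_all(saliency_t0)            # global maximum at index 1
saliency_t1 = [0.2, 0.8, 0.3, 0.9, 0.1]      # distractor grows at index 3
w1 = winner_take_all(saliency_t1, prev=w0)   # bias keeps attention at index 1
```

Without the bias, the second frame's winner would jump to the distractor; with it, attention stays on the tracked feature, which is the suppression behaviour the abstract describes.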
A real time implementation of the saliency-based model of visual attention on a simd architecture
Pattern Recognition, 2002
Visual attention is the ability to rapidly detect the visually salient parts of a given scene. Inspired by biological vision, the saliency-based algorithm efficiently models the visual attention process. Due to its complexity, the saliency-based model of visual attention needs, for a real-time implementation, more computational resources than are available in conventional processors. This work reports a real-time implementation of this attention model on a highly parallel Single Instruction Multiple Data (SIMD) architecture called ProtoEye. Tailored for low-level image processing, ProtoEye consists of a 2D array of mixed analog-digital processing elements (PEs). The operations required for visual attention computation are optimally distributed over the analog and digital parts. The analog diffusion network is used to implement the spatial-filtering-based transformations, such as the conspicuity operator and the competitive normalization of conspicuity maps, whereas the digital part of ProtoEye allows the implementation of logical and arithmetical operations, for instance the integration of the normalized conspicuity maps into the final saliency map. Using 64×64 gray-level images, the attention process implemented on ProtoEye operates in real time, running at a frequency of 14 images per second.
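The integration step described above can be sketched in a few lines: each conspicuity map is normalized so that maps with one dominant peak are promoted over maps with many similar peaks, then the normalized maps are summed into the final saliency map. This is a simplified software version of the competitive normalization, not the ProtoEye implementation.

```python
# Rough sketch of conspicuity-map normalization and integration. The
# (max - mean)^2 promotion rule is a simplified stand-in for the chip's
# competitive normalization.

def normalize_map(cmap):
    """Scale to [0, 1], then weight by (max - mean)^2 so unique peaks win."""
    flat = [v for row in cmap for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:
        return [[0.0] * len(row) for row in cmap]
    scaled = [[(v - lo) / (hi - lo) for v in row] for row in cmap]
    mean = sum(v for row in scaled for v in row) / len(flat)
    w = (1.0 - mean) ** 2  # max of the scaled map is 1.0
    return [[v * w for v in row] for row in scaled]

def saliency(maps):
    """Sum the normalized conspicuity maps into the final saliency map."""
    norm = [normalize_map(m) for m in maps]
    h, w = len(maps[0]), len(maps[0][0])
    return [[sum(m[y][x] for m in norm) for x in range(w)] for y in range(h)]
```

A map with a single strong peak (high max, low mean) gets a large weight, while a near-uniform map is attenuated, so one conspicuous feature cannot be drowned out by a noisy channel.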
Real-Time Visual Saliency Architecture for FPGA With Top-Down Attention Modulation
IEEE Transactions on Industrial Informatics, 2014
Biological vision uses attention to reduce the visual bandwidth, simplifying higher-level processing. This paper presents a model and its real-time hardware architecture on a field-programmable gate array (FPGA), to be integrated in a robotic system that emulates this powerful biological process. It is based on the combination of bottom-up saliency and top-down task-dependent modulation. The bottom-up stream is deployed including local energy, orientation, color-opponency, and motion maps. The most novel part of this work is the saliency modulation by two high-level features: 1) optical flow and 2) disparity. Furthermore, the influence of the features may be adjusted depending on the application. The proposed system reaches 180 fps. Finally, an example shows its modulation potential for driving-assistance systems.
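The bottom-up/top-down combination described above can be sketched as follows: the bottom-up feature maps are fused, then modulated by high-level maps (optical flow, disparity) whose gain is set per application. The multiplicative modulation rule and all names below are assumptions for illustration, not the paper's architecture.

```python
# Illustrative sketch of saliency modulation by high-level features. The
# multiplicative (1 + gain * feature) rule is an assumption; the paper's
# FPGA architecture may combine the maps differently.

def modulated_saliency(bottom_up, top_down, td_gains):
    """bottom_up: list of 2-D maps; top_down: dict name -> 2-D map."""
    h, w = len(bottom_up[0]), len(bottom_up[0][0])
    sal = [[sum(m[y][x] for m in bottom_up) for x in range(w)]
           for y in range(h)]
    for name, tmap in top_down.items():
        g = td_gains.get(name, 0.0)  # application-dependent influence
        sal = [[sal[y][x] * (1.0 + g * tmap[y][x]) for x in range(w)]
               for y in range(h)]
    return sal

energy = [[0.2, 0.4], [0.1, 0.3]]
flow   = [[0.0, 1.0], [0.0, 0.0]]  # motion only at the top-right pixel
sal = modulated_saliency([energy], {'flow': flow}, {'flow': 2.0})
```

Raising the flow gain makes moving regions dominate the map, as a driving-assistance task would want, while a zero gain falls back to pure bottom-up saliency.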
Low-Power Tracking Image Sensor Based on Biological Models of Attention
2007
This paper presents the implementation of a low-power tracking CMOS image sensor based on biological models of attention. The presented imager allows tracking of up to N salient targets in the field of view. Employing a "smart" image sensor architecture, where all image processing is implemented on the sensor focal plane, the proposed imager allows a reduction of the amount of data transmitted...
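Selecting up to N salient targets, as the imager above does, can be sketched as repeated maximum selection with inhibition of return: pick the global maximum of the saliency map, suppress its neighbourhood, and repeat. The neighbourhood radius and all names are illustrative parameters, not the sensor's circuit.

```python
# Toy sketch of N-target selection via inhibition of return. Radius and the
# zeroing suppression rule are illustrative assumptions.

def select_targets(saliency, n, radius=1):
    sal = [row[:] for row in saliency]  # work on a copy
    targets = []
    for _ in range(n):
        v, x, y = max((v, x, y)
                      for y, row in enumerate(sal)
                      for x, v in enumerate(row))
        if v <= 0:
            break  # nothing salient left
        targets.append((x, y))
        for yy in range(max(0, y - radius), min(len(sal), y + radius + 1)):
            for xx in range(max(0, x - radius), min(len(sal[0]), x + radius + 1)):
                sal[yy][xx] = 0  # inhibit the selected neighbourhood
    return targets

smap = [[0.1, 0.9, 0.1],
        [0.1, 0.1, 0.1],
        [0.8, 0.1, 0.1]]
print(select_targets(smap, n=2))  # [(1, 0), (0, 2)]
```

Without the suppression step, both selections would land on the same peak; inhibition of return is what forces the second pick onto a distinct target.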
Real-time visual attention on a massively parallel SIMD architecture
Real-Time Imaging, 2003
Visual attention is the ability to rapidly detect the visually salient parts of a given scene on which higher level vision tasks, such as object recognition, can focus. Found in biological vision, this mechanism represents a fundamental tool for computer vision. This paper reports the first real-time implementation of the complete visual attention mechanism on a compact and low-power architecture. Specifically, the saliency-based model of visual attention was implemented on a highly parallel single instruction, multiple data architecture called ProtoEye. Conceived for general purpose low-level image processing, ProtoEye consists of a 2D array of mixed analog-digital processing elements. To reach real-time performance, the operations required for visual attention computation were optimally distributed on the analog and digital parts. The currently available prototype runs at a frequency of 14 images/s and operates on 64×64 gray-level images. Extensive testing and run-time analysis of the system stress the strengths of the architecture.
Design and basic blocks of a neuromorphic VLSI analogue vision system
Neurocomputing, 2006
In this paper we present a complete neuromorphic image processing system and we report the development of an integrated CMOS low-power circuit to test the feasibility of its different stages. The image system consists of different parallel-processing stages: phototransduction, non-linear filtering, oscillatory segmentation network and post-processing to extract fundamental characteristics. The circuit emulates some parts of the behaviour of biological neural networks as found in the retina and the visual cortex of living beings by adopting the neuromorphic approach that takes advantage of analogue VLSI electronics. The final objective is to develop a small and low-power system embedded in a single focal-plane integrated circuit suitable for portable applications. Each stage is briefly described. Simulations and experimental results of some basic blocks are also reported.