Sio-hoi Ieng - Academia.edu

Papers by Sio-hoi Ieng

Geometric construction of the caustic curves for catadioptric sensors

2004 International Conference on Image Processing (ICIP '04), 2004

Most catadioptric cameras rely on the single-viewpoint constraint, which is hardly ever fulfilled. There exist many works on non-single-viewpoint catadioptric sensors satisfying specific resolutions. The computation of the caustic curve becomes essential if precision and flexibility are sought. Existing solutions are unfortunately too specific to a class of curves and require heavy precomputations. This paper presents…

Event-driven stereo vision with orientation filters

The recently developed Dynamic Vision Sensors (DVS) sense dynamic visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, applying the matching algorithm to the events generated by the Gabor filters rather than to those produced directly by the DVS. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.
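
The abstract above describes orienting events with Gabor filters before stereo matching; the paper's exact filter bank and event-to-filter mapping are not reproduced here. The sketch below is a minimal, hypothetical illustration (all function names and parameters such as `wavelength`, `sigma` and the 7 × 7 kernel size are placeholder assumptions): events are accumulated into a count map and each event is labeled with the orientation of the strongest Gabor response, so a stereo matcher could restrict candidate pairs to events with consistent orientation.

```python
import numpy as np

def gabor_kernel(size, theta, wavelength=4.0, sigma=2.0):
    # Gabor kernel with carrier along direction theta (radians);
    # isotropic Gaussian envelope for simplicity.
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xs * np.cos(theta) + ys * np.sin(theta)
    return (np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))
            * np.cos(2.0 * np.pi * xr / wavelength))

def label_event_orientations(events, width, height, n_orient=4, size=7):
    # Accumulate events (x, y, t, p) into a count map, then score each
    # event location against a small bank of oriented Gabor kernels and
    # attach the index of the dominant orientation to the event.
    counts = np.zeros((height, width))
    for x, y, _t, _p in events:
        counts[y, x] += 1.0
    half = size // 2
    padded = np.pad(counts, half)  # zero-pad so border patches stay size x size
    bank = [gabor_kernel(size, k * np.pi / n_orient) for k in range(n_orient)]
    labeled = []
    for x, y, t, p in events:
        patch = padded[y:y + size, x:x + size]  # centered on (x, y) after padding
        responses = [abs(np.sum(patch * kern)) for kern in bank]
        labeled.append((x, y, t, p, int(np.argmax(responses))))
    return labeled
```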

Image Sensor Model Using Geometric Algebra: From Calibration to Motion Estimation

Geometric Algebra Computing, 2010

Synchronization Using Shapes

Proceedings of the British Machine Vision Conference, 2008

Synchronicity is a strong restriction that can be difficult to obtain in many wide-ranging applications. This paper studies the methodology of using a non-synchronized camera network. We consider cases where the acquisition frequency of each element of the network can differ, including desynchronization due to transmission delays inside the network. This work introduces a new approach to retrieve the temporal synchronization from the multiple unsynchronized frames of a scene. The mathematical characterization of the 3D structure of scenes, combined with a statistical stratum, is used as a tool to estimate the synchronization values. This paper presents experimental results on real data for each step of synchronization retrieval.

Asynchronous event-based high speed vision for microparticle tracking

Journal of Microscopy, 2012

This paper presents a new high-speed vision system using an asynchronous address-event representation camera. Within this framework, an asynchronous event-based real-time Hough circle transform is developed to track microspheres. The technology presented in this paper allows for robust real-time event-based multi-object position detection at a frequency of several kHz with a low computational cost. Brownian motion is also detected within this context with both high speed and precision. The work is suited to automated or remotely operated microrobotic systems, fulfilling their need for extremely fast vision feedback. It is also a very promising solution for the analysis of microscale physical phenomena, and particularly for micro/nanoscale force measurement.
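
As a rough sketch of how an event-driven Hough circle transform can work (the paper's actual accumulator and decay scheme are not specified in the abstract, so the exponential decay, `tau` and the single known radius below are assumptions), each incoming event votes for all circle centres consistent with it, and votes fade over time so the accumulator follows moving microspheres:

```python
import numpy as np

def make_circle_offsets(radius, n=64):
    # Integer pixel offsets lying on a circle of the known microsphere radius.
    angles = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    pts = np.round(np.stack([radius * np.cos(angles),
                             radius * np.sin(angles)], axis=1)).astype(int)
    return np.unique(pts, axis=0)

class EventHoughCircle:
    # Event-driven Hough voting for circle centres of one known radius.
    # Votes decay exponentially with time (assumed model), so only recent
    # activity contributes and the peak tracks the moving sphere.
    def __init__(self, width, height, radius, tau=10e3):
        self.acc = np.zeros((height, width))
        self.last_t = 0.0
        self.tau = tau  # decay constant in timestamp units (e.g. microseconds)
        self.offsets = make_circle_offsets(radius)
        self.h, self.w = height, width

    def update(self, x, y, t):
        # Decay old votes, then vote for all centres consistent with (x, y).
        self.acc *= np.exp(-(t - self.last_t) / self.tau)
        self.last_t = t
        cx = x + self.offsets[:, 0]
        cy = y + self.offsets[:, 1]
        ok = (cx >= 0) & (cx < self.w) & (cy >= 0) & (cy < self.h)
        self.acc[cy[ok], cx[ok]] += 1.0
        # Current best centre estimate as (row, col) = (y, x).
        return np.unravel_index(np.argmax(self.acc), self.acc.shape)
```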

Plenoptic cameras in real-time robotics

The International Journal of Robotics Research, 2013

Real-time vision-based navigation is a difficult task, largely due to the limited optical properties of the single cameras usually mounted on robots. Multiple-camera systems such as polydioptric sensors provide more efficient and precise solutions for autonomous navigation. They are particularly suitable for motion estimation because they allow the problem to be formulated as a linear optimization. These sensors capture the visual information in a more complete form, the plenoptic function, which encodes the spatial and temporal light radiance of the scene. Polydioptric sensors are rarely used in robotics because they are usually thought to increase the amount of data produced and to require more computational power. This paper shows that, if properly designed, these cameras provide more accurate estimation results in mobile robotics navigation. It also shows that a plenoptic vision sensor with a resolution ranging from 3 × 3 to 40 × 30 pixels provides higher accuracy than a monocular SLAM running on a 320 × 240 pixel camera. The paper also gives a complete scheme to design usable real-time plenoptic cameras for mobile robotics applications by establishing the link between velocity, resolution and motion estimation accuracy. Finally, experiments on a mobile robot are shown, allowing comparison between optimal plenoptic visual sensors and single high-resolution cameras. The estimation with the plenoptic sensor is more accurate than with a monocular high-definition camera, with a processing time 100 times lower.

A Fisher-Rao Metric for Paracatadioptric Images of Lines

International Journal of Computer Vision, 2012

In a central paracatadioptric imaging system, a perspective camera takes an image of a scene reflected in a paraboloidal mirror. A 360° field of view is obtained, but the image is severely distorted. In particular, straight lines in the scene project to circles in the image. These distortions make it difficult to detect projected lines using standard image processing algorithms.

On the use of orientation filters for 3D reconstruction in event-driven stereo vision

Frontiers in Neuroscience, 2014

The recently developed Dynamic Vision Sensors (DVS) sense visual information asynchronously and code it into trains of events with sub-microsecond temporal resolution. This high temporal precision makes the output of these sensors especially suited for dynamic 3D visual reconstruction, by matching corresponding events generated by two different sensors in a stereo setup. This paper explores the use of Gabor filters to extract information about the orientation of the object edges that produce the events, therefore increasing the number of constraints applied to the matching algorithm. This strategy provides more reliably matched pairs of events, improving the final 3D reconstruction.

Asynchronous event-based Hebbian epipolar geometry

IEEE Transactions on Neural Networks, 2011

Dynamic Vision Sensors (DVS) have recently appeared as a new paradigm for vision sensing and processing.

Designing non constant resolution vision sensors via photosites rearrangement

2008 IEEE Conference on Cybernetics and Intelligent Systems, 2008

Non-conventional imaging sensors have been intensively investigated in recent works. Research is conducted to design devices capable of providing panoramic views without the need for a mosaicing process. These devices combine optical lenses and/or non-planar reflective surfaces with a standard pinhole camera. We present in this paper an analysis of a pixel-rearranged sensor adapted to the distortions induced by mirrors. In particular,…

Geometric Construction of the Caustic Surface of Catadioptric Non-Central Sensors

Most catadioptric cameras rely on the single-viewpoint constraint, which is hardly ever fulfilled. There exist many works on non-single-viewpoint catadioptric sensors satisfying specific resolutions. In such configurations, the computation of the caustic curve becomes essential. Existing solutions are unfortunately too specific to a class of curves and require a heavy computation load. This paper presents…

An Asynchronous Neuromorphic Event-Driven Visual Part-Based Shape Tracking

IEEE Transactions on Neural Networks and Learning Systems, 2015

Object tracking is an important step in many artificial vision tasks. Current state-of-the-art implementations remain too computationally demanding for the problem to be solved in real time with high dynamics. This paper presents a novel real-time method for visual part-based tracking of complex objects from the output of an asynchronous event-based camera. This work extends the pictorial structures model introduced by Fischler and Elschlager 40 years ago and introduces a new formulation of the problem, allowing the dynamic processing of visual input in real time at high temporal resolution using a conventional PC. It relies on the concept of representing an object as a set of basic elements linked by springs. These basic elements consist of simple trackers capable of successfully tracking a target with an ellipse-like shape at several kHz on a conventional computer. For each incoming event, the method updates the elastic connections established between the trackers and guarantees a desired geometric structure corresponding to the tracked object in real time. This introduces a high temporal elasticity to adapt to projective deformations of the tracked object in the focal plane. The elastic energy of this virtual mechanical system provides a quality criterion for tracking, and can be used to determine whether the measured deformations are caused by the perspective projection of the perceived object or by occlusions. Experiments on real-world data show the robustness of the method in the context of dynamic face tracking.
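
A minimal sketch of the springs-and-trackers idea described above, under assumed dynamics: the nearest tracker is attracted to each incoming event, spring forces restore the learned part geometry, and the total elastic energy serves as the deformation/occlusion cue. The gains (`alpha`, `beta`, `k`) and the update rule are illustrative placeholders, not the paper's equations.

```python
import numpy as np

class SpringPartTracker:
    # Part trackers linked by springs: per event, the nearest tracker is
    # pulled toward the event while springs pull the set back toward the
    # learned geometry (rest lengths recorded at initialization).
    def __init__(self, positions, edges, alpha=0.1, beta=0.05, k=1.0):
        self.p = np.asarray(positions, dtype=float)   # (n, 2) tracker centres
        self.edges = list(edges)                      # (i, j) spring pairs
        self.rest = {e: np.linalg.norm(self.p[e[0]] - self.p[e[1]])
                     for e in self.edges}
        self.alpha, self.beta, self.k = alpha, beta, k

    def update(self, event_xy):
        e = np.asarray(event_xy, dtype=float)
        i = int(np.argmin(np.linalg.norm(self.p - e, axis=1)))
        self.p[i] += self.alpha * (e - self.p[i])     # data attraction
        for (a, b) in self.edges:                     # spring relaxation step
            d = self.p[b] - self.p[a]
            n = np.linalg.norm(d) + 1e-9
            f = self.k * (n - self.rest[(a, b)]) * d / n
            self.p[a] += self.beta * f
            self.p[b] -= self.beta * f

    def elastic_energy(self):
        # Deformation energy: the quality/occlusion cue the abstract mentions.
        return sum(0.5 * self.k * (np.linalg.norm(self.p[b] - self.p[a])
                                   - self.rest[(a, b)])**2
                   for (a, b) in self.edges)
```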

Asynchronous Event-Based Multikernel Algorithm for High-Speed Visual Features Tracking

IEEE Transactions on Neural Networks and Learning Systems, Jan 16, 2014

This paper presents a number of new methods for visual tracking using the output of an event-based asynchronous neuromorphic dynamic vision sensor. It allows the tracking of multiple visual features in real time, achieving an update rate of several hundred kilohertz on a standard desktop PC. The approach has been specially adapted to take advantage of the event-driven properties of these sensors by combining both spatial and temporal correlations of events in an asynchronous iterative framework. Various kernels, such as Gaussian, Gabor, combinations of Gabor functions, and arbitrary user-defined kernels, are used to track features from incoming events. The trackers described in this paper are capable of handling variations in position, scale, and orientation through the use of multiple pools of trackers. This approach avoids the N² operations per event associated with conventional kernel-based convolution operations with N × N kernels. The tracking performance was evaluated experimentally…
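
The claimed saving over per-event N × N convolution can be illustrated as follows: each tracker evaluates its kernel only at the incoming event's location, one evaluation per tracker per event. This is a hypothetical sketch for an isotropic Gaussian kernel (the paper also uses Gabor and user-defined kernels); `sigma`, `rate` and the winner-take-all threshold are placeholder parameters.

```python
import numpy as np

class GaussianEventTracker:
    # A feature tracker that scores each event with a Gaussian kernel
    # evaluated at the event position only, instead of convolving an
    # N x N kernel over an image per event.
    def __init__(self, x, y, sigma=5.0, rate=0.05):
        self.mu = np.array([x, y], dtype=float)  # tracked feature position
        self.sigma = sigma
        self.rate = rate

    def score(self, ex, ey):
        d2 = (ex - self.mu[0])**2 + (ey - self.mu[1])**2
        return np.exp(-d2 / (2.0 * self.sigma**2))

def process_event(trackers, ex, ey, threshold=0.1):
    # Winner-take-all: the best-scoring tracker drifts toward the event.
    best = max(trackers, key=lambda tr: tr.score(ex, ey))
    if best.score(ex, ey) > threshold:
        best.mu += best.rate * (np.array([ex, ey], dtype=float) - best.mu)
    return best
```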

Asynchronous event-based corner detection and matching

Neural Networks, 2015

This paper introduces an event-based, luminance-free method to detect and match corner events from the output of asynchronous event-based neuromorphic retinas. The method relies on the space-time properties of moving edges. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each asynchronously generating "spiking" events that encode relative changes in the pixel's illumination at high temporal resolution. Corner events are defined as the spatiotemporal locations where the aperture problem can be solved using the intersection of several geometric constraints in the events' spatiotemporal space. A regularization process provides the required constraints, i.e., the motion attributes of the edges with respect to their spatiotemporal locations, using local geometric properties of visual events. Experimental results are presented on several real scenes, showing the stability and robustness of the detection and matching.
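
One way to read the constraint-intersection idea above: each moving edge passing through a spatiotemporal location contributes a normal-flow constraint, and a corner is a location where enough differently oriented constraints intersect for the full 2-D velocity to be recovered. The least-squares formulation and conditioning test below are an interpretation for illustration, not the paper's exact regularization:

```python
import numpy as np

def full_velocity_from_constraints(normals, speeds, cond_max=50.0):
    # Each moving edge gives a normal-flow constraint n_i . v = s_i.
    # A corner is a location where several such constraints intersect,
    # i.e. the 2x2 least-squares system is well conditioned enough to
    # resolve the aperture problem. cond_max is an illustrative threshold.
    A = np.asarray(normals, dtype=float)   # (k, 2) unit edge normals
    b = np.asarray(speeds, dtype=float)    # (k,) speeds along the normals
    AtA = A.T @ A
    if np.linalg.cond(AtA) > cond_max:
        return None                        # aperture problem unresolved: not a corner
    return np.linalg.solve(AtA, A.T @ b)   # full 2-D velocity at the corner event
```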

Spatiotemporal features for asynchronous event-based data

Frontiers in Neuroscience, 2015

Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding exact times of input signal changes, leading to a very precise temporal resolution. Approaches for higher-level computer vision often rely on the reliable detection of features in visual frames, but similar definitions of features for the novel dynamic and event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced based on a set of predictive recurrent reservoir networks, competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time.

Using structures to synchronize cameras of robots swarms

International Conference on Intelligent Robots and Systems (IROS), 2008

The synchronization of image sequences acquired by robot swarms is an essential task for localization operations. We address this problem by considering the swarms as dynamic camera networks in which each robot is reduced to a mobile camera. Synchronicity is a strong restriction that can be difficult to obtain in many wide-ranging applications. This paper studies the…

Auto-organized visual perception using distributed camera network

Robotics and Autonomous Systems, 2009

Camera networks are complex vision systems that are difficult to control as the number of sensors grows. With classic approaches, each camera has to be calibrated and synchronized individually. These tasks are often troublesome because of spatial constraints, and mostly because of the amount of information that needs to be processed. Cameras generally observe overlapping areas, leading to redundant…

Asynchronous Event-Based Binocular Stereo Matching

IEEE Transactions on Neural Networks and Learning Systems, 2012

We present a novel event-based stereo matching algorithm that exploits the asynchronous visual events from a pair of silicon retinas. Unlike conventional frame-based cameras, recent artificial retinas transmit their outputs as a continuous stream of asynchronous temporal events, in a manner similar to the output cells of the biological retina. Our algorithm uses the timing information carried by this representation to address the stereo-matching problem on moving objects. Using the high temporal resolution of the data stream acquired from the dynamic vision sensor, we show that matching on the timing of the visual events provides a new solution to the real-time computation of 3-D objects when combined with geometric constraints using the distance to the epipolar lines. The proposed algorithm is able to filter out incorrect matches and to accurately reconstruct the depth of moving objects despite the low spatial resolution of the sensor. This brief sets up the principles for further event-based vision processing and demonstrates the importance of dynamic information and spike timing in processing asynchronous streams of visual events.
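
A minimal sketch of timing-plus-epipolar matching in the spirit of this abstract (the cost weighting, thresholds and fundamental matrix `F` are placeholder assumptions): a left event is paired with the right event that is nearly coincident in time and closest, in a combined sense, to its epipolar line.

```python
import numpy as np

def epipolar_distance(F, xl, xr):
    # Distance of a right-image point xr (homogeneous) to the epipolar
    # line l = F @ xl induced by the left-image point xl.
    l = F @ xl
    return abs(l @ xr) / np.hypot(l[0], l[1])

def match_event(left_event, right_events, F, dt_max=1e3, d_max=2.0, w=0.5):
    # left_event / right_events: (x, y, t) tuples. Keep only right events
    # that are nearly coincident in time and close to the epipolar line,
    # then pick the lowest combined cost. dt_max (timestamp units), d_max
    # (pixels) and the weight w are illustrative parameters.
    xl = np.array([left_event[0], left_event[1], 1.0])
    best, best_cost = None, np.inf
    for (x, y, t) in right_events:
        dt = abs(t - left_event[2])
        if dt > dt_max:
            continue
        d = epipolar_distance(F, xl, np.array([x, y, 1.0]))
        if d > d_max:
            continue
        cost = w * (dt / dt_max) + (1.0 - w) * (d / d_max)
        if cost < best_cost:
            best, best_cost = (x, y, t), cost
    return best
```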

Geometric Construction of the Caustic Surface of Catadioptric Non-Central Sensors

Imaging Beyond the Pinhole Camera, 2006

Most catadioptric cameras rely on the single-viewpoint constraint, which is hardly ever fulfilled. There exist many works on non-single-viewpoint catadioptric sensors satisfying specific resolutions. In such configurations, the computation of the caustic curve becomes essential. Existing solutions are unfortunately too specific to a class of curves and require a heavy computation load. This paper presents a flexible…

An Efficient Dynamic Multi-Angular Feature Points Matcher for Catadioptric Views

2003 Conference on Computer Vision and Pattern Recognition Workshop, 2003

A new efficient matching algorithm dedicated to catadioptric sensors is proposed in this paper. The presented approach is designed to overcome the varying resolution induced by the mirror. The aim of this work is to provide a matcher that gives reliable results, similar to those obtained by classical operators on planar-projection images. The matching is based on the extraction of dynamically sized windows, computed from the viewing angular aperture of the neighborhood around the points of interest. An angular scaling of this aperture provides a number of different neighborhood resolutions around the same point. A combinatorial cost method is introduced in order to determine the best match between the different angular neighborhood patches of two interest points. Results are presented on sparse matched corner points, which can be used to estimate the epipolar geometry of the scene in order to provide a dense 3D map of the observed environment.
