Luc Robert | Autodesk - Academia.edu
Papers by Luc Robert
Proceedings of the …, Jan 1, 2003
Abstract: Many classical image processing tasks can be realized as evaluations of a boolean function over subsets of an image. For instance, the simplicity test used in 3D thinning requires examining the 26 neighbors of each voxel and computing a single boolean function of these inputs. In this article, we show how Binary Decision Diagrams can be used to produce …
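The abstract above describes evaluating a fixed boolean function over a voxel neighborhood via a Binary Decision Diagram. A minimal sketch (not the authors' implementation) of the idea: build a reduced ordered BDD by memoized Shannon expansion, then evaluate it by walking the diagram. A 3-input majority function stands in for the 26-input simplicity test.

```python
def build_bdd(f, n):
    """Build a reduced decision diagram for f: tuple of n bits -> bool.
    Internal nodes are (var, low, high); terminals are True/False."""
    unique = {}  # hash-consing table so identical subgraphs are shared

    def mk(var, lo, hi):
        if lo == hi:  # redundant test: skip this node entirely
            return lo
        key = (var, id(lo), id(hi))
        return unique.setdefault(key, (var, lo, hi))

    def rec(var, bits):
        if var == n:
            return f(tuple(bits))
        bits.append(0)
        lo = rec(var + 1, bits)
        bits[-1] = 1
        hi = rec(var + 1, bits)
        bits.pop()
        return mk(var, lo, hi)

    return rec(0, [])

def eval_bdd(node, bits):
    """Walk the diagram: at each node, branch on the tested variable."""
    while not isinstance(node, bool):
        var, lo, hi = node
        node = hi if bits[var] else lo
    return node

# Toy 3-input majority function standing in for a 26-input simplicity test.
maj = build_bdd(lambda b: sum(b) >= 2, 3)
print(eval_bdd(maj, (1, 1, 0)))  # True
```

Once built, evaluation touches at most one node per variable, which is the payoff the abstract alludes to for per-voxel tests.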
Computer Vision and Pattern Recognition, 1997
Proceedings. 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1991
An edge-based trinocular stereovision algorithm is presented. The primitives it works on are cubic B-spline approximations of the 2-D edges. This allows one to deal conveniently with curvature and to extend previous stereo algorithms to some nonpolyhedral scenes. To build a matching primitive, the principle of the algorithm is, first, to find a triplet of corresponding points on three …
Proceedings of 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '93), 1993
This paper describes the development and implementation of a reactive visual module used on an autonomous mobile robot to automatically correct its trajectory. We use a multisensorial mechanism based on inertial and visual cues. We report here only on the implementation and experimentation of this module; the main theoretical aspects have been developed elsewhere.
Lecture Notes in Computer Science, 1996
… A classical constraint is the smoothness assumption on the resulting depth map. That is the case for the well-known Tikhonov regularization term [28]: S(Z) = ∬ ‖∇Z‖² dm. … In the trinocular case, depth is recovered everywhere on the pyramid. …
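The Tikhonov smoothness term penalizes the squared gradient of the depth map. A minimal discrete sketch (illustrative arrays, not from the paper), approximating the integral with forward differences:

```python
import numpy as np

def tikhonov_smoothness(Z):
    """Discrete S(Z): sum of squared forward differences of the depth map."""
    dzdx = np.diff(Z, axis=1)  # horizontal forward differences
    dzdy = np.diff(Z, axis=0)  # vertical forward differences
    return float(np.sum(dzdx**2) + np.sum(dzdy**2))

flat = np.ones((4, 4))                   # constant depth: perfectly smooth
ramp = np.tile(np.arange(4.0), (4, 1))   # unit slope along x
print(tikhonov_smoothness(flat))  # 0.0
print(tikhonov_smoothness(ramp))  # 12.0  (4 rows x 3 unit steps each)
```

Minimizing such a term biases the recovered depth toward smooth surfaces, which is exactly the constraint the excerpt names.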
The Second IEEE and ACM International Symposium on Mixed and Augmented Reality, 2003. Proceedings., 2003
Proceedings of IEEE International Conference on Computer Vision, 1995
In this paper we present a vision system for autonomous navigation based on stereo perception without 3-D reconstruction. This approach uses weakly calibrated stereo images, i.e., images for which only the epipolar geometry is known. The vision system first rectifies the images, matches selected points between the two images, and then computes the relative elevation of the points with respect to a reference plane, as well as the images of their projections on this plane.
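Weak calibration means only the epipolar geometry is available, encoded in the fundamental matrix F: corresponding points x and x' satisfy x'ᵀ F x = 0. A minimal sketch of checking that constraint; the F below is the standard one for a rectified (pure horizontal-translation) pair, where the constraint reduces to equal rows, and the pixel coordinates are illustrative.

```python
import numpy as np

# Fundamental matrix of a rectified stereo pair: epipolar lines are image rows.
F = np.array([[0.0, 0.0,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0,  0.0]])

def epipolar_residual(F, x_left, x_right):
    """x'^T F x for pixels written as homogeneous vectors (x, y, 1)."""
    xl = np.array([*x_left, 1.0])
    xr = np.array([*x_right, 1.0])
    return float(xr @ F @ xl)

print(epipolar_residual(F, (120.0, 45.0), (97.0, 45.0)))  # 0.0: same row, consistent match
print(epipolar_residual(F, (120.0, 45.0), (97.0, 47.0)))  # -2.0: off the epipolar line
```

A nonzero residual flags a candidate match that violates the known epipolar geometry, which is the only geometric filter available in the weakly calibrated setting.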
We usually think of the physical space as being embedded in a three-dimensional Euclidean space where measurements of lengths and angles do make sense. It turns out that for artificial systems, such as robots, this is not a mandatory viewpoint and that it is sometimes sufficient to think of the physical space as being embedded in an affine or …
Automatic Extraction of Man-Made Objects from Aerial and Space Images, 1995
In this paper, we address the problem of recovering the Euclidean geometry of a scene from a sequence of images without any prior knowledge either about the parameters of the cameras or about the motion of the camera(s). We do not require any knowledge of the absolute coordinates of some control points in the scene to achieve this goal. Using various computer vision tools, we establish correspondences between images and recover the epipolar geometry of the set of images, from which we show how to compute the complete set of perspective projection matrices for each camera position. These being known, we proceed to reconstruct the scene. This reconstruction is defined up to an unknown projective transformation (i.e. is parameterized with 15 arbitrary parameters). Next we show how to go from this reconstruction to a more constrained class of reconstructions, defined up to an unknown affine transformation (i.e. parameterized with 12 arbitrary parameters), by exploiting known geometric relations between features in the scene such as parallelism. Finally, we show how to go from this reconstruction to another class, defined up to an unknown similitude (i.e. parameterized with 7 arbitrary parameters). This means that in a Euclidean frame attached to the scene or to one of the cameras, the reconstruction depends only upon one parameter, the global scale. This parameter is easily fixed as soon as one absolute length measurement is known. We see this vision system as a building block, a vision server, of a CAD system that is used by a human to model a scene for such applications as simulation, virtual or augmented reality. We believe that such a system can save the human observer a lot of tedious work, as well as play a leading role in keeping the geometric database accurate and coherent.
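The parameter counts quoted above (15, 12, 7) follow from simple degree-of-freedom arithmetic, sketched here as a quick check: a 3-D projective transform is a 4x4 matrix defined up to scale, an affine transform has its last row pinned, and a similitude is rotation plus translation plus one global scale.

```python
# Degrees of freedom of the three transformation groups in the abstract.
projective_dof = 4 * 4 - 1   # 4x4 homogeneous matrix, overall scale unobservable
affine_dof     = 3 * 4       # [A | t] block; last row fixed to (0, 0, 0, 1)
similitude_dof = 3 + 3 + 1   # rotation (3) + translation (3) + global scale (1)
print(projective_dof, affine_dof, similitude_dof)  # 15 12 7
```

Each calibration step in the paper removes the difference between consecutive counts, until only the global scale remains.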
Lecture Notes in Computer Science, 1994
In this paper, we investigate algorithms for evaluating surface orientation from pairs of stereo images using limited calibration information, and without reconstructing an explicit metric representation of the observed scene.
Lecture Notes in Computer Science, 1990
This article shows a way of using a stereo vision system as a logical sensor to perform mobile robot navigation tasks such as obstacle avoidance. We describe our system, from which the implementation of a task described by an automaton can be done very easily. Then we show an example of a navigation task.
ACM SIGGRAPH 98 Conference abstracts and applications on - SIGGRAPH '98, 1998
Eurographics, 1997
The advent of computer augmented reality (CAR), in which computer generated objects mix with real video images, has resulted in many interesting new application domains. Providing common illumination between the real and synthetic objects can be very beneficial, since the additional visual cues (shadows, interreflections etc.) are critical to seamless real-synthetic world integration. Building on recent advances in computer graphics and computer vision, we present a new framework for resolving this problem. We address three specific aspects of the common illumination problem for CAR: (a) simplification of camera calibration and modeling of the real scene; (b) efficient update of illumination for moving CG objects; and (c) efficient rendering of the merged world. A first working system is presented for a limited sub-problem: a static real scene and camera with moving CG objects. Novel advances in computer vision are used for camera calibration and user-friendly modeling of the real scene, a recent interactive radiosity update algorithm is adapted to provide fast illumination update, and finally textured polygons are used for display. This approach allows interactive update rates on mid-range graphics workstations. Our new framework will hopefully lead to CAR systems with interactive common illumination without restrictions on the movement of real or synthetic objects, lights and cameras.
Proceedings of 3rd IEEE International Conference on Image Processing, 1996
International Journal of Computer Vision, 1996
International Journal of Computer Vision, 1996
This paper discusses the problem of predicting image features in an image from image features in two other images and the epipolar geometry between the three images. We adopt the most general camera model of perspective projection and show that a point can be predicted in the third image as a bilinear function of its images in the first two cameras, that the tangents to three corresponding curves are related by a trilinear function, and that the curvature of a curve in the third image is a linear function of the curvatures at the corresponding points in the other two images. Our analysis relies heavily on the use of the fundamental matrix, which has been recently introduced (Faugeras et al., 1992), and on the properties of a special plane which we call the trinocular plane. Though the trinocular geometry of points and lines has been very recently addressed, our use of the differential properties of curves for prediction is unique.
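One standard route to the point prediction mentioned above uses two fundamental matrices: the predicted point in image 3 lies on the epipolar line of x1 (via F31) and on that of x2 (via F32), so it is the intersection of the two lines, computed as their cross product. A minimal sketch; the matrices and points below are synthetic, not from the paper.

```python
import numpy as np

def transfer_point(F31, F32, x1, x2):
    """Predict the homogeneous point in image 3 from matches in images 1 and 2."""
    l1 = F31 @ x1          # epipolar line of x1 in image 3
    l2 = F32 @ x2          # epipolar line of x2 in image 3
    x3 = np.cross(l1, l2)  # lines intersect at their cross product
    return x3 / x3[2]      # normalize (assumes the two lines are not parallel)

# Synthetic fundamental matrices whose epipolar lines in image 3 are
# "same row as in image 1" and "same column as in image 2".
F31 = np.array([[0.0, 0.0,  0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
F32 = np.array([[0.0, 0.0, -1.0], [0.0, 0.0,  0.0], [1.0, 0.0, 0.0]])
x1 = np.array([10.0, 30.0, 1.0])
x2 = np.array([50.0, 80.0, 1.0])
print(transfer_point(F31, F32, x1, x2))  # [50. 30.  1.]
```

The construction degenerates when the two epipolar lines coincide (points on the trinocular plane), which is one reason the paper studies that plane's properties.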
Image and Vision Computing, 1995
In this paper we show some very recent results using the "weak-calibration" idea. Assuming that we only know the epipolar geometry of a pair of stereo images, encoded in the so-called fundamental matrix, we show that some useful three-dimensional information such as relative positions of points and planes and 3D convex hulls can be computed. We introduce the notion of visibility, which allows deriving those properties. Results on both synthetic and real data are shown.
IEEE Transactions on Visualization and Computer Graphics, 2000
Computer augmented reality (CAR) is a rapidly emerging field which enables users to mix real and virtual worlds. Our goal is to provide interactive tools to perform common illumination, i.e., light interactions between real and virtual objects, including shadows and relighting (real and virtual light source modification). In particular, we concentrate on virtually modifying real light source intensities and inserting virtual lights and objects into a real scene; such changes can be very useful for virtual lighting design and prototyping. To achieve this, we present a three-step method. We first reconstruct a simplified representation of real scene geometry using semi-automatic vision-based techniques. With the simplified geometry, and by adapting recent hierarchical radiosity algorithms, we construct an approximation of real scene light exchanges. We next perform a preprocessing step, based on the radiosity system, to create unoccluded illumination textures. These replace the original scene textures which contained real light effects such as shadows from real lights. This texture is then modulated by a ratio of the radiosity (which can be changed) over a display factor which corresponds to the radiosity for which occlusion has been ignored. Since our goal is to achieve a convincing relighting effect, rather than an accurate solution, we present a heuristic correction process which results in visually plausible renderings. Finally, we perform an interactive process to compute new illumination with modified real and virtual light intensities. Our results show that we are able to virtually relight real scenes interactively, including modifications and additions of virtual light sources and objects.
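The modulation step described above can be sketched in a few lines: the unoccluded illumination texture is scaled by the ratio of the (possibly modified) radiosity to the display factor computed with occlusion ignored. All values below are illustrative, and the clamp to [0, 1] is an assumption for display, not a detail taken from the paper.

```python
import numpy as np

def relight(texture, radiosity, display_factor):
    """Modulate an unoccluded texture by radiosity / display_factor, clamped for display."""
    ratio = radiosity / display_factor
    return np.clip(texture * ratio, 0.0, 1.0)

# Illustrative 2x2 unoccluded illumination texture; dimming a light so the
# new radiosity is 0.6 against a display factor of 0.8 gives ratio 0.75.
texture = np.array([[0.5, 0.8],
                    [0.2, 1.0]])
out = relight(texture, radiosity=0.6, display_factor=0.8)
print(out)  # [[0.375 0.6  ] [0.15  0.75 ]]
```

Because the texture itself is occlusion-free, changing the radiosity in the ratio is what re-introduces (or removes) shadows when a light is modified.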