3D reconstruction through segmentation of multi-view image sequences
Enforcing Consistency of 3D Scenes with Multiple Objects Using Shape-from-Contours
Lecture Notes in Computer Science, 2013
In this paper we present a new approach for modelling scenes with multiple 3D objects from images taken from various viewpoints. Such images are segmented using either supervised or unsupervised algorithms. We consider mean-shift and support vector machines for image segmentation, using colour and texture as features. Back-projections of the segmented contours are used to enforce their consistency with initial estimates of the 3D scene. A study for detecting merged objects in 3D scenes is provided as well.
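A minimal sketch of the unsupervised segmentation step described above, clustering per-pixel colour plus a simple texture cue with mean-shift. It uses scikit-learn and SciPy as assumed dependencies; the function name, the 5x5 local-standard-deviation texture feature, and the bandwidth settings are illustrative choices, not the authors' exact configuration, and the SVM-based supervised variant is not shown.

```python
# Illustrative mean-shift segmentation on (R, G, B, local-stddev) features.
import numpy as np
from scipy.ndimage import generic_filter
from sklearn.cluster import MeanShift, estimate_bandwidth

def segment_mean_shift(image_rgb):
    """Return an integer label image obtained by clustering pixel features."""
    img = image_rgb.astype(np.float64) / 255.0
    gray = img.mean(axis=2)
    # Crude texture cue: local standard deviation in a 5x5 window.
    texture = generic_filter(gray, np.std, size=5)
    features = np.concatenate([img.reshape(-1, 3), texture.reshape(-1, 1)], axis=1)
    bandwidth = estimate_bandwidth(features, quantile=0.1, n_samples=2000)
    labels = MeanShift(bandwidth=bandwidth, bin_seeding=True).fit_predict(features)
    return labels.reshape(gray.shape)
```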
Globally Optimal 3D Image Reconstruction and Segmentation Via Energy Minimisation Techniques
Lecture Notes in Computer Science, 2005
This paper provides an overview of a number of techniques developed within our group to perform 3D reconstruction and image segmentation based on the application of energy minimisation concepts. We begin with classical snake techniques and show how similar energy minimisation concepts can be extended to derive globally optimal segmentation methods. Then we discuss more recent work based on geodesic active contours that can lead to globally optimal segmentations and reconstructions in 2D. Finally, we extend the work to 3D by introducing continuous flow globally minimal surfaces. Several applications are discussed to show the wide applicability and suitability of these techniques to several difficult image analysis problems.
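For a concrete feel of the geodesic-active-contour ingredient mentioned here, the sketch below uses scikit-image's morphological GAC implementation as a generic stand-in; it is a local contour evolution, not the globally optimal formulation the paper develops, and parameter names (e.g. num_iter) vary across scikit-image versions.

```python
# Generic geodesic-active-contour evolution towards strong image gradients.
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient, disk_level_set)

def gac_segment(gray_image, iterations=200):
    """Evolve a shrinking contour until it locks onto high-gradient boundaries."""
    # Edge-stopping function: small values where gradients are strong.
    gimage = inverse_gaussian_gradient(gray_image.astype(np.float64))
    init = disk_level_set(gray_image.shape)   # circular initial contour
    return morphological_geodesic_active_contour(
        gimage, num_iter=iterations, init_level_set=init,
        smoothing=1, balloon=-1)              # negative balloon force shrinks the contour
```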
Simultaneous Segmentation and 3D Reconstruction of Monocular Image Sequences
2007 IEEE 11th International Conference on Computer Vision, 2007
When trying to extract 3D scene information and camera motion from an image sequence alone, it is often necessary to cope with independently moving objects. Recent research has unveiled some of the mathematical foundations of the problem, but a general and practical algorithm, which can handle long, realistic sequences, is still missing. In this paper, we identify the necessary parts of such an algorithm, highlight both unexplored theoretical issues and practical challenges, and propose solutions. Theoretical issues include proper handling of the different situations in which the number of independent motions changes: objects can enter the scene, objects previously moving together can split and follow independent trajectories, or independently moving objects can merge into one common motion. We derive model scoring criteria to handle these changes in the number of segments. A further theoretical issue is the resolution of the relative scale ambiguity that such changes introduce. Practical issues include robust 3D reconstruction of freely moving foreground objects, which often have few and short feature tracks. The proposed framework simultaneously tracks features, groups them into rigidly moving segments, and reconstructs all segments in 3D. Such an online approach, as opposed to batch processing techniques that first track features and then perform segmentation and reconstruction, is vital in order to handle small foreground objects.
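One building block of such an online pipeline is per-frame feature tracking. The sketch below shows this step only, using OpenCV's KLT tracker and Shi-Tomasi corner detection as assumed tools; the grouping into rigidly moving segments, model scoring, and per-segment reconstruction described in the abstract are not reproduced.

```python
# Per-frame feature tracking, the front end of an online segmentation/SfM pipeline.
import cv2
import numpy as np

def detect_features(gray):
    """Seed new tracks with Shi-Tomasi corners."""
    return cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                   qualityLevel=0.01, minDistance=7)

def track_features(prev_gray, next_gray, prev_pts):
    """Propagate feature points from one frame to the next, dropping lost tracks."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, next_gray, prev_pts, None, winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return prev_pts[good], next_pts[good]
```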
A geometric segmentation approach for the 3D reconstruction of dynamic scenes in 2D video sequences
In this paper, an algorithm is proposed to solve the multi-frame structure from motion (MFSfM) problem for monocular video sequences with multiple rigid moving objects. The algorithm uses the epipolar criterion to segment feature trajectories belonging to the background scene and each of the independently moving objects. As a large baseline length is essential for the reliability of the epipolar geometry, the geometric robust information criterion is employed for key-frame selection within the sequences. Once the features are segmented, corresponding objects are reconstructed individually using a sequential algorithm that is capable of prioritizing the frame pairs with respect to their reliability and information content. The experimental results on synthetic and real data demonstrate that our approach has the potential to effectively deal with the multi-body MFSfM problem.
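The epipolar criterion mentioned above can be illustrated with a two-frame sketch: fit a fundamental matrix to all correspondences between two key-frames and flag points with large epipolar residuals as belonging to independently moving objects. The threshold below is illustrative, and the GRIC-based key-frame selection and sequential reconstruction are not shown.

```python
# Split feature correspondences into background vs. independently moving points
# using robust fundamental-matrix estimation between two key-frames.
import cv2
import numpy as np

def split_by_epipolar_error(pts1, pts2, threshold=2.0):
    """pts1, pts2: (N, 2) float arrays; returns boolean masks (background, moving)."""
    _, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                        ransacReprojThreshold=threshold)
    background = inliers.ravel().astype(bool)   # consistent with the dominant epipolar geometry
    return background, ~background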
Correcting 3D scenes estimated from sets of multi-view images using shape-from-contours
2014 IEEE International Conference on Image Processing (ICIP), 2014
This paper proposes enforcing consistency with segmented contours when modelling scenes containing multiple objects from multi-view images. A rough initialization of the 3D scene is assumed to be available, and in the case of multiple objects inconsistencies are expected. In the proposed shape-from-contours approach, images are segmented and back-projections of the segmented contours are used to enforce their consistency with the 3D objects in the scene. We provide a study of the physical requirements for detecting occlusions when reconstructing 3D scenes with multiple objects.
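A minimal sketch of one way such a consistency check can be expressed: project candidate 3D points into each calibrated view and keep only those that fall inside every segmented silhouette mask. The pinhole projection, variable names, and hard inside/outside test are assumptions for illustration, not the authors' exact formulation.

```python
# Keep only 3D points whose projections land inside every segmented silhouette.
import numpy as np

def consistent_with_silhouettes(points_3d, projections, masks):
    """points_3d: (N, 3); projections: list of 3x4 camera matrices; masks: binary images."""
    homog = np.hstack([points_3d, np.ones((len(points_3d), 1))])
    keep = np.ones(len(points_3d), dtype=bool)
    for P, mask in zip(projections, masks):
        uvw = homog @ P.T
        u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        keep &= inside                                    # must project into the image
        keep[inside] &= mask[v[inside], u[inside]] > 0    # and onto the silhouette
    return keep
```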
Geodesic Active Contours with Combined Shape and Appearance Priors
Lecture Notes in Computer Science, 2008
We present a new object segmentation method based on geodesic active contours with combined shape and appearance priors. It is known that using shape priors can significantly improve object segmentation in cluttered scenes and under occlusion. Within this context, we add a new prior based on the appearance (i.e., an image) of the object to be segmented. This method enables the appearance pattern to be incorporated within the geodesic active contour framework with shape priors, seeking the object whose boundaries lie on high image gradients and that best fits the shape and appearance of a reference model. The output contour results from minimizing an energy functional built from these three main terms. We show that appearance is a powerful term that distinguishes between objects with similar shapes and is capable of successfully segmenting an object in a very cluttered environment where standard active contours (even those with shape priors) tend to fail.
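Schematically, an energy built from these three terms could be written as below; the notation (the edge-stopping function g, the prior terms, and the trade-off weights) is assumed for illustration and is not taken from the paper.

```latex
% Geodesic edge term plus weighted shape- and appearance-prior terms
% over a closed contour C; the lambdas balance the three contributions.
E(C) = \int_0^1 g\bigl(\lvert\nabla I(C(s))\rvert\bigr)\,\lvert C'(s)\rvert\,ds
     \;+\; \lambda_{\mathrm{shape}}\,E_{\mathrm{shape}}(C)
     \;+\; \lambda_{\mathrm{app}}\,E_{\mathrm{app}}(C)
```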
Automatic 3D object segmentation in multiple views using volumetric graph-cuts
Image and Vision Computing, 2010
We propose an algorithm for automatically obtaining a segmentation of a rigid object in a sequence of images that are calibrated for camera pose and intrinsic parameters. Until recently, the best segmentation results have been obtained by interactive methods that require manual labelling of image regions. Our method requires no user input but instead relies on the camera fixating on the object of interest during the sequence. We begin by learning a model of the object's colour, from the image pixels around the fixation points. We then extract image edges and combine these with the object colour information in a volumetric binary MRF model. The globally optimal segmentation of 3D space is obtained by a graph-cut optimisation. From this segmentation an improved colour model is extracted and the whole process is iterated until convergence.
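The volumetric binary MRF step described above can be sketched with the PyMaxflow library (an assumed dependency); here constant pairwise weights and precomputed unary costs from a colour model stand in for the paper's edge-modulated terms, and the iteration that refines the colour model is omitted.

```python
# Globally optimal binary labelling of a voxel grid via graph-cut (PyMaxflow).
import numpy as np
import maxflow

def segment_voxels(fg_cost, bg_cost, smoothness=1.0):
    """fg_cost, bg_cost: (X, Y, Z) arrays of unary costs; returns boolean voxel labels."""
    g = maxflow.Graph[float]()
    node_ids = g.add_grid_nodes(fg_cost.shape)
    g.add_grid_edges(node_ids, smoothness)          # 6-connected pairwise smoothness terms
    g.add_grid_tedges(node_ids, fg_cost, bg_cost)   # data terms from the colour model
    g.maxflow()
    # Boolean segment labels; which value means "object" follows from the
    # cost convention chosen for fg_cost / bg_cost.
    return g.get_grid_segments(node_ids)
```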
Active contour based segmentation of 3D surfaces
Computer Vision–ECCV 2008, 2008
Algorithms incorporating 3D information have proven to be superior to purely 2D approaches in many areas of computer vision including face biometrics and recognition. Still, the range of methods for feature extraction from 3D surfaces is limited. Very popular in 2D image analysis, active contours have been generalized to curved surfaces only recently. Current implementations require a global surface parametrisation. We show that a balloon force cannot be included properly in existing methods, making them unsuitable for applications with noisy data. To overcome this drawback we propose a new algorithm for evolving geodesic active contours on implicit surfaces. We also introduce a new narrowband scheme which results in linear computational complexity. The performance of our model is illustrated on various real and synthetic 3D surfaces.
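The narrow-band idea referred to above can be illustrated with a generic level-set update restricted to a band around the current front, which is what keeps the per-step cost proportional to the band size rather than the full grid. The force term and band width below are placeholders, not the paper's geodesic or balloon forces, and reinitialisation of the level-set function is omitted.

```python
# One narrow-band level-set step: update phi only near its zero level set.
import numpy as np
from scipy.ndimage import distance_transform_edt

def narrowband_step(phi, force, band_width=3.0, dt=0.5):
    """phi: signed level-set function; force: speed field of the same shape."""
    dist_inside = distance_transform_edt(phi > 0)
    dist_outside = distance_transform_edt(phi <= 0)
    band = np.minimum(dist_inside, dist_outside) <= band_width
    grad = np.gradient(phi)
    grad_mag = np.sqrt(sum(g ** 2 for g in grad))
    phi_new = phi.copy()
    phi_new[band] += dt * force[band] * grad_mag[band]   # dphi/dt = F * |grad phi|
    return phi_new
```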
A semi-supervised approach to space carving
2010
In this paper, we present a semi-supervised approach to space carving by casting the recovery of volumetric data from multiple views into an evidence-combining setting. The method presented here is statistical in nature and employs, as a starting point, a manually obtained contour. By making use of this user-provided information, we obtain probabilistic silhouettes of all successive images. These silhouettes provide a prior distribution that is then used to compute the probability of a voxel being carved. This evidence-combining setting allows us to make use of background pixel information. As a result, our method combines the advantages of shape-from-silhouette techniques and statistical space carving approaches. For the carving process, we propose a new voxelated space. This space is projective and provides a color mapping for the object voxels that is consistent, in terms of pixel coverage, with their projections onto the image planes of the imagery under consideration. We provide quantitative results and illustrate the utility of the method on real-world imagery.
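A minimal sketch of silhouette-based carving in this spirit: project each voxel centre into every view, look up the silhouette probability, and carve voxels whose weakest supporting evidence falls below a threshold. A regular voxel grid and a simple minimum-over-views rule are used here; the projective voxel space and the evidence-combining rule of the paper are not reproduced.

```python
# Carve voxels unsupported by the probabilistic silhouettes of the input views.
import numpy as np

def carve(voxel_centres, projections, silhouette_probs, keep_threshold=0.5):
    """voxel_centres: (N, 3); projections: list of 3x4 camera matrices;
    silhouette_probs: per-view float images in [0, 1]. Returns a keep mask."""
    homog = np.hstack([voxel_centres, np.ones((len(voxel_centres), 1))])
    evidence = np.ones(len(voxel_centres))
    for P, prob in zip(projections, silhouette_probs):
        uvw = homog @ P.T
        u = np.clip((uvw[:, 0] / uvw[:, 2]).round().astype(int), 0, prob.shape[1] - 1)
        v = np.clip((uvw[:, 1] / uvw[:, 2]).round().astype(int), 0, prob.shape[0] - 1)
        evidence = np.minimum(evidence, prob[v, u])   # weakest silhouette support so far
    return evidence >= keep_threshold                 # True = voxel survives carving
```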