Murali Subbarao - Academia.edu

Papers by Murali Subbarao

A range image refinement technique for multi-view 3D model reconstruction

Fourth International Conference on 3-D Digital Imaging and Modeling, 2003. 3DIM 2003. Proceedings.

This paper presents a range image refinement technique for generating accurate 3D computer models of real objects. Range images obtained from a stereo-vision system typically exhibit geometric distortions on reconstructed 3D surfaces due to inherent stereo matching problems such as occlusions or mismatches. This paper introduces a range image refinement technique that corrects such erroneous ranges by employing the epipolar geometry of a multi-view modeling system and the visual hull of the object. After registering multiple range images into a common coordinate system, we first determine whether a 3D point in a range image is erroneous by measuring the registration of the point with its correspondences in other range images. The correspondences are determined on 3D contours which are inverse projections of epipolar lines in other 2D silhouette images. If the point is erroneous, its range is then refined onto the object's surface. We employ two techniques to search for the correspondences quickly. When there is no correspondence for an erroneous point, we refine the point onto the visual hull of the object. We show that refined range images yield better geometric structures in reconstructed 3D models.
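The visual-hull fallback described above has a compact core: a 3D point belongs to the visual hull exactly when its projection lands inside the object silhouette in every view. The sketch below is a minimal illustration of that test only, not the paper's implementation; the camera-matrix and boolean-silhouette interface, and all names, are our assumptions.

```python
import numpy as np

def in_visual_hull(point, cameras, silhouettes):
    """True iff `point` projects inside the silhouette in every view.

    cameras     -- list of 3x4 projection matrices (hypothetical interface)
    silhouettes -- list of boolean images, True on the object
    """
    for P, sil in zip(cameras, silhouettes):
        x, y, w = P @ np.append(point, 1.0)   # project to homogeneous 2D
        u, v = int(round(x / w)), int(round(y / w))
        inside = 0 <= v < sil.shape[0] and 0 <= u < sil.shape[1]
        if not inside or not sil[v, u]:
            return False                      # carved away by this view
    return True
```

An erroneous range point with no epipolar correspondence would then be moved to a nearby surface position for which this membership test holds.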

Depth from defocus by changing camera aperture: a spatial domain approach

Proceedings of IEEE Conference on Computer Vision and Pattern Recognition

Parallel Depth Recovery By Changing Camera Parameters

[1988 Proceedings] Second International Conference on Computer Vision

A new method is described for recovering the distance of objects in a scene from images formed by lenses. The recovery is based on measuring the change in the scene's image due to a known change in the three intrinsic camera parameters: (i) the distance between the lens and the image detector, (ii) the focal length of the lens, and (iii) the diameter of the lens aperture. The method is parallel, involving simple local computations. In comparison with stereo vision and structure-from-motion methods, the correspondence problem does not arise. This method for depth-map recovery may also be used for (i) obtaining focused images (i.e., images having large depth of field) from two images having finite depth of field, and (ii) rapid autofocusing of computer-controlled video cameras.
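The geometry underlying such depth-from-defocus methods follows directly from the thin-lens law. The sketch below is ours, not the paper's algorithm: it assumes ideal paraxial optics and that the front-of-focus/behind-focus ambiguity is resolved externally (e.g., by the second image taken with changed parameters).

```python
def blur_diameter(u, f, s, D):
    """Blur-circle diameter for a point at distance u. From the thin-lens
    law 1/f = 1/u + 1/v, a sensor at distance s sees a circle of diameter
    d = D * s * |1/f - 1/u - 1/s|, where D is the aperture diameter."""
    return D * s * abs(1.0 / f - 1.0 / u - 1.0 / s)

def depth_from_blur(d, f, s, D, behind_focus=True):
    """Invert the relation above for u. A single image cannot tell whether
    the object is in front of or behind the plane of best focus; the flag
    stands in for that externally resolved sign."""
    sign = 1.0 if behind_focus else -1.0
    return 1.0 / (1.0 / f - 1.0 / s - sign * d / (D * s))
```

Changing D (or f, or s) changes the measured blur in a known way, which is what lets two such images pin down depth without any correspondence search.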

Pose estimation and integration for complete 3D model reconstruction

Sixth IEEE Workshop on Applications of Computer Vision, 2002. (WACV 2002). Proceedings.

An automatic 3D model reconstruction technique is presented to acquire complete 3D models of real objects. The technique is based on novel approaches to pose estimation and integration. Two different poses of an object are used because a single pose often hides some surfaces from a range sensor. The presence of hidden surfaces makes the 3D model reconstructed from any single pose a partial model. Two such partial 3D models are reconstructed for two different poses of the object using a multi-view 3D modeling technique, and the two partial models are then registered. Coarse registration is facilitated by a novel pose estimation technique between the two models: the pose is estimated by matching a stable tangent plane (STP) of each pose model with the base tangent plane (BTP), which is invariant for a vision system. The partial models are then integrated into a complete 3D model based on the voxel classification defined in multi-view integration. Texture mapping is applied to obtain a photo-realistic reconstruction of the object.

Image Defocus Simulator: A Software Tool

Accurate reconstruction of three-dimensional shape and focused image from a sequence of noisy defocused images

Three-Dimensional Imaging and Laser-Based Systems for Metrology and Inspection II, 1997

Analysis of defocused image data for 3D shape recovery using a regularization technique

Three-Dimensional Imaging and Laser-based Systems for Metrology and Inspection III, 1997

Root-mean square error in passive autofocusing and 3D shape recovery

Three-Dimensional Imaging and Laser-Based Systems for Metrology and Inspection II, 1997

Computer modeling and simulation of an active vision camera system

Integration of multiple-baseline color stereo vision with focus and defocus analysis for 3D shape measurement

Three-Dimensional Imaging, Optical Metrology, and Inspection IV, 1998

A 3D vision system named SVIS is developed for three-dimensional shape measurement that integrates three methods: (i) multiple-baseline, multiple-resolution Stereo Image Analysis (SIA) using color image data, (ii) Image Defocus Analysis (IDA), and (iii) Image Focus Analysis (IFA). IDA and IFA are less accurate than stereo, but they do not suffer from the correspondence problem associated with stereo. A rough 3D shape is first obtained using IDA, and IFA is then used to obtain an improved estimate. The result is used in SIA to solve the correspondence problem and obtain an accurate measurement of 3D shape. SIA is implemented using color (RGB) images recorded at multiple baselines. Color images provide more information than monochrome images for stereo matching; matching errors are therefore reduced and the accuracy of 3D shape is improved. Further improvements are obtained through multiple-baseline stereo analysis. First, short-baseline images are analyzed to obtain an initial estimate of 3D shape. In this step, stereo matching errors are low and computation is fast, since a shorter baseline results in lower disparities. The initial estimate of 3D shape is then used to match longer-baseline stereo images, which yields a more accurate estimate. The stereo matching step is implemented using a multiple-resolution matching approach to reduce computation: lower-resolution images are matched first, and the results are used in matching higher-resolution images. This paper presents the algorithms and experimental results of 3D shape measurement on SVIS for several objects. These results suggest a practical vision system for 3D shape measurement.
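The coarse-to-fine baseline scheme rests on the rectified-stereo relation Z = fB/d: a short baseline gives small disparities (easy, fast matching), and the resulting rough depth predicts where to look in the long-baseline pair. A minimal sketch of that bookkeeping, with our own names rather than SVIS code:

```python
def depth_from_disparity(f_px, baseline, disparity):
    # Rectified stereo: Z = f * B / d (f in pixels, B in metres, d in pixels).
    return f_px * baseline / disparity

def predicted_disparity(f_px, baseline, depth):
    # Inverse relation: seed the long-baseline matcher near this value so it
    # searches only a narrow window instead of the full disparity range.
    return f_px * baseline / depth
```

Quadrupling the baseline quadruples the disparity for the same depth, which improves depth resolution, while the prediction keeps the expensive long-baseline search confined to a few pixels.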

New dynamic zoom calibration technique for a stereo-vision-based multiview 3D modeling system

SPIE Proceedings, 2004

A new technique is proposed for calibrating a 3D modeling system with variable zoom based on multi-view stereo image analysis. The 3D modeling system uses a stereo camera with a variable zoom setting and a turntable for rotating an object. Given an object whose complete 3D model (mesh and texture map) needs to be generated, the object is placed on the turntable and stereo images of the object are captured from multiple views by rotating the turntable. Partial 3D models generated from different views are integrated to obtain a complete 3D model of the object. Changing the zoom to accommodate objects of different sizes and at different distances from the stereo camera changes several internal camera parameters such as the focal length and image center. The parameters of the turntable's rotation axis also change. We present camera calibration techniques for estimating the camera parameters and the rotation axis for different zoom settings. The Perspective Projection Matrices (PPM) of the cameras are calibrated at a selected set of zoom settings. The PPM is decomposed into intrinsic parameters, orientation angles, and translation vectors. Camera parameters at an arbitrary intermediate zoom setting are estimated from the nearest calibrated zoom positions through interpolation. A performance evaluation of this technique is presented with experimental results. We also present a refinement technique for stereo rectification that improves partial shape recovery, and the rotation axis for multi-view capture at different zoom settings is estimated without further calibration. Complete 3D models obtained with our techniques are presented.
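The interpolation step between calibrated zoom settings can be sketched as follows. The dictionary-of-tuples interface and plain linear interpolation are our simplifications; the paper interpolates parameters decomposed from full projection matrices.

```python
from bisect import bisect_right

def interpolate_intrinsics(zoom, calib):
    """Estimate intrinsics at an arbitrary zoom setting from the nearest
    calibrated settings. `calib` maps zoom value -> (fx, fy, cx, cy)."""
    zs = sorted(calib)
    if zoom <= zs[0]:                 # clamp below the calibrated range
        return calib[zs[0]]
    if zoom >= zs[-1]:                # clamp above the calibrated range
        return calib[zs[-1]]
    i = bisect_right(zs, zoom)        # first calibrated zoom above `zoom`
    z0, z1 = zs[i - 1], zs[i]
    t = (zoom - z0) / (z1 - z0)       # fractional position between them
    return tuple((1 - t) * a + t * b for a, b in zip(calib[z0], calib[z1]))
```

Calibrating a handful of zoom positions offline and interpolating at run time avoids recalibrating the camera every time the zoom motor moves.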

New technique for registration and integration of partial 3D models

Machine Vision and Three-Dimensional Imaging Systems for Inspection and Metrology II, 2002

Model for image sensing and digitization in machine vision

Optics, Illumination, and Image Sensing for Machine Vision V, 1991

Camera calibration and performance evaluation of depth from defocus (DFD)

SPIE Proceedings, 2005

Real-time and accurate autofocusing of stationary and moving objects is an important problem in modern digital cameras. Depth From Defocus (DFD) is a technique for autofocusing that needs only two or three images recorded with different camera parameters. In practice, many factors affect the performance of DFD algorithms, such as nonlinear sensor response, lens vignetting, and magnification variation. In this paper, we present calibration methods and algorithms for these three factors. Their correctness and their effects on the performance of DFD have been investigated through experiments.
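Of the three factors, lens vignetting has the simplest textbook remedy, flat-field division: image a uniformly lit target, normalize it, and divide it out. The sketch below illustrates that general idea only; it is not the paper's calibration procedure, and the function name is ours.

```python
import numpy as np

def correct_vignetting(img, flat):
    """Divide out the falloff measured from a uniformly lit white target.
    `flat` is the camera's image of that target; normalising it so the
    brightest pixel has gain 1 turns division into a per-pixel gain fix."""
    gain = flat / flat.max()
    return img / gain
```

DFD estimates blur from local intensity variation, so uncorrected off-axis falloff would be misread as extra defocus, which is why this factor must be calibrated out first.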

Continuous focusing of moving objects using image defocus

Digital vision system for three-dimensional model acquisition

Vision Geometry IX, 2000

New method for shape from focus

Machine Vision Applications, Architectures, and Systems Integration II, 1993

Performance evaluation of different depth from defocus (DFD) techniques

Two- and Three-Dimensional Methods for Inspection and Metrology III, 2005

In this paper, several binary-mask-based Depth From Defocus (DFD) algorithms are proposed to improve autofocusing performance and robustness. A binary mask is defined by thresholding the image Laplacian to remove unreliable points with low Signal-to-Noise Ratio (SNR). Three different DFD schemes (with/without spatial integration and with/without squaring) are investigated and evaluated, both through simulation and actual experiments. The actual experiments use a large variety of objects, including very low contrast Ogata test charts. Experimental results show that the autofocusing RMS step error is less than 2.6 lens steps, which corresponds to 1.73%. Although our discussion in this paper is mainly focused on the spatial domain method STM1, this technique should be of general value for other approaches such as STM2 and other spatial-domain algorithms.
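The binary mask itself is a one-liner once a Laplacian is chosen. The numpy sketch below uses the standard 4-neighbour discrete Laplacian and our own threshold convention; the abstract does not specify the paper's exact operator or threshold.

```python
import numpy as np

def focus_mask(img, thresh):
    """Boolean mask keeping only pixels whose Laplacian magnitude clears
    `thresh`. Uses the 4-neighbour discrete Laplacian with circular
    boundary handling via np.roll; low-SNR (flat) pixels come out False."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return np.abs(lap) >= thresh
```

The DFD estimate would then be computed, or averaged, only where the mask is True, which is what suppresses the noise-dominated points the abstract refers to.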

Localized and computationally efficient approach to shift-variant image deblurring

2008 15th IEEE International Conference on Image Processing, 2008

A new localized and computationally efficient approach is presented for shift-variant (space-variant) image restoration. Unlike conventional approaches, it models shift-variant blurring in a completely local form based on the recently proposed Rao Transform (RT). RT facilitates almost exact inversion of the blurring process locally and permits very fine-grained parallel implementation. The new approach naturally exploits the spatial locality of blurring kernels and the smoothness of underlying focused images. It formulates the deblurring problem in terms of local parameters that are less correlated than raw image data. It is a fundamental advance that is general and not limited to any specific form of the blurring kernel, such as a Gaussian. It has significant theoretical and computational advantages over conventional approaches such as those based on Singular Value Decomposition of blurring kernel matrices. Experimental results are presented for both synthetic and real image data. The approach is also relevant to solving integral equations.
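The abstract does not specify the Rao Transform itself, but the conventional formulation it argues against is easy to state: stack the image into a vector and let each row of a matrix carry that position's own kernel. A 1-D numpy illustration of that baseline (names and the Gaussian kernel family are ours):

```python
import numpy as np

def shift_variant_blur_matrix(n, sigma_at):
    """Build an n x n matrix A whose row i holds a normalised Gaussian of
    width sigma_at(i): b = A @ x blurs x with a position-dependent kernel,
    which is exactly the situation shift-variant restoration must invert."""
    A = np.empty((n, n))
    cols = np.arange(n)
    for i in range(n):
        w = np.exp(-0.5 * ((cols - i) / sigma_at(i)) ** 2)
        A[i] = w / w.sum()            # each row sums to 1 (flux preserved)
    return A
```

The conventional route inverts A globally (e.g., via SVD), whose cost grows rapidly with image size; the paper's point is that a sufficiently local model avoids ever forming and inverting this matrix as a whole.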

Robust depth-from-defocus for autofocusing in the presence of image shifts

SPIE Proceedings, 2008

A new passive ranging technique named Robust Depth-from-Defocus (RDFD) is presented for autofocusing in digital cameras. It is adapted to work in the presence of image shift and scale change caused by camera, hand, or object motion. RDFD is similar to spatial-domain Depth-from-Defocus (DFD) techniques in terms of computational efficiency, but it does not require pixel correspondence between two images captured at different defocus levels. It requires only approximate correspondence between image regions in different frames, as in Depth-from-Focus (DFF) techniques. The theory and computational algorithms are presented for two different variations of RDFD. Experimental results show that RDFD is robust against image shifts and useful in practical applications. RDFD also provides insight into the close relation between DFF and DFD techniques.
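Why region-level correspondence suffices can be seen with any focus measure aggregated over a whole window: its value does not depend on where the detail sits inside the window. The measure below (sum of squared Laplacians, a standard DFF-style criterion, not necessarily the paper's) is unchanged by a circular shift of the region and drops when the region is blurred.

```python
import numpy as np

def region_sharpness(img):
    """Sum of squared 4-neighbour Laplacians over the whole region.
    Because the value is aggregated, small shifts of the content inside
    the region leave it essentially unchanged, so exact pixel
    correspondence between frames is not required."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return float((lap ** 2).sum())
```

Comparing such region-level measures across frames with different defocus is the common ground between DFF and DFD that the abstract alludes to.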
