Todor Georgiev - Academia.edu
Papers by Todor Georgiev
Computer Graphics Forum, 2010
The focused plenoptic camera differs from the traditional plenoptic camera in that its microlenses are focused on the photographed object rather than at infinity. The spatio-angular tradeoffs available with this approach enable rendering of final images that have significantly higher resolution than those from traditional plenoptic cameras. Unfortunately, this approach can result in visible artifacts when basic rendering is used. In this paper, we present two new methods that work together to minimize these artifacts. The first method is based on careful design of the optical system. The second method is computational and based on a new lightfield rendering algorithm that extracts the depth information of a scene directly from the lightfield and then uses that depth information in the final rendering. Experimental results demonstrate the effectiveness of these approaches.
Lightfield photography is based on capturing discrete representations of all light rays in a volume of 3D space. Compared to conventional photography, which captures 2D images, lightfield photography captures 4D data. To multiplex this 4D radiance onto ...
This paper presents a theory that encompasses both “plenoptic” (microlens based) and “heterodyning” (mask based) cameras in a single frequency-domain mathematical formalism. Light-field capture has traditionally been analyzed using spatio-angular representation, with the exception of the frequency-domain “heterodyning” work. In this paper we interpret “heterodyning” as a general theory of multiplexing the radiance in the frequency domain. Using this interpretation, we derive a mathematical theory of recovering the 4D spatial and angular information from the multiplexed 2D frequency representation. The resulting method is applicable to all lightfield cameras, lens-based and mask-based. The generality of our approach suggests new designs for lightfield cameras. We present one such novel lightfield camera, based on a mask outside a conventional camera. Experimental results are presented for all cameras described.
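The core idea above — a periodic mask modulates the radiance, creating shifted copies of its spectrum that can be separated in the Fourier domain — can be illustrated with a deliberately simplified 1D toy model. All parameters (signal frequency, mask frequency, band width) are illustrative choices, not values from the paper:

```python
import numpy as np

# 1D toy model of mask-based "heterodyning": a cosine mask of
# frequency f0 modulates the signal, creating shifted copies of its
# spectrum; demultiplexing recovers a copy by re-centering a tile of
# the Fourier transform and low-pass filtering.
n, f0 = 256, 32
x = np.linspace(0, 1, n, endpoint=False)
signal = np.sin(2 * np.pi * 3 * x)        # band-limited scene term
mask = 1 + np.cos(2 * np.pi * f0 * x)     # periodic attenuation mask
captured = signal * mask                  # what the sensor records

spectrum = np.fft.fft(captured)
# The baseband copy sits around frequency 0; modulated copies sit
# around +/- f0.  Re-center the copy that was shifted to +f0:
shifted = np.roll(spectrum, -f0)
# Keep only the low-frequency band where the scene term lives:
band = 8
lowpass = np.zeros_like(shifted)
lowpass[:band] = shifted[:band]
lowpass[-band:] = shifted[-band:]
recovered = 2 * np.fft.ifft(lowpass).real  # factor 2: cosine carries 1/2
```

In the actual 2D camera the same operation is applied per tile of the 2D Fourier transform, with the re-centered tiles stacked as angular samples of the 4D radiance.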
Camera technology has evolved tremendously in the last 10 years, with the proliferation of camera-phones fueling unprecedented advancements in CMOS image sensors. As an emerging field, some of the problems are justified while others are by-products of the ...
IEEE Computer Graphics and Applications, 2011
This approach uses a focused plenoptic camera to capture the plenoptic function's rich "non 3D" structure. It employs two techniques. The first simultaneously captures multiple exposures (or other aspects) based on a microlens array having an interleaved set of different filters. The second places multiple filters at the main lens aperture.
Depth estimation in the focused plenoptic camera is a critical step for most applications of this technology, and it poses interesting challenges because the estimation is content based. We present an iterative, content-adaptive algorithm that exploits the redundancy found in images captured with the focused plenoptic camera. For each point, our algorithm determines its depth along with a measure of reliability, allowing subsequent enhancement of the spatial resolution of the depth map. We note that the spatial resolution of the recovered depth corresponds to discrete depth values in the captured scene, which we refer to as slices. Moreover, each slice has a different depth and allows extraction of a different spatial resolution of depth, depending on the scene content present in that slice and on occluding areas. Interestingly, since the focused plenoptic camera is not theoretically limited in spatial resolution, we show that the recovered spatial resolution is depth related, and as such, rendering of a focused plenoptic image is content dependent.
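In the focused plenoptic camera, a scene point appears in several neighbouring microimages, and the shift between its copies encodes depth. A minimal single-patch sketch of this matching step (the function name, patch size, and search range are illustrative; the paper's actual algorithm is iterative and per-point with reliability estimates):

```python
import numpy as np

def disparity_between_microimages(a, b, patch, max_shift):
    """Estimate the integer horizontal shift that best aligns a
    central patch of microimage `a` inside neighbouring microimage
    `b`, by minimising the sum of squared differences.  In the
    focused plenoptic camera this shift encodes scene depth."""
    m = a.shape[0]
    c = (m - patch) // 2            # top-left corner of central patch
    ref = a[c:c + patch, c:c + patch]
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if c + s < 0 or c + s + patch > m:
            continue                # candidate patch falls off the edge
        cand = b[c:c + patch, (c + s):(c + s + patch)]
        cost = np.sum((ref - cand) ** 2)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# Toy microimages: `b` shows the same content shifted by 2 pixels
rng = np.random.default_rng(0)
a = rng.random((16, 16))
b = np.roll(a, -2, axis=1)
shift = disparity_between_microimages(a, b, patch=6, max_shift=4)
```

The matching cost itself can also serve as a crude reliability measure: flat or occluded regions produce near-identical costs across shifts, which is where the content dependence described in the abstract shows up.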
We demonstrate high dynamic range (HDR) imaging with the Plenoptic 2.0 camera. Multiple-exposure capture is achieved in a single shot using microimages created by a microlens array that has an interleaved set of different apertures.
Plenoptic cameras, constructed with internal microlens arrays, focus those microlenses at infinity in order to sample the 4D radiance directly at the microlenses. The consequent assumption is that each microlens image is completely defocused with respect to the image created by the main camera lens and the outside object. As a result, only a single pixel in the final image can be rendered from each microlens image, resulting in disappointingly low resolution. In this paper, we present a new approach to lightfield capture and image rendering that interprets the microlens array as an imaging system focused on the focal plane of the main camera lens. This approach captures a lightfield with significantly higher spatial resolution than the traditional approach, allowing us to render high resolution images that meet the expectations of modern photographers. Although the new approach samples the lightfield with reduced angular density, analysis and experimental results demonstrate that there is sufficient parallax to completely support lightfield manipulation algorithms such as refocusing and novel views.
Journal of Electronic Imaging, 2010
Plenoptic cameras, constructed with internal microlens arrays, capture both spatial and angular information, i.e., the full 4-D radiance, of a scene. The design of traditional plenoptic cameras assumes that each microlens image is completely defocused with respect to the image created by the main camera lens. As a result, only a single pixel in the final image is rendered from each microlens image, resulting in disappointingly low resolution. A recently developed alternative approach based on the focused plenoptic camera uses the microlens array as an imaging system focused on the image plane of the main camera lens. The flexible spatioangular tradeoff that becomes available with this design enables rendering of final images with significantly higher resolution than those from traditional plenoptic cameras. We analyze the focused plenoptic camera in optical phase space and present basic, blended, and depth-based rendering algorithms for producing high-quality, high-resolution images. We also present our graphics-processing-unit-based implementations of these algorithms, which are able to render full screen refocused images in real time.
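The "basic" rendering mentioned above amounts to cropping a patch from the centre of every microimage and tiling the patches into the output image. A minimal sketch, assuming a regular grid of square microimages (the function name, microimage size, and patch size are illustrative, not from the paper):

```python
import numpy as np

def render_basic(lightfield, microimage_size, patch_size):
    """Basic focused-plenoptic rendering: tile the central
    patch_size x patch_size crop of every microimage.

    `lightfield` is the raw 2D sensor image; both side lengths are
    assumed to be exact multiples of `microimage_size`."""
    m, p = microimage_size, patch_size
    rows = lightfield.shape[0] // m
    cols = lightfield.shape[1] // m
    off = (m - p) // 2  # centre the patch within each microimage
    out = np.empty((rows * p, cols * p), dtype=lightfield.dtype)
    for i in range(rows):
        for j in range(cols):
            micro = lightfield[i*m:(i+1)*m, j*m:(j+1)*m]
            out[i*p:(i+1)*p, j*p:(j+1)*p] = micro[off:off+p, off:off+p]
    return out

# Toy example: a 12x12 "sensor" holding a 2x2 grid of 6x6 microimages
lf = np.arange(12 * 12).reshape(12, 12)
img = render_basic(lf, microimage_size=6, patch_size=2)
```

Choosing a different patch size trades spatial for angular resolution, and shifting `off` selects a different view; the blended and depth-based algorithms refine this by weighting overlapping patches and by varying the patch size with estimated depth.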
Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with the subsequent superresolution process. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture we show the theoretical possibility for rendering final images at full sensor resolution.
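Treating the color channels separately starts by splitting the raw mosaic into its per-channel sample grids rather than interpolating first. A minimal sketch, assuming an RGGB Bayer layout (the function name and layout assumption are illustrative):

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split an RGGB Bayer mosaic into its four colour sample planes
    without interpolation, so each channel can be processed on its own
    sub-pixel grid instead of being resampled by demosaicing first."""
    r  = raw[0::2, 0::2]   # red samples
    g1 = raw[0::2, 1::2]   # green samples on red rows
    g2 = raw[1::2, 0::2]   # green samples on blue rows
    b  = raw[1::2, 1::2]   # blue samples
    return r, g1, g2, b

# Toy 4x4 mosaic with distinct values so the split is easy to verify
raw = np.arange(16).reshape(4, 4)
r, g1, g2, b = split_bayer_rggb(raw)
```

Because each channel's samples sit on a shifted half-pixel grid, keeping them separate preserves exactly the sub-pixel phase information that plenoptic superresolution depends on.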
The plenoptic function was originally defined as a record of both the 3D structure of the lightfield and of its dependence on parameters such as wavelength, polarization, etc. Still, most work on these ideas has emphasized the 3D aspect of lightfield capture and manipulation, with less attention paid to other parameters. In this paper, we leverage the high resolution and flexible sampling trade-offs of the focused plenoptic camera to perform high-resolution capture of the rich “non 3D” structure of the plenoptic function. Two different techniques are presented and analyzed, using extended dynamic range photography as a particular example. The first technique simultaneously captures multiple exposures with a microlens array that has an interleaved set of different filters. The second technique places multiple filters at the main lens aperture. Experimental results validate our approach, producing 1.3Mpixel HDR images with a single capture.
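Once the interleaved filters yield registered bright and dark exposures of the same scene, the HDR merge itself is simple in linear radiance space. A generic sketch of such a merge, not the paper's exact pipeline (function name, saturation threshold, and filter ratio are illustrative):

```python
import numpy as np

def merge_exposures(low, high, ratio, saturation=0.95):
    """Merge two registered linear-domain exposures into one HDR
    image.  `low` was captured through the darker filter and is
    scaled by `ratio` to match `high`; wherever `high` is saturated,
    the value is taken from the scaled `low` image instead."""
    scaled_low = low * ratio
    return np.where(high < saturation, high, scaled_low)

# Toy scene with radiance above the bright exposure's clipping point
radiance = np.array([0.1, 0.5, 0.9, 2.0, 4.0])
high = np.clip(radiance, 0, 1.0)       # bright exposure: clips highlights
low = np.clip(radiance / 4, 0, 1.0)    # 2-stop darker filter: no clipping
hdr = merge_exposures(low, high, ratio=4.0)
```

A production merge would blend smoothly near the saturation threshold and weight by noise, but the example shows why a single capture suffices: both exposures are recorded simultaneously by neighbouring microimages.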
We propose a general mathematical framework for dealing with Light Fields: