Todor Georgiev | Adobe Systems

Papers by Todor Georgiev

The radon image as plenoptic function

We introduce a novel plenoptic function that can be directly captured or generated after the fact in plenoptic cameras. Whereas previous approaches represent the plenoptic function over a 4D ray space (as radiance or light field), we introduce a representation of the plenoptic function over a 3D space of planes. Our approach uses the Radon plenoptic function instead of the traditional 4D plenoptic function to achieve a 3D representation, which promises reduced size, making it suitable for use in mobile devices. Moreover, we show that the original 3D luminous density of the scene can be recovered via the inverse Radon transform. Finally, we demonstrate how various 3D views and differently focused pictures can be rendered directly from this new representation.
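As a toy illustration of the Radon idea (line/plane integrals of a density, from which the density can be recovered), the sketch below computes discrete Radon projections of a 2D image by rotating the sampling grid and summing columns. This is not the paper's 3D plane-space representation, just the basic transform it builds on; the nearest-neighbor rotation and the test image are illustrative choices.

```python
import numpy as np

def radon_projections(img, angles):
    """Discrete Radon transform: line integrals of `img` at the given angles.

    Implemented by rotating the sampling grid about the image center
    (nearest neighbor) and summing along columns; a coarse but
    dependency-free sketch of the transform.
    """
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    sino = np.zeros((len(angles), w))
    for i, theta in enumerate(angles):
        c, s = np.cos(theta), np.sin(theta)
        xr = c * (xs - cx) - s * (ys - cy) + cx
        yr = s * (xs - cx) + c * (ys - cy) + cy
        xi = np.clip(np.round(xr).astype(int), 0, w - 1)
        yi = np.clip(np.round(yr).astype(int), 0, h - 1)
        rotated = img[yi, xi]
        sino[i] = rotated.sum(axis=0)  # integrate along the vertical direction
    return sino

# Each projection of a density integrates the same total "mass":
# at 0 and 90 degrees the rotation is exact, so the sums match img.sum().
img = np.zeros((33, 33))
img[12:20, 14:18] = 1.0
sino = radon_projections(img, [0.0, np.pi / 2])
print(sino.shape)  # (2, 33)
```

Recovering the density from many such projections is the inverse Radon transform (filtered back-projection in practice).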

Lightfield photography: theory and methods

International Conference on Computer Graphics and Interactive Techniques, 2010

Removing Artifacts Due To Frequency-Domain Processing of Light-Fields

Eurographics, 2008

In previous works, light-field capture has been analyzed in the spatio-angular representation. A light-field camera samples the optical signal within a single photograph by multiplexing the 4D radiance onto the physical 2D surface of the sensor. Besides sampling the light field spatially, methods have been developed for multiplexing the radiance in the frequency domain by optically mixing different spatial and angular frequency components. The mathematical method for recovering the multiplexed spatial and angular information from the frequency representation is straightforward. However, the results are prone to artifacts due to limitations inherent in frequency-domain processing of images. In this paper, we seek to understand the characteristics and sources of these artifacts, study their effect on the quality of the results, and present various methods for their removal.

Vision, healing brush, and fiber bundles

Proceedings of SPIE, Mar 18, 2005

The Healing Brush is a tool introduced for the first time in Adobe Photoshop (2002) that removes defects in images by seamless cloning (gradient-domain fusion). The Healing Brush algorithms are built on a new mathematical approach that uses fibre bundles and connections to model the representation of images in the visual system. Our mathematical results are derived from first ...

Front Matter: Volume 8667

Proceedings of SPIE, Apr 4, 2013

The papers included in this volume were part of the technical conference cited on the cover and title page. Papers were selected and subject to review by the editors and conference program committee. Some conference presentations may not be available for publication. The papers published in these proceedings reflect the work and thoughts of the authors and are published herein as submitted. The publishers are not responsible for the validity of the information or for any outcomes resulting from reliance thereon.

Use of the complex-scaling method in calculations of Stark resonances

Physical Review A, Jul 1, 1997

Herbst and Simon [Phys. Rev. Lett. 41, 67 (1978)] have proven that the complex-scaling method (CSM) is applicable to the Stark effect in atoms. However, their proof is limited to complex scaling with an argument < π/3. Their method makes no statement as to the ...

Introduction to the JEI Focal Track Presentations

Proceedings of SPIE, Feb 26, 2013

In addition to the usual conference presentations, the 2013 Mobile Computational Photography conference includes a "focal track" of peer-reviewed papers that appear in a special section of the Journal of Electronic Imaging. Here, we introduce these papers, using an extract from the Editorial accompanying the JEI issue.

Special Section Guest Editorial: Mobile Computational Photography

Journal of Electronic Imaging, Feb 21, 2013

Multimode plenoptic imaging

Proceedings of SPIE, Feb 27, 2015

The plenoptic function was originally defined as a complete record of the 3D structure of radiance in a scene and its dependence on a number of parameters including position, angle, wavelength, polarization, etc. Recently developed plenoptic cameras typically capture only the geometric aspects of the plenoptic function. Using this information, computational photography can render images with an infinite variety of features such as focus, depth of field, and parallax. Less attention has been paid to other, nonspatial, parameters of the plenoptic function that could also be captured. In this paper, we develop the microlens-based image sensor (also known as the Lippmann sensor) as a generalized plenoptic capture device, able to capture additional information based on filters or modifiers placed on different microlenses. Multimodal capture can comprise many different parameters, such as high dynamic range, multispectral imaging, and so on. Here we explore two particular examples in detail: polarization capture based on interleaved polarization filters, and capture with extended depth of field based on microlenses with different focal lengths.

Plenoptic Principal Planes

Imaging and Applied Optics, 2011

We show that the plenoptic camera is optically equivalent to an array of cameras. We compute the parameters that establish that equivalence and show where the plenoptic camera is more useful than the camera array.

Plenoptic depth map in the case of occlusions

Proceedings of SPIE, Mar 7, 2013

Recent realizations of hand-held plenoptic cameras have given rise to previously unexplored effects in photography. Designing a mobile-phone plenoptic camera is becoming feasible with the significant increase in the computing power of mobile devices and the introduction of system-on-a-chip designs. However, capturing a high number of views is still impractical due to special requirements such as an ultra-thin camera body and low cost. In this paper, we analyze a mobile plenoptic camera solution with a small number of views. Such a camera can produce a refocusable high-resolution final image if a depth map is generated for every pixel in the sparse set of views. With the captured multi-view images, the main obstacle to recovering a high-resolution depth map is occlusion. To resolve occlusions robustly, we first analyze the behavior of pixels in such situations. We show that even under severe occlusion one can still distinguish different depth layers based on statistics. We estimate the depth of each pixel by discretizing the space in the scene and conducting plane sweeping. Specifically, for each given depth, we gather all corresponding pixels from other views and model the in-focus pixels as a Gaussian distribution. We show how occluded pixels and in-focus pixels can be distinguished in order to find the depths. Final depth maps are computed for real scenes captured by a mobile plenoptic camera.
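The plane-sweep idea in this abstract can be sketched in a few lines: discretize depth (here expressed as disparity), gather the pixel each view would see under that hypothesis, and keep the hypothesis whose samples cluster most tightly. The synthetic scene below, with its made-up disparity, baselines, and noise model, is a stand-in for real gathered microimage pixels, and plain variance stands in for the paper's Gaussian occlusion modeling.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical setup: 9 views of one scene point whose true disparity is
# 2.0 pixels per unit of baseline. All numbers are synthetic.
true_disparity = 2.0
baselines = np.arange(9) - 4   # view positions relative to the center view
point_value = 0.6              # in-focus intensity of the scene point

def gathered_pixels(disparity):
    """Sample one pixel per view under a candidate disparity hypothesis.

    At the correct disparity all views land on the same scene point, so the
    samples cluster tightly; at a wrong disparity they scatter over
    unrelated content (modeled here as growing Gaussian noise).
    """
    error = abs(disparity - true_disparity) * np.abs(baselines)
    noise = rng.normal(0.0, 0.01 + 0.2 * np.minimum(error, 1.0), size=9)
    return point_value + noise

# Plane sweep: score each depth hypothesis by the spread of its gathered
# pixels and keep the tightest cluster.
candidates = np.linspace(0.0, 4.0, 21)
scores = [np.var(gathered_pixels(d)) for d in candidates]
best = candidates[int(np.argmin(scores))]
print(best)  # expected to land near the true disparity of 2.0
```

Handling occlusion amounts to being more careful in the scoring step: instead of raw variance, fit the dominant Gaussian cluster and treat outlying samples as occluded views.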

Content-based depth estimation in focused plenoptic camera

Proceedings of SPIE, Jan 23, 2011

Depth estimation in the focused plenoptic camera is a critical step for most applications of this technology and poses interesting challenges, as the estimation is content based. We present an iterative, content-adaptive algorithm that exploits the redundancy found in images captured with the focused plenoptic camera. Our algorithm determines for each point its depth along with a measure of reliability, allowing subsequent ...

Depth of Field in Plenoptic Cameras

Eurographics, 2009

Certain new algorithms used by plenoptic cameras require focused microlens images. The range of applicability of these algorithms therefore depends on the depth of field of the relay system comprising the plenoptic camera. We analyze the relationships and tradeoffs between camera parameters and depth of field and characterize conditions for optimal refocusing, stereo, and 3D imaging.

Superresolution with the focused plenoptic camera

Proceedings of SPIE, Feb 10, 2011

Digital images from a CCD or CMOS sensor with a color filter array must undergo a demosaicing process to combine the separate color samples into a single color image. This interpolation process can interfere with subsequent superresolution. Plenoptic superresolution, which relies on precise sub-pixel sampling across captured microimages, is particularly sensitive to such resampling of the raw data. In this paper we present an approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data. Our approach exploits the interleaving provided by typical color filter arrays (e.g., the Bayer filter) to further refine plenoptic sub-pixel sampling. Our rendering algorithm treats the color channels in a plenoptic image separately, which improves final superresolution by a factor of two. With appropriate plenoptic capture, we show the theoretical possibility of rendering final images at full sensor resolution.
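The per-channel treatment mentioned above starts by separating the raw mosaic into its color planes rather than interpolating first. A minimal sketch of that separation, assuming an RGGB unit cell (other Bayer layouts just permute the four slices):

```python
import numpy as np

def split_bayer_rggb(raw):
    """Split an RGGB Bayer mosaic into its four color planes.

    Processing each plane separately (instead of demosaicing first)
    preserves the raw sub-pixel sampling that plenoptic superresolution
    relies on. Assumes an even-sized sensor with an RGGB unit cell.
    """
    r  = raw[0::2, 0::2]   # red:    even rows, even columns
    g1 = raw[0::2, 1::2]   # green:  even rows, odd columns
    g2 = raw[1::2, 0::2]   # green:  odd rows,  even columns
    b  = raw[1::2, 1::2]   # blue:   odd rows,  odd columns
    return r, g1, g2, b

# Toy 4x4 mosaic with values 0..15 so the slicing is easy to check.
raw = np.arange(16).reshape(4, 4)
r, g1, g2, b = split_bayer_rggb(raw)
print(r.tolist())  # [[0, 2], [8, 10]]
print(b.tolist())  # [[5, 7], [13, 15]]
```

Each plane samples the scene on a grid shifted by one sensor pixel relative to the others, and it is exactly this built-in half-period interleaving that the superresolution rendering can exploit.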

Reducing Plenoptic Camera Artifacts

Computer Graphics Forum, Jun 14, 2010

The focused plenoptic camera differs from the traditional plenoptic camera in that its microlenses are focused on the photographed object rather than at infinity. The spatio-angular tradeoffs available with this approach enable rendering of final images that have significantly higher resolution than those from traditional plenoptic cameras. Unfortunately, this approach can result in visible artifacts when basic rendering is used. In this paper, we present two new methods that work together to minimize these artifacts. The first method is based on careful design of the optical system. The second method is computational and based on a new lightfield rendering algorithm that extracts the depth information of a scene directly from the lightfield and then uses that depth information in the final rendering. Experimental results demonstrate the effectiveness of these approaches.

Lytro camera technology: theory, algorithms, performance analysis

Proceedings of SPIE, Mar 7, 2013

The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power that characterizes mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the camera as a black box and relying on our interpretation of the image data it saves. We present our findings on the Lytro file structure, image calibration, and image rendering; in this context, artifacts and final image resolution are discussed.

Stark Resonances and Complex Scaling

The Avron-Herbst-Simon theory of the Stark effect shows that while the Stark Hamiltonian has only continuous spectrum, there is an appropriately built S-matrix which possesses resonance poles on the second sheet. These can be found with the method of ...

Interferometric Measurement of Sensor MTF and Crosstalk

IS&T International Symposium on Electronic Imaging Science and Technology, Jan 29, 2017

We have developed a laser interferometer for the precise measurement of pixel MTF and pixel crosstalk in camera sensors. One advantage of our interferometric method for measuring sensor MTF is that the sinusoidal illumination pattern is formed directly on the sensor rather than beamed through a lens. This allows a precise measurement of sensor MTF and crosstalk unaltered by the lens. Another advantage is that we measure MTF over a wide range of spatial frequencies, extending well above the Nyquist frequency. We discuss the theory behind the expected and observed sensor performance and show our experimental results. A comparison with the slanted-edge method shows that we achieve better precision and cover a wider range of frequencies.
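The measurement principle can be illustrated numerically: project a sinusoidal fringe, let each pixel average it over its aperture, and read MTF as the ratio of measured to incident modulation. The simulation below assumes an ideal crosstalk-free pixel with a one-pixel-wide box aperture, whose MTF is analytically a sinc; it is a model of the principle, not of the paper's apparatus.

```python
import numpy as np

def modulation(signal):
    """Michelson contrast of a sampled sinusoidal fringe pattern."""
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

def box_pixel_mtf(f_cycles_per_pixel):
    """Analytic MTF of an ideal box aperture one pixel wide."""
    return np.abs(np.sinc(f_cycles_per_pixel))  # np.sinc(x) = sin(pi x)/(pi x)

# Simulate fringes at several frequencies, integrate each over the pixel
# aperture, and recover MTF as the measured fringe contrast.
freqs = np.array([0.1, 0.25, 0.5])   # cycles per pixel; 0.5 = Nyquist
u = np.linspace(-0.5, 0.5, 1001)     # fine samples across one pixel aperture
measured = []
for f in freqs:
    centers = np.arange(64)
    # Average the fringe 1 + cos(2 pi f x) over each 1-pixel-wide aperture.
    samples = np.array([np.mean(1 + np.cos(2 * np.pi * f * (c + u)))
                        for c in centers])
    measured.append(modulation(samples))
print(np.allclose(measured, box_pixel_mtf(freqs), atol=1e-3))  # True
```

Because the fringe frequency is set by the interferometer rather than by a lens, nothing stops the sweep from continuing past Nyquist, which is the wide-frequency-range advantage claimed in the abstract.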

Image Reconstruction Invariant to Relighting

Eurographics, 2005

This paper describes an improvement to the Poisson image editing method for seamless cloning. Our approach is based on minimizing an energy expression that is invariant to relighting. The improved method seamlessly reconstructs the selected region, matching both the pixel values and the texture contrast of the surrounding area, whereas previous algorithms matched pixel values only. Our algorithm solves a deeper problem: it performs reconstruction in terms of the internal working mechanisms of the human visual system. Retinex-type adaptation effects are built into the structure of the mathematical model, producing results that change covariantly with lighting.
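For reference, the baseline this paper improves on — classic Poisson seamless cloning — can be sketched as solving the Poisson equation inside the pasted region: reproduce the source gradients while matching the target on the boundary. The plain Jacobi solver and the tiny ramp example below are illustrative choices; the paper's contribution replaces the guidance term with a relighting-invariant energy, which this sketch does not implement.

```python
import numpy as np

def seamless_clone(target, source, mask, iters=500):
    """Baseline Poisson cloning: solve the Poisson equation inside `mask`.

    Inside the mask the result reproduces the source Laplacian while
    matching the target on the mask boundary. Plain Jacobi iteration;
    fine for small examples. The mask must not touch the image border.
    """
    out = target.astype(float).copy()
    src = source.astype(float)
    ys, xs = np.nonzero(mask)
    out[mask] = src[mask]            # start from the naively pasted source
    for _ in range(iters):
        nxt = out.copy()
        for y, x in zip(ys, xs):
            # Average of neighbors plus the source Laplacian at this pixel.
            lap = (4 * src[y, x] - src[y-1, x] - src[y+1, x]
                   - src[y, x-1] - src[y, x+1])
            nxt[y, x] = (out[y-1, x] + out[y+1, x]
                         + out[y, x-1] + out[y, x+1] + lap) / 4.0
        out = nxt
    return out

# A constant source has zero gradient, so cloning it into a linear ramp
# should dissolve into the ramp (a linear function is harmonic).
target = np.tile(np.arange(9.0), (9, 1))
source = np.full((9, 9), 50.0)
mask = np.zeros((9, 9), bool)
mask[3:6, 3:6] = True
result = seamless_clone(target, source, mask)
print(np.allclose(result, target, atol=1e-6))  # True
```

The relighting-invariant version changes what is preserved inside the mask (contrast rather than raw gradients), but the boundary-value solve has the same structure.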

Plenoptic rendering with interactive performance using GPUs

Proceedings of SPIE, Feb 9, 2012

Processing and rendering of plenoptic camera data require significant computational power and memory bandwidth. At the same time, real-time rendering performance is highly desirable so that users can interactively explore the infinite variety of images that can be rendered from a single plenoptic image. In this paper we describe a GPU-based approach for lightfield processing and rendering with which we are able to achieve interactive performance for focused plenoptic rendering tasks such as refocusing and novel-view generation. We present a progression of rendering approaches for focused plenoptic camera data and analyze their performance on popular GPU-based systems. Our analyses are validated with experimental results on commercially available GPU hardware. Even for complicated rendering algorithms, we are able to render 39-Mpixel plenoptic data to 2-Mpixel images at frame rates in excess of 500 frames per second.
