Light fields and computational photography
(Image gallery: fifteen thumbnails, panels (a) through (o), each described below.)
Marc Levoy has retired from Stanford University to lead a team at Google. This project is no longer active as a Stanford research project. However, many of the technologies developed during this project live on in products such as Google StreetView (e), the Lytro camera (j), and Google's Jump camera (o).
Overview
The light field, first described in Andrey Gershun's classic 1936 paper of the same name, is defined as radiance as a function of position and direction in regions of space free of occluders. In free space, the light field is a 4D function, scalar or vector depending on the exact definition employed. Light fields were introduced into computer graphics in 1996 by Marc Levoy and Pat Hanrahan. Their proposed application was image-based rendering: computing new views of a scene from pre-existing views without the need for scene geometry. (A workshop on image-based modeling and rendering was held at Stanford in 1998.)
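The 4D structure becomes concrete under the two-plane parameterization used in the 1996 paper: each ray is indexed by where it crosses a camera plane (u, v) and an image plane (s, t). As a hedged sketch (the array shapes, random stand-in data, and function names below are illustrative, not from any released code), a new view between captured camera positions can be synthesized by interpolating sub-aperture images, with no scene geometry required:

```python
import numpy as np

# A sampled light field under the two-plane parameterization:
# L[u, v, s, t] = radiance of the ray through camera-plane sample (u, v)
# and image-plane pixel (s, t). Shapes here are illustrative.
U, V, S, T = 8, 8, 64, 64
rng = np.random.default_rng(0)
L = rng.random((U, V, S, T))  # random stand-in for captured data

def subaperture_view(L, u, v):
    """The pinhole image seen from camera-plane sample (u, v)."""
    return L[u, v]

def interpolated_view(L, uf, vf):
    """Synthesize a view at fractional camera position (uf, vf) by
    bilinearly blending the four nearest sub-aperture images -- the
    core of image-based rendering without geometry."""
    u0, v0 = int(np.floor(uf)), int(np.floor(vf))
    u1, v1 = min(u0 + 1, L.shape[0] - 1), min(v0 + 1, L.shape[1] - 1)
    a, b = uf - u0, vf - v0
    return ((1 - a) * (1 - b) * L[u0, v0] + a * (1 - b) * L[u1, v0] +
            (1 - a) * b * L[u0, v1] + a * b * L[u1, v1])

view = interpolated_view(L, 3.5, 2.25)  # a view between captured cameras
```

A full renderer also interpolates in (s, t), giving quadrilinear interpolation over all four ray coordinates; the sketch above blends only across camera positions for brevity.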
Since 1996, research on light fields has followed a number of lines. On the theoretical side, researchers have developed spatial and frequency domain analyses of light field sampling and have proposed several new parameterizations of the light field, including surface light fields and unstructured Lumigraphs. On the practical side, researchers have experimented with literally dozens of ways to capture light fields, ranging from camera arrays to kaleidoscopes, as well as several ways to display them, such as an array of video projectors aimed at a lenticular sheet. Researchers have also explored the relationship between light fields and other sampled representations of light transport, such as incident light fields and reflectance fields. At Stanford, we have focused on the boundary between light fields, photography, and high-performance imaging, an area we sometimes call _computational photography_. (A workshop on this theme was held at MIT in May of 2005.) However, computational photography has grown to become broader than light fields, and our research also touches on other aspects of light fields, such as interactive animation of light fields and computing shape from light fields.
The images above depict some of the work we have done in our laboratory on light fields:
**(a)** An image from our SIGGRAPH 1996 paper, showing an array of renderings of a 3D computer model of a Buddha statuette (at top) and the transpose of the light field (at bottom).
**(b)** The Stanford Spherical Gantry, a multi-purpose 4-degree-of-freedom motorized gantry we built for capturing light fields and bidirectional reflectance distribution functions (BRDFs) of small stationary objects.
**(c)** We first demonstrated synthetic focusing of light fields in 1996. Here is our first continuous focusing, from 2002. The array of input images was captured by the same robot/camera rig used to capture a light field of Michelangelo's statue of Night in 1999. Focusing was done in software, producing this movie.
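Synthetic focusing like that in panel (c) can be sketched as shift-and-add: translate each sub-aperture image in proportion to its camera's offset from the aperture center, then average. The amount of shift per unit offset selects which depth plane comes into focus; everything off that plane is averaged into blur. The sketch below is a minimal illustration; the parameter name `alpha`, the integer-pixel shifts, and the toy data are our assumptions, not the original pipeline:

```python
import numpy as np

def synthetic_focus(L, alpha):
    """Shift-and-add refocusing of a light field L with shape
    (U, V, S, T). Each sub-aperture image is translated in proportion
    to its offset from the aperture center, then all are averaged.
    `alpha` (pixels of shift per unit camera offset) selects the
    plane that comes into focus."""
    U, V, S, T = L.shape
    cu, cv = (U - 1) / 2, (V - 1) / 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - cu)))
            dx = int(round(alpha * (v - cv)))
            out += np.roll(L[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

L = np.random.default_rng(1).random((4, 4, 16, 16))  # toy light field
img = synthetic_focus(L, 0.0)   # alpha = 0: focus at the image plane
img_near = synthetic_focus(L, 1.5)  # larger alpha: focus at another depth
```

With alpha = 0 no image is shifted, so the result is simply the mean over the aperture; sweeping alpha continuously produces the kind of continuous refocusing movie described above. A real implementation would use sub-pixel (interpolated) shifts rather than `np.roll`.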
**(d)** An array of 128 synchronized CMOS video cameras we built as part of the Stanford Multi-Camera Array project.
**(e)** PhD student Gaurav Garg rides in a pickup truck with a sideways-looking video camera, which we used to construct multi-perspective panoramas of urban landscapes in the Stanford CityBlock project. Google's StreetView grew out of this project.
**(f)** An array of 16 planar mirrors, into which we aimed a high-resolution camera and projector to produce 16 virtual cameras and projectors for our paper on synthetic aperture confocal imaging.
**(g)** An array of curved mirrorlets, created by machining and chrome-plating an aluminum block. Arrays like this are an alternative to aiming a camera into an array of microlenses.
**(h)** A photoresistor, which we used in conjunction with a video projector to capture slices of a light field using a technique called dual photography.
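Dual photography rests on Helmholtz reciprocity: if the light-transport matrix T maps a projector pattern p to the camera image c = T p, then its transpose maps patterns on a "virtual projector" (the camera) to the image seen from the real projector's viewpoint. A toy numpy illustration follows; the random matrix stands in for a measured transport, and all dimensions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cam, n_proj = 12, 9                # camera and projector pixel counts (toy)
T = rng.random((n_cam, n_proj))      # stand-in for a measured transport matrix

# Primal configuration: project a pattern, photograph the result.
p = rng.random(n_proj)               # projector illumination pattern
c = T @ p                            # image recorded by the camera

# Dual configuration: by Helmholtz reciprocity, swapping the roles of
# camera and projector corresponds to transposing the transport matrix.
p_dual = rng.random(n_cam)           # pattern on the virtual projector
c_dual = T.T @ p_dual                # image seen from the real projector's side
```

The reciprocity identity p_dual . (T p) = (T^T p_dual) . p holds for any patterns, which is what lets a single photoresistor plus a projector (which measures one row of T per projector pixel, transposed) recover images from the projector's point of view.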
**(i)** PhD student Ren Ng holds a medium format SLR camera into which he has inserted a microlens array between the sensor and main lens. The camera captures light fields instead of photographs. Here are papers describing the camera and its theory of operation. Ren's PhD dissertation, "Digital Light Field Photography," won the 2006 ACM Doctoral Dissertation Award.
**(j)** Ren is founder and current CEO of Lytro, a startup company that is commercializing his dissertation research. Pictured here is their camera, which captures a light field you can synthetically focus (a.k.a. digitally refocus) after capture.
**(k)** By inserting a microlens array (circled in red) into a standard microscope, we create a light field microscope (LFM). From its images we can produce perspective flybys, focal stacks, and volume datasets at a single instant in time. Here is a paper describing the idea.
**(l)** By inserting a second microlens array and video projector into the light path of the same microscope, we can control the incident light field falling on a specimen as well as the light field leaving it. In this example, we use our Light Field Illuminator (LFI) to change the characteristics of light falling on a single blond human hair. Here is a paper describing the idea.
**(m)** We sometimes create light fields synthetically. This Venn diagram shows a taxonomy of the kinds of apertures that can occur in light fields, under a formulation of general linear cameras with finite (i.e. non-pinhole) apertures that we have devised.
**(n)** In another theoretical paper, we explore the relationship between light fields as used in the computer graphics community and the Wigner distribution commonly used in the wave optics community.
**(o)** A ring of 16 synchronized GoPro cameras, built by Google as part of its Jump system and announced in May 2015 at Google I/O. Although not a product of our laboratory, this is a light field camera. Levoy's team at Google advised on its design and contributed software to the system.
Recent papers in this area:
Gyro-Based Multi-Image Deconvolution for Removing Handshake Blur
Proc. CVPR 2014
WYSIWYG Computational Photography via Viewfinder Editing
Jongmin Baek, Dawid Pająk, Kihwan Kim, Kari Pulli, Marc Levoy
Proc. SIGGRAPH Asia 2013
Abe Davis, Fredo Durand, Marc Levoy
Proc. Eurographics 2012
Applications of Multi-Bucket Sensors to Computational Photography
Gordon Wan, Mark Horowitz, Marc Levoy
Stanford Computer Graphics Laboratory Technical Report 2012-2
Focal stack compositing for depth of field control
David E. Jacobs, Jongmin Baek, Marc Levoy
Stanford Computer Graphics Laboratory Technical Report 2012-1
Digital Video Stabilization and Rolling Shutter Correction using Gyroscopes
Alexandre Karpenko, David E. Jacobs, Jongmin Baek, Marc Levoy
Stanford Computer Science Tech Report CSTR 2011-03
Wigner Distributions and How They Relate to the Light Field
IEEE International Conference on Computational Photography (ICCP) 2009
Veiling Glare in High Dynamic Range Imaging
Eino-Ville (Eddy) Talvala, Andrew Adams, Mark Horowitz, Marc Levoy
Proc. SIGGRAPH 2007
General Linear Cameras with Finite Aperture
Eurographics Symposium on Rendering (EGSR) 2007
Light Fields and Computational Imaging
IEEE Computer, August 2006
Symmetric Photography: Exploiting Data-sparseness in Reflectance Fields
Gaurav Garg, Eino-Ville (Eddy) Talvala, Marc Levoy, Hendrik P.A. Lensch
Proc. 2006 Eurographics Symposium on Rendering
Light Field Photography with a Hand-Held Plenoptic Camera
Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz, Pat Hanrahan
Stanford University Computer Science Tech Report CSTR 2005-02
Proc. SIGGRAPH 2005
Pradeep Sen, Billy Chen, Gaurav Garg, Steve Marschner, Mark Horowitz, Marc Levoy, Hendrik Lensch
Proc. SIGGRAPH 2005
Interactive Deformation of Light Fields
Billy Chen, Eyal Ofek, Harry Shum, Marc Levoy
Proc. Symposium on Interactive 3D Graphics and Games (I3D) 2005
Synthetic aperture confocal imaging
Marc Levoy, Billy Chen, Vaibhav Vaish, Mark Horowitz, Ian McDowall, Mark Bolas
Proc. SIGGRAPH 2004
See also the papers produced by the Stanford Multi-Camera Array, Light Field Microscope, and CityBlock projects.
Some older papers:
Light Field Rendering
Marc Levoy, Pat Hanrahan
Proc. SIGGRAPH '96
Slides from talks:
(Listed in reverse chronological order. Slides from papers may also be available on the web pages of those papers.)
- Marc Levoy, Light field photography and videography (PPT or PDF), University of Virginia, October 18, 2005
- Marc Levoy, Acquisition and Manipulation of Dense Range Data, Light Fields, and BRDFs, Workshop on Image-Based Modeling and Rendering, Stanford University, March 24, 1998
- Marc Levoy and Pat Hanrahan, Light Field Rendering, SIGGRAPH '96
- Marc Levoy, Expanding the Horizons of Image-Based Modeling and Rendering, SIGGRAPH '97 panel on Image-Based Rendering (IBR)
Demos:
- lifview: Interactive light field viewer (obsolete, see our Flash-based viewer below)
- recorded interactive sessions with our real-time light field viewer
- Explanatory video from our SIGGRAPH 1996 talk (RealVideo, 24 MB)
- A photographic essay describing how we captured a large light field of Michelangelo's statue of Night
- A Flash-based light field viewer that lets you interactively pan around and focus through some of our light fields in your browser
- Lytro's gallery of refocusable light fields, also employing a Flash-based viewer
Available software and data:
- Light Field Authoring and Rendering Package
- An (old) archive of light fields, produced in association with the foregoing package.
- A (new) archive of light fields, several of which were captured using the Stanford Multi-Camera Array.
- ImageStack, an ever-growing, command-line-driven package of image processing and computational photography algorithms
Publicity about the project
- About the refocusable camera
- Der Spiegel Online (October 31, 2005)
- Stanford Report (November 3)
- New Scientist (November 16)
- Wired News (November 21)
- Slashdot (November 21)
- DP Review (November 22)
- About Lytro, Ren Ng's company
- About computational photography in general
- Interview of Marc Levoy by Robert Scoble (October 25, 2007)
- BBC (July 10, 2013)
- About the Camera 2.0 project
Financial support:
If images on this page look dark to you, see our note about gamma correction.
A list of technical papers, with abstracts and pointers to additional information, is also available. Or you can return to the research projects page or our home page.