CSE 274 Selected Topics in Graphics: Sampling and Reconstruction of Visual Appearance: From Denoising to View Synthesis
CSE 274 Topics in Computer Graphics, Fall 2022, Prof. Ravi Ramamoorthi Time and Place: Tu/Th 9:30-10:50, EBU3B 4140
Overview
There are a number of topics in computer graphics which today require sampling and reconstruction of high-dimensional visual appearance datasets, such as high-quality real-time rendering with precomputed light transport, Monte Carlo rendering, light transport acquisition, and view synthesis for virtual reality. Perhaps the most significant recent developments have arisen in the area of Monte Carlo image synthesis via path tracing and related methods. Indeed, one of the perennial goals of computer graphics is creating high-quality images that are indistinguishable from photographs: a goal referred to as photorealism. (While we formally only require CSE 167 or an equivalent introduction to computer graphics as a pre-requisite and do provide background on modern Monte Carlo rendering, I would also encourage you to look at CSE 168 if you have not taken it. The UCSD Online public version of the course is available at: Link to Sign up for free).
In essence, one is sampling a high-dimensional function (8D: 2D image coordinates, 2D positions on an area light for soft shadows, 2D incident directions for global illumination, and 2D lens coordinates for depth of field), then reconstructing or denoising it to produce the final image. The industry has recently moved to physically-based path tracing as its preferred solution, using a low sample count followed by advanced reconstruction and denoising methods. These advances have reduced computation time by one to two orders of magnitude, enabling production rendering with physically-based global illumination, and this approach is now becoming commonplace in real-time rendering applications, arguably accelerating the adoption of physically-based rendering in real time by a decade or more. This course surveys these recent developments over the past 10+ years, in many of which UCSD faculty have played a prominent role. (Note that relatively little background on Monte Carlo rendering is required, since most algorithms treat the actual rendering process as a black box.)
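To make the sampling side concrete, here is a minimal Python sketch (a toy 1D analogue of our own, not code from any course paper) of the Monte Carlo estimation underlying path tracing: a pixel value is an integral over the high-dimensional domain above, estimated by averaging random samples, with noise that shrinks roughly as 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D stand-in for a pixel integral: estimate I = integral of f over [0,1]
# by averaging N random samples. A path tracer does the same averaging over
# a much higher-dimensional domain (lens, area light, incident directions).
def f(x):
    return np.sin(np.pi * x) ** 2          # ground-truth integral = 0.5

def mc_estimate(n_samples):
    x = rng.uniform(0.0, 1.0, n_samples)   # uniform samples, pdf = 1
    return f(x).mean()                     # unbiased Monte Carlo estimator

for n in (16, 256, 4096):
    err = abs(mc_estimate(n) - 0.5)
    print(f"N={n:5d}  |error|={err:.4f}")  # error shrinks roughly as 1/sqrt(N)
```

The denoising methods surveyed in the course start from exactly this kind of low-sample, noisy estimate and reconstruct a clean image, rather than driving N to impractically large values.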
Beyond traditional rendering, sampling and reconstruction of visual appearance is critical for many other applications, such as acquisition of light transport from real-world scenes, light field cameras that capture both spatial and angular distributions of light, and real-time methods based on precomputed light transport. All of these problems involve 4D-8D functions, which are prohibitive to sample by brute force, and for which coherence or sparsity must be exploited to enable sparse sampling and sophisticated reconstruction. The course surveys recent advances, within the last decade, in all of these areas. In particular, we focus on problems of view synthesis and appearance acquisition, where dramatic strides have been made in the last few years.
This body of work is built on mathematical techniques, some of which are recent developments, including multi-dimensional Fourier analysis of light transport spectra, compressive sensing for sparse datasets, and deep learning for feature-based image denoising and neural radiance fields. While no prior background on these areas is required, the course will provide a brief high-level introduction, to the extent these topics are needed for the practical applications we consider. Note that almost all papers within the last 5 years on these topics have involved deep learning, and we encourage taking other deep learning courses in the department. We are hoping for this course to have access to the instructional machine learning cluster, and we may have some older GPUs or AWS credits available; please ask. Beyond this, you are on your own in terms of infrastructure if you seek to do a deep learning project. You are of course also welcome to implement or build on one of the earlier papers that does not require deep learning.
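As one small, concrete taste of this machinery, the numpy sketch below illustrates the positional encoding used by neural radiance fields, as described in the NeRF paper: each coordinate is lifted to sinusoids of geometrically increasing frequency so an MLP can represent high-frequency appearance. (The function name `positional_encoding` and the default of 10 frequencies are our own illustrative choices, not a reference implementation.)

```python
import numpy as np

# Illustrative NeRF-style positional encoding: map each input coordinate p to
# [sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^(L-1) pi p), cos(2^(L-1) pi p)].
def positional_encoding(p, num_freqs=10):
    p = np.atleast_1d(np.asarray(p, dtype=float))
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi      # 2^k * pi, k = 0..L-1
    angles = np.outer(p, freqs)                        # shape (dims, L)
    # pair sin/cos per frequency, then flatten over input dimensions
    return np.stack([np.sin(angles), np.cos(angles)], axis=-1).reshape(-1)

enc = positional_encoding([0.3, -0.7, 0.1])            # a 3D sample point
print(enc.shape)                                       # 3 dims * 10 freqs * 2 = (60,)
```

The encoded vector, rather than the raw 3D coordinate, is what gets fed to the radiance field network; we will see why this matters when we discuss NeRF and its variants later in the course.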
Please note that CSE 274 is a topics course, covering current topics in computer graphics. Course content may change every year or two. This course material was first offered in Winter and Fall 2018, and you are allowed to enrol and get full credit even if you have taken CSE 274 on another occasion with me or other faculty. In fact, the course material this year has also been updated to include considerable emphasis on newer methods for view synthesis, including neural radiance fields, beyond Monte Carlo denoising. The course is basically the same as last year (Fall 2021), but does include a few papers published within the past 12 months. Below are a few example images produced using some of the algorithms and systems we will be discussing.
[Example result images]
Prerequisites
The course is targeted towards MS and PhD students with a knowledge of and interest in computer graphics (at the level of the introductory CSE 167 computer graphics course, or equivalent at another university). CSE 168 is helpful but not required (see link at the top of the page to sign up for the CSE 168 MOOC on UCSD Online). We will cover some of the basics of physically-based Monte Carlo path tracing in this course for those who have not taken CSE 168, but CSE 168 covers that material in much more detail. We encourage you to work through the CSE 168 assignments systematically, at least up to the basic path tracer (homework 3, and ideally homework 4). Note, however, that no prior knowledge of rendering or CSE 168 is required, and that the path tracing assignment is strictly optional, for your own benefit, and not graded; you can also use an existing off-the-shelf black-box path tracer (such as PBRT, Mitsuba or OptiX) for your project, focusing on sampling and reconstruction, or on other topics such as view synthesis. However, if you have not written a path tracer or taken CSE 168 before, and you do end up building the path tracer, please include in your final project submission a link to your full-resolution grader feedback results. We welcome all PhD students working in graphics, vision, and robotics. We are willing to consider students who do not completely fulfil the pre-requisites in exceptional cases, if they have a strong interest in the material. Undergraduates who have taken CSE 167 (and/or other advanced courses such as CSE 168/169) are also most welcome, space permitting (space is not usually a problem).
Relationship to Other Courses
The course is an integral part of the vision and graphics track for undergraduate, MS and PhD students (the instructor can help in getting any relevant permissions for it to count for credit towards the track as needed). It builds on the undergraduate graphics classes CSE 167 [taught by Profs. Chern/myself in the fall/winter] and CSE 168/169/190. If you like this course, you may also be interested in the CSE 291 courses in winter and spring (this year is our strongest ever sequence of graduate graphics courses): machine learning for 3D geometry, taught in the winter by Prof. Hao Su, and spring courses on computational photography (CSE 273), taught by Ben Ochoa, among others.
Course Format and Requirements
The course will consist of lectures on the relevant topics by the instructor, student presentations of papers covering current research in the area, and student projects. A syllabus/schedule is noted below. The grading will depend primarily on the final project. (Loosely, we note this as approximately 60% for the project, 30% for paper presentation and 10% for class participation, with the final project being the most critical aspect.) Students are expected to come to class regularly and participate in discussions, since this is an advanced graduate course.
In general, roughly 1 or 2 paper presentations will be required, depending on the number of students in the course (if the number of students exceeds the number of presentations, some may be done jointly by a group of two students). A project is not required for students taking the course S/U or P/NP; we leave it to your discretion whether to register for 2 or 4 units in this case. An S/U registration is a good option for PhD students and others to read papers on this exciting topic and learn about the area without committing too much effort to a course project, and we encourage you to sign up. Auditors, who simply want to sit in on the course, are also welcome; however, we prefer that you sign up for the course pass/fail instead [this just involves doing one or two paper presentations].
Students taking the course for a letter grade are required to do a project [this may be in groups of 2], give a presentation in class regarding their results, and also submit a final written report. Wide flexibility is available with respect to project topics, provided they relate loosely to the subject matter of the course. We expect that most projects will implement one or more of the algorithms or papers discussed in the course, showing results on sampling and reconstruction of visual appearance. We welcome suggestions from the students on alternative project ideas. The best projects will go beyond the published work in some way, such as trying out an alternative or better approach or trying to develop some variant or more general version of the technique. However, this is not essential; in general, students who fulfil all course requirements including a well-executed project will easily receive an A in the course.
As a potentially easier alternative to the project, we will also accept a well-written summary or tutorial, covering 3 or 4 papers. The best summaries will point out links between the papers not noticed by the original authors and suggest improvements or directions for future research. However, this option is recommended only as a last resort and will generally receive a slightly lower score; we prefer that you do a good project (which may involve understanding a few papers in any case).
Writing a path tracer as per CSE 168, if you have not done so already, is not explicitly graded. But if you do it as part of your project, we encourage you to include documentation in your final project report (or survey report, if doing a survey of papers) so we can consider it when grading. Note that this is not required and no penalty will be levied; the assignment is strictly optional. You are also welcome and encouraged to use existing path tracers (PBRT, Mitsuba, OptiX) as black boxes and focus on sampling and reconstruction, if that is most suitable for your project. Indeed, the philosophy of recent work and of the course is to treat the path tracer as a black box and focus on sampling/reconstruction.
Lectures and Office Hours
The lectures will be in EBU3B (CSE Building) 4140 from 9:30-10:50 on Tuesdays and Thursdays. Students are expected to come to the lectures and participate in discussion. Office hours will be immediately after class from 11-12 in the Professor's office, EBU3B 4118. You may also e-mail for another time to meet if that is not convenient. We also have a Piazza discussion board setup for this course if you are interested. It can be accessed at: http://piazza.com/ucsd/fall2022/cse274
Teaching Assistant
We have a teaching assistant for the course, Kai-En Lin. His e-mail is k2lin@eng.ucsd.edu. His office hours will be in 4150 (CS bldg) Wednesdays and Thursdays from noon-1pm, or you can e-mail him to schedule an appointment.
Topics
Topics to be covered include
- Introduction to BRDFs, Radiometry, Visual Appearance
- Monte Carlo Rendering and Ray/Path Tracing
- Basics of Denoising and Frequency Analysis
- Sampling and Reconstruction for Monte Carlo Rendering
- Feature-Based Denoising for Path Tracing
- Compressive Sensing and Light Transport Reconstruction
- Light Field Imaging and Sparse Interpolation
- Deep Learning for Reconstruction in Rendering and Imaging
- Sparse Appearance Acquisition
- Neural Radiance Fields and View Synthesis
Resources
- Books: There are no books specifically required for this course. Chapters of books may be referenced as reading material and will generally be handed out in class.
- Papers: I have downloaded many of these locally. Note that SIGGRAPH papers are available directly from the ACM digital library.
Outline
The (tentative) course schedule is as follows. In general, you will likely benefit from doing the reading (i.e. the papers assigned for a particular date) before class; it will at least make for more lively discussion.
- Week 0: Introduction and Overview, Course Logistics
- Week 1: Basic Preliminaries on BRDFs, Radiometry, Ray Tracing, Rendering
- Week 2: Monte Carlo Path Tracing, View Synthesis and Appearance Acquisition, Applications
- Week 3: Basic Denoising and Frequency Analysis
- Week 4: Apriori Methods, Presentation of Papers on Denoising, Sampling/Reconstruction
- Week 5: Feature Spaces, Real-Time Rendering
- Week 6: Learning for Real-Time and Offline Rendering Reconstruction
- Week 7: Compressive Sensing, Light Transport and Light Field Acquisition
- Week 8: NeRFs, View Synthesis
- Week 9: NeRF papers
- Week 10: Project Presentations

Sep 22:
- Lecture: Introduction and Overview, Course Logistics PPT PDF
- Reading: Books, notes and links
- M.F. Cohen and J.R. Wallace, 1993. Radiosity and Realistic Image Synthesis, Chapter 2 by Pat Hanrahan. Rendering Concepts. (not posted online, link or photocopy in class)
- P. Dutre, P. Bekaert, K. Bala, 2006. Advanced Global Illumination, 2nd Edition. Chapter 2: The Physics of Light Transport. (not posted online, link or photocopy in class)

Sep 27:
- Lecture: Basic Preliminaries: BRDFs and Radiometry, Ray Tracing PPT PDF
- Reading: Books, notes and links
- Scribed lecture notes from long ago on overview of appearance models and BRDFs from Stanford. Overview and BRDFs
- T. Whitted. An Improved Illumination Model for Shaded Display SIGGRAPH 79 (CACM 23(6): 343-349).

Sep 29:
- Lecture: Rendering Equation and Monte Carlo Integration PPT PDF
- Assignment: Sign up for paper presentations.
- Reading: Papers
- T. Ritschel, C. Dachsbacher, T. Grosch and J. Kautz. The State of the Art in Interactive Global Illumination EG STAR Report, Computer Graphics Forum, Vol 31, No 1, 160-188.
- J. Kajiya. The Rendering Equation SIGGRAPH 86, pp 143-150.
- E. Veach. Robust Monte Carlo Methods for Light Transport Simulation (Chapter on Monte Carlo Integration) Ph.D. Thesis, Chapter 2.
- Hanrahan et al. Stanford Course on Mathematical Methods for Graphics (Part 2, Monte Carlo Methods).

Oct 4:
- Lecture: Monte Carlo Path Tracing PPT PDF
- Reading: Papers, theses, notes
- Henrik Wann Jensen et al. State of the Art in Monte Carlo Ray Tracing SIGGRAPH Course Notes (old 2001).
- Fascione et al. Path Tracing in Production SIGGRAPH Course Notes (new 2017).
- R. Cook. Distributed Ray Tracing SIGGRAPH 84, pp 137-145.

Oct 6:
- Lecture: Applications of Sampling/Reconstruction PPT PDF
- E-mail brief description of proposed projects. Schedule meeting time to discuss projects if necessary.
- Finalize Paper Presentations
- Reading: Books, notes and links
- M. Zwicker et al. Recent Advances in Adaptive Sampling and Reconstruction for Monte Carlo Rendering EG STAR Report
- B. Guo. Progressive Radiance Evaluation Using Directional Coherence Maps SIGGRAPH 98, pp 255-266.
- F. Durand et al. A Frequency Analysis of Light Transport SIGGRAPH 05, pp 1115-1126.
- P. Debevec et al. Acquiring the Reflectance Field of a Human Face SIGGRAPH 00, pp 145-156.
- M. Levoy and P. Hanrahan. Light Field Rendering SIGGRAPH 96, pp 31-42.
- S. Gortler, R. Grzeszczuk, R. Szeliski, M. Cohen. The Lumigraph SIGGRAPH 96, pp 43-54.
- E. Adelson and J. Wang. Single Lens Stereo with a Plenoptic Camera PAMI 14(2), pp 99-106 (Feb 1992).
- R. Ng et al. Light Field Photography with a Hand-Held Plenoptic Camera Stanford CSTR 2005-02.

Oct 11:
- Lecture: Basic Denoising and Frequency Analysis PPT PDF
- Reading: Papers
- J. Chai, X. Tong, S. Chan and H. Shum. Plenoptic Sampling SIGGRAPH 00, pp 307-318.
- K. Egan et al. Frequency Analysis and Sheared Reconstruction for Rendering Motion Blur SIGGRAPH 09, article 93.

Oct 13:
- Lecture: Frequency Analysis (cont'd)
- Finalize Projects
- Papers: First New A-Posteriori Denoising Methods (presented by students)
- T. Hachisuka et al. Multidimensional Adaptive Sampling and Reconstruction for Ray Tracing SIGGRAPH 08, Article 33. Presented by [Ravi]
- R. Overbeck, C. Donner and R. Ramamoorthi. Adaptive Wavelet Rendering SIGGRAPH Asia 09, Article 140. Presented by Xujie Chen

Oct 18:
- Papers: Frequency Analysis for A-Priori Denoising (presented by students)
- C. Soler, et al. Fourier Depth of Field TOG 09, Article 18. Presented by Junliang Liu
- K. Egan et al. Frequency Analysis and Sheared Reconstruction for Rendering Motion Blur SIGGRAPH 09, Article 93. Presented by [Ravi]
- L. Belcour et al. 5D Covariance Tracing for Efficient Defocus and Motion Blur TOG 2013, Article 31. Presented by Wesley Chang

Oct 20:
- Papers: Sampling and Reconstruction for Rendering (presented by students)
- K. Vaidyanathan et al. Layered Light Field Reconstruction for Defocus Blur TOG 2015, Article 23. Presented by Yijian Liu
- J. Lehtinen et al. Temporal Light Field Reconstruction for Rendering Distribution Effects SIGGRAPH 11, Article 55. Presented by Wesley Chang
- F. Rousselle, C. Knaus and M. Zwicker. Adaptive Rendering with Non-Local Means Filtering SIGGRAPH Asia 12, Article 195. Presented by Zane Wang
- For reading only, not presented: M. Zwicker et al. Recent Advances in Adaptive Sampling and Reconstruction for Monte Carlo Rendering EG STAR Report

Oct 25:
- Brief Lecture: Feature-Based Denoising PPT PDF
- Papers: Feature-Based Denoising (presented by students)
- P. Sen and S. Darabi. On Filtering the Noise from the Random Parameters in Monte Carlo Rendering ACM TOG 12, Article 18. Presented by Tina Jin
- T. Li, Y. Wu and Y. Chuang. SURE-based Optimization for Adaptive Sampling and Reconstruction SIGGRAPH Asia 12, article 194. Presented by Venkataram Sivaram
- B. Moon, S. McDonagh, K. Mitchell and M. Gross. Adaptive Polynomial Rendering SIGGRAPH 16, article 40. Presented by Abhinav Gupta

Oct 27:
- Brief Lecture: Real-Time Rendering PPT PDF
- Papers: Sampling and Reconstruction for Real-Time Rendering (presented by students)
- S. Mehta, B. Wang and R. Ramamoorthi. Axis-Aligned Filtering for Interactive Sampled Soft Shadows SIGGRAPH Asia 12, Article 163. Presented by Leo Cao
- L. Yan, S. Mehta, R. Ramamoorthi and F. Durand. Fast 4D Sheared Filtering for Interactive Rendering of Distribution Effects ACM TOG 15, article 7. Presented by Yuelin Dai
- L. Wu, L. Yan, A. Kuznetsov and R. Ramamoorthi. Multiple Axis-Aligned Filters for Rendering of Combined Distribution Effects EGSR 17 (Computer Graphics Forum). Presented by Haolin Lu
- Project Milestone 1 (1-2 pages) and Final Project Proposal Due

Nov 1:
- Lecture: Deep Learning for Rendering and Imaging (not posted online)
- Reading: Papers by former postdoc Nima Kalantari etc.
- N. Kalantari, S. Bako and P. Sen. A Machine Learning Approach for Filtering Monte Carlo Noise SIGGRAPH 15, article 122.
- N. Kalantari, T. Wang and R. Ramamoorthi. Learning-Based View Synthesis for Light Field Cameras SIGGRAPH Asia 16, article 193.
- T. Wang et al. Light Field Video Capture Using a Learning-Based Hybrid Imaging System SIGGRAPH 17, article 133.

Nov 3:
- Papers: Modern Denoising (with Deep Learning) (presented by students)
- S. Bako et al. Kernel-Predicting Convolutional Networks for Denoising Monte Carlo Renderings SIGGRAPH 17, article 97. Presented by Xingyu Chen
- C. Alla-Chaitanya et al. Interactive Reconstruction of Monte Carlo Image Sequences using a Recurrent Denoising Autoencoder SIGGRAPH 17, article 98. Presented by Siddharth Saravanan
- M. Gharbi et al. Sample-based Monte Carlo Denoising Using a Kernel-Splatting Network SIGGRAPH 19, article 125. Presented by Kangming Yu
- J. Back et al. Self-Supervised Post-Correction for Monte Carlo Denoising SIGGRAPH 22. Presented by Mrigankshi Kapoor

Nov 8:
- Lecture: Sparsity, Compressive Sensing and Light Transport Acquisition PPT PDF
- Reading: Papers
- M. Hasan, F. Pellacini and K. Bala. Matrix Row-Column Sampling for the Many-Light Problem SIGGRAPH 07, article 26.
- J. Gu et al. Compressive Structured Light for Recovering Inhomogeneous Participating Media ECCV 08.
- P. Peers et al. Compressive Light Transport Sensing ACM TOG 09, article 3.
- P. Sen and S. Darabi. Compressive Dual Photography EUROGRAPHICS 09.
- Y. Huo et al. Adaptive Matrix Column Sampling and Completion for Rendering Participating Media SIGGRAPH Asia 16, article 167.

Nov 10:
- Project Milestone 2 (same rules as milestone 1, including at least a one- to two-paragraph update with new results and images, and any changes in project direction). Sign up for final project presentations.
- Brief Lecture: Sparse Interpolation in (Light Field) Imaging, View Synthesis and Appearance Acquisition PPT PDF
- Papers: Sparse Interpolation for Rendering and Imaging (presented by students)
- K. Marwah, G. Wetzstein, Y. Bando and R. Raskar. Compressive Light Field Photography Using Overcomplete Dictionaries and Optimized Projections SIGGRAPH 13, article 46. Presented by [Ravi]
- Z. Xu, K. Sunkavalli, S. Hadap and R. Ramamoorthi. Deep Image-Based Relighting from Optimal Sparse Samples SIGGRAPH 18, article 126. Presented by Cheng Wang
- V. Sitzmann et al. DeepVoxels: Learning Persistent 3D Feature Embeddings CVPR 19, 2437-2446. Presented by Chinmay Talegaonkar

Nov 15:
- Brief Lecture: Representations for View Synthesis from MPIs to NeRFs (not posted online)
- Papers on View Synthesis Representations (presented by students)
- T. Zhou et al. Stereo Magnification: Learning view synthesis using multiplane images SIGGRAPH 18, article 65. Presented by Ziyang Fu
- V. Sitzmann, M. Zollhofer and G. Wetzstein. Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations NeurIPS 19. Presented by Mustafa Yaldiz
- B. Mildenhall et al. NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis ECCV 20. Presented by Liwen Wu

Nov 17:
- Papers on NeRFs (presented by students)
- P. Srinivasan et al. NeRV: Neural Reflectance and Visibility Fields for Relighting and View Synthesis CVPR 21, 7495-7504. Presented by Mustafa Yaldiz
- M. Niemeyer and A. Geiger. GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields CVPR 21, 11453-11464 (best paper award winner at CVPR 21). Presented by Chuhao Chen
- S. Wizadwongsa et al. NeX: Real-time View Synthesis with Neural Basis Expansion CVPR 21, 8534-8543. Presented by Liwen Wu
- (Interest only, not presented): see Dellaert website

Nov 22:
- Papers on NeRFs (presented by students)
- J. Barron et al. Mip-NeRF: A Multiscale Representation for Anti-Aliasing Neural Radiance Fields ICCV 21, 5855-5864 (best paper honorable mention at ICCV 21). Presented by Mohan Li
- D. Verbin et al. Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields CVPR 22, 5491-5500 (best student paper honorable mention at CVPR 22). Presented by Vishal Vinod
- T. Muller, A. Evans, C. Schied and A. Keller. Instant Neural Graphics Primitives with a Multiresolution Hash Encoding Best paper at SIGGRAPH 22. Presented by Pranav Gangwar
- (Interest only, not presented): see Dellaert website

Nov 29:
- Project Presentations

Dec 1:
- Project Presentations
- Final Reports due Dec 6
Last modified: Thu Oct 6 16:39:20 PDT 2022