Welcome to the CVPR 2020 tutorial on Neural Rendering!
The tutorial has ended; recordings of the full sessions are available below.
Neural rendering is a new class of deep image and video generation approaches that enable explicit or implicit control of scene properties such as illumination, camera parameters, pose, geometry, appearance, and semantic structure. It combines generative machine learning techniques with physical knowledge from computer graphics to obtain controllable and photo-realistic outputs.
This tutorial teaches the fundamentals of neural rendering and summarizes recent trends and applications. Starting with an overview of the underlying graphics, vision and machine learning concepts, we discuss critical aspects of neural rendering approaches. Specifically, our emphasis is on what aspects of the generated imagery can be controlled, which parts of the pipeline are learned, explicit vs. implicit control, generalization, and stochastic vs. deterministic synthesis. Next, we focus on the many important use cases for the described algorithms such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence. Finally, we conclude with a discussion of the social implications of this technology and investigate open research problems.
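A recurring building block in the view-synthesis and free-viewpoint-video talks (e.g. NeRF and Neural Volumes) is differentiable volume rendering: compositing per-sample densities and colors along a camera ray. The function below is a minimal NumPy sketch of this quadrature, with the name `volume_render` our own; it is an illustration of the general technique, not code from any of the presented systems.

```python
import numpy as np

def volume_render(densities, colors, deltas):
    """NeRF-style volume rendering quadrature along a single ray.

    densities: (N,) non-negative sigma values at the N ray samples
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,) distance between consecutive samples
    Returns the composited RGB color and the per-sample weights.
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i = prod_{j < i} (1 - alpha_j): light surviving to sample i
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Contribution weight of each sample, then alpha-composite the colors
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# An opaque red sample in front occludes everything behind it:
rgb, w = volume_render(np.array([1e9, 0.0]),
                       np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]),
                       np.ones(2))
```

Because every step is differentiable, gradients of a photometric loss on `rgb` flow back to the densities and colors, which is what lets these methods fit a scene representation directly from posed images.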
| Time | Talk | Speaker |
|---|---|---|
| 09:00–09:15 | Welcome and Introduction | Michael Zollhöfer |
| 09:15–09:30 | Fundamentals, Taxonomy, Neural Rendering | Ayush Tewari |
| | **Semantic Photo Synthesis and Manipulation** | |
| 09:40–10:00 | Semantic Image Synthesis with Spatially-Adaptive Normalization | Taesung Park |
| | **Facial Reenactment & Body Reenactment** | |
| 10:35–11:00 | Neural Rendering for High-Quality Synthesis of Human Portrait Video and Images | Christian Theobalt |
| 11:00–11:20 | Neural Rendering for Virtual Avatars | Aliaksandra Shysheya |
| | **Novel View Synthesis** | |
| 11:30–11:50 | Neural Rerendering in the Wild | Moustafa Meshry |
| 11:50–12:10 | NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis | Ben Mildenhall |
| | **Learning to Relight** | |
| 13:30–13:50 | Multi-view Relighting Using a Geometry-Aware Network | Julien Philip |
| 13:50–14:10 | Neural Inverse Rendering | Abhimitra Meka |
| | **Free Viewpoint Videos** | |
| 14:20–14:40 | Neural Rendering for Performance Capture | Rohit K. Pandey |
| 14:40–15:00 | Neural Volumes: Learning Dynamic Renderable Volumes from Images | Stephen Lombardi |
| 15:30–15:45 | Social Implications, Open Challenges, Conclusion | Ohad Fried |
The tutorial is based on the Eurographics 2020 state-of-the-art report "State of the Art on Neural Rendering":
Ayush Tewari, Ohad Fried, Justus Thies, Vincent Sitzmann, Stephen Lombardi, Kalyan Sunkavalli, Ricardo Martin-Brualla, Tomas Simon, Jason Saragih, Matthias Nießner, Rohit K. Pandey, Sean Fanello, Gordon Wetzstein, Jun-Yan Zhu, Christian Theobalt, Maneesh Agrawala, Eli Shechtman, Dan B. Goldman, Michael Zollhöfer.
June 15th, 2020, 9:00–16:30 PT.
CVPR 2020 is now a fully virtual conference.