Program
Below is the program for Sunday, Monday, Tuesday and Wednesday. The allotted time for MAM presentations is 15 minutes plus 5 minutes for discussion and setup; EGSR presentations take 18 minutes plus 5 minutes for discussion and setup. There is additional information for presenters on the venue page.
Sunday, July 1 (MAM)
9:30 – 12:00 Registration and information desk open
10:30 – 10:40 Welcome and introduction (Holly Rushmeier)
10:40 – 12:00 Session: Measurement and fluorescence
- ICL Multispectral Light Stage: building a versatile LED sphere with off-the-shelf components
- Christos Kampouris and Abhijeet Ghosh
- On the Advancement of BTF Measurement on Site
- Vlastimil Havran, Jan Hosek, Sarka Nemcova and Jiri Cap
- Iso Photographic Rendering
- Philippe Porral, Laurent Lucas, Thomas Muller and Joël Randrianandrasana
- A Simple Diffuse Fluorescent BBRRDF Model
- Alisa Jung, Johannes Hanika, Steve Marschner and Carsten Dachsbacher
12:00 – 12:30 Panel discussion: Why you want predictive rendering to be bi-spectral (Alexander Wilkie)
12:30 – 14:00 Lunch
14:00 – 15:20 Session: Cloth and cars
- Image-based Fitting of Procedural Yarn Models
- Alina Saalfeld, Florian Reibold and Carsten Dachsbacher
- Towards Practical Rendering of Fiber-Level Cloth Appearance Models
- Adrian Alejandre, Carlos Aliaga, Julio Marco, Adrian Jarabo and Adolfo Muñoz
- A Method for Fitting Measured Car Paints to a Game Engine’s Rendering Model
- Tom Kneiphof, Tim Golla, Michael Weinmann and Reinhard Klein
- Perception of car shape orientation and anisotropy alignment
- Jiri Filip and Martina Kolafová
15:30 – 16:00 Coffee break
16:00 – 17:10 Session: Thermal infrared, SVB*F and benchmarking
- Towards Physically-Based Material Appearance in the Thermal Infrared Spectrum: A Short Survey
- Eva Burkard and Laura Haraké
- Deep Dual Loss BRDF Parameter Estimation
- Mark Boss, Fabian Groh, Sebastian Herholz and Hendrik P. A. Lensch
- Kernel Prediction for Spatially Varying BSSRDFs
- Oskar Elek
- Update: Benchmarking Infrastructure
- Pieter Peers
17:15 Closing (Holly Rushmeier)
17:00 – 18:00 Registration and information desk open
18:00 Beer and pretzels on the roof-top terrace (open to attendees of EGSR)
Monday, July 2 (EGSR)
8:00 – 16:00 Registration and information desk open
8:45 – 9:00 Opening
9:00 – 10:30 Session: Acquisition (chair Romain Pacanowski)
- Diffuse-Specular Separation using Binary Spherical Gradient Illumination (EI&I Track)
- Christos Kampouris, Stefanos Zafeiriou and Abhijeet Ghosh
- Approximate svBRDF Estimation From Mobile Phone Video (EI&I Track)
- Rachel Albert, Dorian Chan, Dan Goldman and James O'Brien
- Turning a Digital Camera into an Absolute 2D Tele-Colorimeter (CGF Paper)
- Giuseppe C. Guarnera, Simone Bianco and Raimondo Schettini
- Acquisition and Validation of Spectral Ground Truth Data for Predictive Rendering of Rough Surfaces (CGF Track)
- Olaf Clausen, Ricardo Marroquim and Arnulph Fuhrmann
10:30 – 11:00 Coffee break
11:00 – 12:30 Session: Sampling (chair Philip Dutré)
- Stratified Sampling of Projected Spherical Caps (CGF Track)
- Carlos Ureña and Iliyan Georgiev
- Optimal Sample Weights for Hemispherical Integral Quadratures (CGF Paper)
- Ricardo Marques, Christian Bouville and Kadi Bouatouch
- Progressive multi-jittered sample sequences (CGF Track)
- Per Christensen, Andrew Kensler and Charlie Kilpatrick
- Deep Adaptive Sampling for Low Sample Count Rendering (CGF Track)
- Alexandr Kuznetsov, Nima Khademi Kalantari and Ravi Ramamoorthi
12:30 – 14:00 Lunch
14:00 – 15:30 Session: Rendering techniques I (chair Iliyan Georgiev)
- Matrix Bidirectional Path Tracing (EI&I Track)
- Chakravarty Reddy Alla Chaitanya, Laurent Belcour, Toshiya Hachisuka, Simon Premoze, Jacopo Pantaleoni and Derek Nowrouzezahrai
- PN-Method for Multiple Scattering in Participating Media (EI&I Track)
- David Koerner, Jamie Portsmouth and Wenzel Jakob
- Spectral Gradient Sampling for Path Tracing (CGF Track)
- Victor Petitjean, Pablo Bauszat and Elmar Eisemann
- A Unified Manifold Framework for Efficient BRDF Sampling based on Parametric Mixture Model (EI&I Track)
- Sebastian Herholz, Oskar Elek, Jens Schindel, Jaroslav Křivánek and Hendrik Lensch
15:30 – 16:00 Coffee break
16:00 – 17:00 Keynote by Michael Betancourt
18:00 Welcoming reception with BBQ
Tuesday, July 3 (EGSR)
8:30 – 16:00 Registration and information desk open
9:00 – 10:30 Session: Materials (chair Pieter Peers)
- A Composite BRDF Model for Hazy Gloss (CGF Track)
- Pascal Barla, Romain Pacanowski and Peter Vangorp
- A Physically-based Appearance Model for Special Effect Pigments (CGF Track)
- Jie Guo, Yanjun Chen, Yanwen Guo and Jingui Pan
- Handling Fluorescence in a Uni-directional Spectral Path Tracer (CGF Track)
- Michal Mojzík, Alban Fichet and Alexander Wilkie
- Reproducing Spectral Reflectances From Tristimulus Colours (CGF Paper)
- Hisanari Otsu, Masafumi Yamamoto and Toshiya Hachisuka
10:30 – 11:00 Coffee break
11:00 – 12:30 Session: Image-based techniques (chair Jaroslav Krivanek)
- Deep Painterly Harmonization (CGF Track)
- Fujun Luan, Sylvain Paris, Eli Shechtman and Kavita Bala
- Deep Hybrid Real and Synthetic Training for Intrinsic Decomposition (EI&I Track)
- Sai Bi, Nima Khademi Kalantari and Ravi Ramamoorthi
- Thin Structures in Image-Based Rendering (CGF Track)
- Theo Thonat, Abdelaziz Djelouah, Fredo Durand and George Drettakis
- Exploiting Repetitions for Image-Based Rendering of Facades (CGF Track)
- Simon Rodriguez, Adrien Bousseau, Fredo Durand and George Drettakis
12:30 – 14:00 Lunch
14:00 – 15:30 Session: Rendering techniques II (chair George Drettakis)
- Efficient Caustic Rendering with Lightweight Photon Mapping (CGF Track)
- Pascal Grittmann, Arsène Pérard-Gayot, Philipp Slusallek and Jaroslav Křivánek
- An Improved Multiple Importance Sampling Heuristic for Density Estimates in Light Transport Simulations (EI&I Track)
- Johannes Jendersie and Thorsten Grosch
- Primary sample space path guiding (EI&I Track)
- Jerry Jinfeng Guo, Pablo Bauszat, Jacco Bikker and Elmar Eisemann
- Re-Weighting Firefly Samples for Improved Finite-Sample Monte Carlo Estimates (CGF Paper)
- Tobias Zirr, Johannes Hanika and Carsten Dachsbacher
15:30 – 16:00 Coffee break
16:00 – 17:00 Keynote by Per Christensen
17:15 – 18:15 Townhall meeting
~18:25 Buses to the conference dinner depart
19:00 Conference dinner
Wednesday, July 4 (EGSR)
8:30 – 12:00 Registration and information desk open
9:00 – 10:30 Session: Real-time rendering (chair Martin Eisemann)
- Scalable Real-Time Shadows using Clustering and Metric Trees (EI&I Track)
- François Deves, Frédéric Mora, Lilian Aveneau and Djamchid Ghazanfarpour
- Soft Transparency for Point Cloud Rendering (EI&I Track)
- Patrick Seemann, Gianpaolo Palma, Matteo Dellepiane, Paolo Cignoni and Michael Goesele
- Online Shader Simplification (CGF Track)
- Yazhen Yuan, Rui Wang, Tianlei Hu and Hujun Bao
- On-the-Fly Power-Aware Rendering (CGF Track)
- Yunjin Zhang, Marta Ortín Obón, Victor Arellano, Rui Wang, Diego Gutierrez and Hujun Bao
10:30 – 11:00 Coffee break
11:00 – 11:45 Session: Screen-space methods (chair Pascal Barla)
- Quad-Based Fourier Transform for Efficient Diffraction Synthesis (CGF Track)
- Leonardo Scandolo, Sungkil Lee and Elmar Eisemann
- Screen Space Approximate Gaussian Hulls (EI&I Track)
- Julian Meder and Beat Bruderlin
11:45 Closing remarks
Best Paper Award
The best paper award honors excellent work as well as a good presentation at the conference. The winner is selected by an anonymous jury of four members. Works in both the CGF Track and the EI&I Track are eligible. This year, the winning paper is:
- A Composite BRDF Model for Hazy Gloss (CGF Track)
- Pascal Barla, Romain Pacanowski and Peter Vangorp
The authors received a Titan V that was generously donated by NVIDIA. Since it was a close call, the jury decided to additionally name a runner-up for the award:
- Progressive multi-jittered sample sequences (CGF Track)
- Per Christensen, Andrew Kensler and Charlie Kilpatrick
Congratulations to all of the authors for their outstanding work.
Unfortunately, Matt Pharr cannot attend the conference for personal reasons, and his keynote has been canceled. Instead, there will be a keynote by Per Christensen.
Gambling in the Depths of High-Dimensional Spaces (Michael Betancourt)
- Abstract:
- Integration is a ubiquitous mathematical tool, and modern applications require integration across increasingly high-dimensional spaces. Unfortunately, most of the intuitions that we take for granted in our low-dimensional, routine experiences don't persist in these high-dimensional spaces, which makes the development of scalable computational methodologies and algorithms all the more challenging. In this talk I will discuss the counterintuitive behavior of high-dimensional spaces and the consequences for statistical computation, in particular the unique advantages of Hamiltonian Monte Carlo. (See the short numerical illustration after this keynote block.)
- Bio:
- Michael Betancourt is the principal research scientist with Symplectomorphic, LLC where he develops theoretical and methodological tools to support practical Bayesian inference. He is also a core developer of Stan, where he implements and tests these tools. In addition to hosting tutorials and workshops on Bayesian inference with Stan, he also collaborates on analyses in epidemiology, pharmacology, and physics, amongst others. Before moving into statistics, Michael earned a B.S. from the California Institute of Technology and a Ph.D. from the Massachusetts Institute of Technology, both in physics.
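The abstract above leans on the counterintuitive geometry of high-dimensional spaces. As a purely illustrative sketch (not material from the talk), the short Python snippet below samples a standard Gaussian in increasing dimension and shows that the samples concentrate in a thin shell at radius roughly sqrt(d) away from the mode; the dimensions and sample counts are arbitrary choices made only for this illustration.

```python
# Illustrative only (not from the keynote): concentration of measure
# for a standard d-dimensional Gaussian.
import numpy as np

rng = np.random.default_rng(0)

for d in (1, 10, 100, 1000):
    x = rng.standard_normal((10_000, d))   # 10k samples from N(0, I_d)
    r = np.linalg.norm(x, axis=1)          # distance of each sample from the mode
    print(f"d={d:5d}  mean radius={r.mean():7.2f}  "
          f"std={r.std():5.2f}  sqrt(d)={np.sqrt(d):7.2f}")
```

At d = 1000 the radii cluster tightly around 31.6 with a spread below one, so essentially no probability mass sits near the mode. Strategies built on low-dimensional intuition (evaluate near the peak, take small random-walk steps) therefore miss the region that actually matters, which is the kind of behavior that motivates gradient-informed samplers such as Hamiltonian Monte Carlo.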
Interactive and Off-Line Path Tracing with RenderMan (Per Christensen)
- Abstract:
- RenderMan is a modern extensible and programmable path tracer with many features essential to handling the fiercely complex scenes in movie production. RenderMan has traditionally been focused on off-line rendering of high-quality final movie frames, but has recently been overhauled, targeting interactive rendering during modeling, texturing, layout, animation, and lighting. Path tracing has gone from being a pure research technique to now being the main rendering technique in many production renderers. In this talk Per Christensen will describe the use of path tracing for animated movies and visual effects, and will also describe advanced path tracing techniques such as bidirectional path tracing, progressive photon mapping, and vertex connection and merging (VCM). He will also touch upon current rendering projects at Pixar such as mixed CPU and GPU rendering and high-dimensional sample sequences specifically targeted at path tracing. (See the minimal path-tracing sketch after this keynote block.)
- Bio:
- Per Christensen is a principal software developer in Pixar's RenderMan group in Seattle. His main research interests are efficient ray tracing and global illumination in very complex scenes. He received an M.Sc. degree in electrical engineering from the Technical University of Denmark and a Ph.D. in computer science from the University of Washington. Prior to joining Pixar, he worked at ILM in San Rafael, Mental Images in Berlin, and Square USA in Honolulu. He has movie credits on Pixar films since "Finding Nemo", and has received an Academy Award for his contributions to efficient point-based global illumination and ambient occlusion.
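The abstract above treats path tracing as the workhorse of production rendering. As a minimal, geometry-free sketch (not RenderMan code, and no claim about Pixar's implementation), the Python snippet below mimics the core of the unidirectional path-tracing recursion: gather emission along a random walk, attenuate the path throughput by the albedo, and terminate with Russian roulette. The constants RHO and EMIT and the functions trace_path and render are hypothetical; because every bounce is identical here, the exact answer is the geometric series EMIT / (1 - RHO), which makes the estimator easy to verify.

```python
# Illustrative only: a geometry-free caricature of unidirectional path tracing.
# Every "bounce" hits an emitting Lambertian surface with albedo RHO, so the
# exact radiance is EMIT * (1 + RHO + RHO^2 + ...) = EMIT / (1 - RHO).
import random

RHO = 0.7    # hypothetical surface albedo
EMIT = 1.0   # hypothetical emitted radiance at every bounce

def trace_path(rng, max_depth=64):
    """One path: gather emission, attenuate throughput, Russian roulette."""
    radiance = 0.0
    throughput = 1.0
    for _ in range(max_depth):
        radiance += throughput * EMIT   # emission picked up at this bounce
        q = RHO                         # continuation probability
        if rng.random() >= q:           # Russian roulette termination
            break
        throughput *= RHO / q           # equals 1 here; kept to show the general form
    return radiance

def render(n_paths=200_000, seed=1):
    rng = random.Random(seed)
    return sum(trace_path(rng) for _ in range(n_paths)) / n_paths

if __name__ == "__main__":
    print("path-traced estimate:", render())          # about 3.33
    print("analytic value      :", EMIT / (1.0 - RHO))
```

The bidirectional path tracing, progressive photon mapping, and VCM techniques mentioned in the talk build on this same throughput and Russian roulette machinery; they differ in how path vertices are generated, connected, and weighted.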
CANCELED: Offline and interactive rendering: Converging at last? (Matt Pharr)
- Abstract:
- For decades, offline rendering and interactive rendering have been two largely separate worlds, where different constraints and requirements have led to the adoption of very different rendering techniques. Interactive rendering today is largely the realm of GPU-powered rasterization, where predictable performance and maintaining frame rate are key requirements. In contrast, offline rendering has generally come to adopt CPU-based path tracing, given higher accuracy and realism expectations and a willingness to trade off performance for flexibility. Recent work on the application of deep learning to image filtering and reconstruction in both realms points to the possibility that both types of rendering will converge to follow similar approaches: sparse sampling with path tracing followed by ML-driven reconstruction.
- In this talk I will first discuss some of the constraints that both types of rendering face and how those have influenced them in the past. Next, I'll talk about some of the implications of incorporating ML into the image synthesis process as well as a few directions for further work in this area. Finally, I'll discuss ways in which the offline rendering community can contribute to interactive rendering, assuming this convergence continues.
- Bio:
- Matt Pharr is a distinguished research scientist at NVIDIA Research, where he's working on ray tracing and applications of deep learning to rendering. He previously initiated and led the light field capture and rendering project in the VR group at Google, started Exluna (acquired by NVIDIA) and Neoptica (acquired by Intel), and wrote the ispc compiler. With Wenzel Jakob and Greg Humphreys, he is the author of Physically Based Rendering: From Theory to Implementation, for which he was awarded a Scientific and Technical Academy Award in 2014 for the influence the book has had on rendering for film production. He has a B.S. in Computer Science from Yale and a Ph.D. from Stanford.