Product Importance Sampling for Light Transport Path Guiding
Sampling quality for the KITCHENETTE scene containing numerous anisotropic BRDFs. Our product sampling produces a visibly smoother image compared to Vorba et al. at the same rendering time.
The efficiency of Monte Carlo algorithms for light transport simulation is directly related to their ability to importance-sample the product of the illumination and reflectance in the rendering equation. Since the optimal sampling strategy would require knowledge of the transport solution itself, importance sampling most often follows only one of the known factors – the BRDF or an approximation of the incident illumination. To address this issue, we propose to represent both the illumination and the reflectance factors by Gaussian mixture models (GMMs), which we fit using a combination of weighted expectation maximization and non-linear optimization methods. The GMM representation then allows us to obtain the resulting product distribution for importance sampling on the fly at each scene point. For its efficient evaluation and sampling, we perform an up-front adaptive decimation of both factor mixtures. In comparison to state-of-the-art sampling methods, we show that our product importance sampling can lead to significantly better convergence in scenes with complex illumination and reflectance.
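A key property the abstract relies on is that the product of two Gaussian mixtures is itself a Gaussian mixture, available in closed form, so the product distribution can be built and sampled on the fly. As a minimal illustrative sketch (1-D isotropic Gaussians with NumPy; the actual method fits parametric mixtures over directions and applies adaptive decimation, neither of which is shown here):

```python
import numpy as np

def gaussian_pdf(x, mu, var):
    # 1-D normal density
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def gmm_product(w1, mu1, var1, w2, mu2, var2):
    """Closed-form product of two 1-D Gaussian mixtures.

    The product of a K-component and an L-component mixture is a
    K*L-component mixture: each pair of Gaussians multiplies to a
    new Gaussian, scaled by how much the two components overlap.
    """
    ws, mus, vs = [], [], []
    for a in range(len(w1)):
        for b in range(len(w2)):
            # precision (inverse variance) adds; mean is precision-weighted
            var = 1.0 / (1.0 / var1[a] + 1.0 / var2[b])
            mu = var * (mu1[a] / var1[a] + mu2[b] / var2[b])
            # overlap factor: density of one mean under the sum of covariances
            c = gaussian_pdf(mu1[a], mu2[b], var1[a] + var2[b])
            ws.append(w1[a] * w2[b] * c)
            mus.append(mu)
            vs.append(var)
    ws = np.array(ws)
    return ws / ws.sum(), np.array(mus), np.array(vs)  # renormalized weights

def sample_gmm(w, mu, var, n, rng):
    # standard two-step mixture sampling: pick a component, then sample it
    comp = rng.choice(len(w), size=n, p=w)
    return rng.normal(mu[comp], np.sqrt(var[comp]))
```

For example, multiplying the single Gaussians N(0, 1) and N(1, 1) yields N(0.5, 0.5), and a 2-component mixture times a 3-component mixture yields 6 components, which is why the up-front decimation of both factor mixtures matters for keeping the product cheap to evaluate and sample.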
Sebastian Herholz, Oskar Elek, Jiří Vorba, Hendrik Lensch, and Jaroslav Křivánek
We are grateful to Anton Kaplanyan and Johannes Hanika for sharing the Mitsuba-adapted KITCHEN and JEWELRY scenes, Martin Šik for sharing the KITCHENETTE scene and providing code for the reference integrator, Ivo Kondapaneni for testing mixture initialization code, Ludvík Koutný for creating the LIVINGROOM scene, and the anonymous reviewers for their valuable feedback.
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 642841; from the Czech Science Foundation grant 16-18964S; from the grant SVV-2016-260332; from the Charles University Grant Agency project GAUK340915; from the DFG Emmy Noether fellowship Le 1341/1-1; and from an NVIDIA hardware grant.