Bayesian online regression for adaptive direct illumination sampling
* Petr Vévoda and Ivo Kondapaneni share the first authorship of this work.
Equal-time comparison of path-traced global illumination solutions computed using our learning-based direct illumination sampling method (right) and a baseline light sampling method without learning (left). While both methods start off by sampling lights proportionally to their estimated unoccluded contribution, our method progressively incorporates information about the lights' actual contributions, including their visibility, dramatically reducing image variance.
Direct illumination calculation is an important component of any physically based renderer, with a substantial impact on overall performance. We present a novel adaptive solution for unbiased Monte Carlo direct illumination sampling, based on online learning of the light selection probability distributions. Our main contribution is a formulation of the learning process as Bayesian regression, based on a new statistical model of direct illumination. The net result is a set of regularization strategies that prevent over-fitting and ensure robustness even in the early stages of the calculation, when the observed information is sparse. The regression model captures the spatial variation of illumination, which enables aggregating statistics over relatively large scene regions and, in turn, ensures a fast learning rate. We make the solution scalable by adopting a light clustering strategy from the Lightcuts method, and reduce variance through the use of control variates. As a key design feature, the resulting algorithm is virtually free of any preprocessing, which allows its use in interactive progressive rendering, while the online learning still delivers super-linear convergence.
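The core idea of online-learned light selection can be illustrated with a much-simplified sketch. All names here are hypothetical, and the model is deliberately crude: the paper's actual method is a Bayesian regression over scene regions with light clustering and control variates, none of which is reproduced below. In this sketch, each light's mean contribution gets a prior equal to its estimated unoccluded contribution (counted as a few pseudo-observations, a stand-in for the paper's regularization), observed contributions are folded in online, and lights are drawn proportionally to the posterior mean.

```python
import random

class LightSampler:
    """Illustrative online light selection; not the paper's model.

    Each light's mean contribution is tracked with a simple conjugate-style
    update: the estimated unoccluded contribution serves as the prior,
    weighted as `prior_strength` pseudo-observations, and observed
    (possibly occluded) contributions are accumulated as they arrive.
    """

    def __init__(self, unoccluded_estimates, prior_strength=4.0):
        # Posterior mean starts at the prior (unoccluded) estimate.
        self.sums = [prior_strength * e for e in unoccluded_estimates]
        self.counts = [prior_strength] * len(unoccluded_estimates)

    def posterior_means(self):
        return [s / c for s, c in zip(self.sums, self.counts)]

    def sample(self):
        """Pick a light index with probability proportional to its
        posterior mean contribution; return (index, pdf) so the
        Monte Carlo estimator can divide by the pdf and stay unbiased."""
        means = self.posterior_means()
        total = sum(means)
        probs = [m / total for m in means]
        i = random.choices(range(len(probs)), weights=probs)[0]
        return i, probs[i]

    def update(self, i, observed_contribution):
        # Online update: one more observation for light i.
        self.sums[i] += observed_contribution
        self.counts[i] += 1.0
```

For example, a light that repeatedly turns out to be occluded (observed contribution zero) sees its selection probability shrink as evidence accumulates, while the prior keeps early probabilities close to the unoccluded estimate, mirroring the behavior described in the comparison above.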
Petr Vévoda, Ivo Kondapaneni, Jaroslav Křivánek.
Bayesian online regression for adaptive direct illumination sampling.
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2018), 37(4), 2018.
Many thanks to Ludvík Koutný (a.k.a. rawalanche) for modeling the test scenes. The work was supported by the Charles University Grant Agency project GAUK 1172416, by the grant SVV-2017-260452, and by the Czech Science Foundation grant 16-18964S.