Research


 Towards a Principled Kernel Prediction for Spatially Varying BSSRDFs
        @ EG Workshop on Material Appearance 2018
        Oskar Elek and Jaroslav Křivánek


[html]Project page            [pdf]Paper            [pdf][pptx]Conference slides            [bib]BibTeX entry

While the modeling of sub-surface translucency using homogeneous BSSRDFs is an established industry standard, applying the same approach to heterogeneous materials is predominantly heuristic. We propose a more principled methodology for obtaining and evaluating a spatially varying BSSRDF, on the basis of the volumetric sub-surface structure of the simulated material. The key ideas enabling this are a simulation-data-driven kernel for aggregating the spatially varying material parameters, and a structure-preserving decomposition of the subsurface transport into a local and a global component. Our current results show significantly improved accuracy for planar materials with spatially varying scattering albedo, with added discussion about extending the approach for general geometries and full heterogeneity of the material parameters.

One of the key challenges we faced in the texture fabrication project is the efficient prediction of the optimized object's appearance. The prediction algorithm we used -- path tracing -- is of course highly accurate, but comes at a cost that arguably prohibits the use of our method in practical settings (i.e., where the time budget for preparing a single print job is counted in minutes rather than hours). It is, however, the accuracy of the prediction method that yields the high-quality reproduction we have been able to achieve.

Thus the focus of this project is to design a prediction method that achieves visually comparable accuracy at a fraction of the cost of a full, path-traced solution. Our current result is an early prototype, formulated as a spatially varying BSSRDF which can be applied to simulate sub-surface scattering in translucent, heterogeneous materials. Despite its early stage, the method already produces results that are visually accurate (see the image above), and clearly challenges the current state-of-the-art algorithms applicable in this setting.

Realistically simulating how translucent ('waxy') materials look is computationally expensive, because photons can scatter under their surface many times, possibly thousands. Our model, designed specifically for heterogeneous materials (for instance stone, plastic, or organic matter), describes their properties statistically, and can therefore skip a large part of the costly simulation. One way to imagine this is as an 'oracle' which tells the simulation what the material approximately looks like underneath its surface, so that it is no longer necessary to explicitly trace the path of each individual photon.
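
To make the 'oracle' idea a bit more concrete, here is a minimal, purely illustrative sketch (in Python; this is not the paper's simulation-trained kernel): the response between two surface points is driven by an effective albedo obtained by kernel-averaging the spatially varying albedo map between them, fed into a classical dipole-style radial falloff. All names and constants are hypothetical.

    import numpy as np

    def diffusion_profile(r, sigma_tr):
        # Stand-in radial profile, exp(-sigma_tr * r) / r (classical dipole-like falloff)
        return np.exp(-sigma_tr * r) / np.maximum(r, 1e-4)

    def effective_albedo(albedo_map, x0, x1, samples=16):
        # Aggregate the spatially varying albedo along the segment x0 -> x1
        ts = np.linspace(0.0, 1.0, samples)
        pts = x0[None, :] * (1.0 - ts)[:, None] + x1[None, :] * ts[:, None]
        ij = np.clip(pts.astype(int), 0, np.array(albedo_map.shape) - 1)
        return albedo_map[ij[:, 0], ij[:, 1]].mean()

    def bssrdf(albedo_map, x0, x1):
        r = np.linalg.norm(x1 - x0)
        a = effective_albedo(albedo_map, x0, x1)
        sigma_tr = 1.0 / (a + 1e-3)        # toy mapping: darker material, shorter range
        return a * diffusion_profile(r, sigma_tr)

    albedo = np.random.rand(64, 64)        # hypothetical albedo texture
    print(bssrdf(albedo, np.array([10.0, 10.0]), np.array([14.0, 12.0])))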


 A Unified Framework for Efficient BRDF Sampling based on Parametric Mixture Models
        @ EG Symposium on Rendering 2018
        Sebastian Herholz, Oskar Elek, Jens Schindel, Jaroslav Křivánek, and Hendrik Lensch


[html]Project page            [pdf]Paper            [bib]BibTeX entry

Virtually all existing analytic BRDF models are built from multiple functional components (e.g., Fresnel term, normal distribution function, etc.). This makes accurate importance sampling of the full model challenging, and so current solutions only cover a subset of the model's components. This leads to sub-optimal or even invalid proposed directional samples, which can negatively impact the efficiency of light transport solvers based on Monte Carlo integration. To overcome this problem, we propose a unified BRDF sampling strategy based on parametric mixture models (PMMs). We show that for a given BRDF, the parameters of the associated PMM can be defined in smooth manifold spaces, which can be compactly represented using multivariate B-Splines. These manifolds are defined in the parameter space of the BRDF and allow for arbitrary, continuous queries of the PMM representation for varying BRDF parameters, which further enables importance sampling for spatially varying BRDFs. Our representation is not limited to analytic BRDF models, but can also be used for sampling measured BRDF data. The resulting manifold framework enables accurate and efficient BRDF importance sampling with very small approximation errors.

This project was born while we were working on our product sampling paper back in 2016. We were motivated by the fact that enabling product sampling requires precomputing and storing large databases of tabulated parametric mixtures for every material (BRDF) in the scene and each of its configurations. This becomes impractical, or even outright prohibitive, once the scene contains hundreds of materials, or worse, spatially varying ones (see the image above).

The method proposed here answers this problem by creating an analytic meta-fit over the BRDF's hyper-parametric space. As a representation we use parametric mixture models, such as Gaussian or skewed-Gaussian mixtures. This means that for every material and its configuration we can obtain, in closed form, the corresponding parametric mixture, which then serves as a tightly fitting density function for importance sampling.
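
To illustrate what such a closed-form query might look like, the following sketch interpolates hypothetical tabulated mixtures over a single roughness axis (the paper uses multivariate B-splines over the full parameter space) and then draws a directional sample from the resulting mixture. All tables and values below are made up.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical tabulated mixtures at roughness 0.1 and 0.5; each component
    # is (weight, 2D mean, 2D diagonal std) in some warped directional domain.
    mixtures = {
        0.1: [(0.7, np.array([0.0, 0.0]), np.array([0.05, 0.05])),
              (0.3, np.array([0.2, 0.0]), np.array([0.10, 0.10]))],
        0.5: [(0.5, np.array([0.0, 0.0]), np.array([0.20, 0.20])),
              (0.5, np.array([0.3, 0.1]), np.array([0.25, 0.25]))],
    }

    def interpolate_mixture(roughness):
        # Linear stand-in for the B-spline manifold query of the paper
        t = np.clip((roughness - 0.1) / 0.4, 0.0, 1.0)
        return [(wa * (1 - t) + wb * t, ma * (1 - t) + mb * t, sa * (1 - t) + sb * t)
                for (wa, ma, sa), (wb, mb, sb) in zip(mixtures[0.1], mixtures[0.5])]

    def sample(mix):
        # Pick a component proportionally to its weight, then draw from it
        weights = np.array([w for w, _, _ in mix])
        k = rng.choice(len(mix), p=weights / weights.sum())
        _, mean, std = mix[k]
        return rng.normal(mean, std)

    def pdf(mix, x):
        # Mixture density, needed e.g. for multiple importance sampling
        total = sum(w for w, _, _ in mix)
        return sum(w / total * np.prod(np.exp(-0.5 * ((x - m) / s) ** 2)
                                       / (s * np.sqrt(2.0 * np.pi)))
                   for w, m, s in mix)

    mix = interpolate_mixture(0.3)
    x = sample(mix)
    print(x, pdf(mix, x))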

There are several advantages to this approach. First, the parametric mixture model representation is agnostic to the particular features of the fitted BRDF, so that different models (physically based, empirical, or even measured) can be sampled within the same unified method. Second, the representation is compatible with our product importance sampling, enabling high-quality rendering of global illumination in difficult scenes, as intended. Third, the proposed representation is versatile enough that our sampling quality numerically exceeds that of state-of-the-art dedicated sampling methods.

The only shortcoming is the current lack of support for anisotropic BRDFs, caused by their higher dimensionality. We hope to address this in the future by designing a more robust fitting method that scales to such high-dimensional functions.

To make the rendering of realistic images efficient, state-of-the-art simulations based on ray tracing have to adapt their behavior to each particular material used in the scene. This is difficult, since every material can interact with light very differently. Here we propose a way around this problem: to mathematically describe materials by their actual 'visual' features, instead of by their low-level physical properties. As a result, we not only make the simulation faster, but also make life easier for its developers -- they no longer need to optimize every single material individually.


 Real-time Light Transport in Analytically Integrable Quasi-heterogeneous Media
        @ Central European Seminar on CG 2018
        Tomáš Iser, sup. Oskar Elek


[pdf]Paper            [avi]Video            [pdf]Conference slides

Our focus is on the real-time rendering of large-scale volumetric participating media, such as fog. Since a physically correct simulation of light transport in such media is inherently difficult, the existing real-time approaches are typically based on low-order scattering approximations or only consider homogeneous media. We present an improved image-space method for computing light transport within quasi-heterogeneous, optically thin media. Our approach is based on a physically plausible formulation of the image-space scattering kernel and analytically integrable medium density functions. In particular, we propose a novel, hierarchical anisotropic filtering technique tailored to the target environments with inhomogeneous media. Our parallelizable solution enables us to render visually convincing, temporally coherent animations with fog-like media in real time, in a bounded time of only milliseconds per frame.

This project presents an image-space method to simulate physically-based multiple scattering at real-time speeds. It picks up where our previous project on this topic left off. The result is a marked improvement over the original method (improved filtering and support for variable-density media), yet it still runs at around 5 ms/frame on modest hardware (a GTX 660). Tomáš Iser (a student in our group) has done a great job here, and it shows: his BSc thesis received the Dean's award, and the subsequent paper was awarded "Best Paper", "Best Video", and "2nd Best Talk" at CESCG (a non-peer-reviewed student conference).

We look at how far each pixel is from the camera and, based on that, calculate how many tiny water or dust particles the light arriving in that pixel hits. We then weaken and blur that light exactly as much as optical theory tells us to. This is fast enough that it is possible to simulate believable fog or dust in 3D computer games.
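
The 'analytically integrable' part can be illustrated with an exponential height fog, one family of densities that admits a closed-form line integral (not necessarily the exact functions used in the paper): the optical depth along a ray then requires no ray marching at all.

    import numpy as np

    # Assumed medium: sigma(h) = sigma0 * exp(-k * h), an exponential height fog
    def optical_depth(origin_y, dir_y, t_max, sigma0=0.05, k=0.1):
        # Closed-form integral of sigma0 * exp(-k * (oy + t * dy)) over [0, t_max]
        if abs(dir_y) < 1e-6:                  # horizontal ray: density is constant
            return sigma0 * np.exp(-k * origin_y) * t_max
        return (sigma0 * np.exp(-k * origin_y)
                * (1.0 - np.exp(-k * dir_y * t_max)) / (k * dir_y))

    # Transmittance along a ray to a pixel at depth t_max, with no ray marching
    tau = optical_depth(origin_y=2.0, dir_y=0.3, t_max=50.0)
    print(np.exp(-tau))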


 Scattering-aware Texture Reproduction for 3D Printing
        @ ACM SIGGRAPH Asia 2017
        Oskar Elek*, Denis Sumin*, Ran Zhang, Tim Weyrich, Karol Myszkowski, Bernd Bickel, Alexander Wilkie, Jaroslav Křivánek
        (*joint first authors)


[html]Project page            [pdf]Article            [bib]BibTeX entry

Color texture reproduction in 3D printing commonly ignores volumetric light transport (cross-talk) between surface points on a 3D print. Such light diffusion leads to significant blur of details and color bleeding, and is particularly severe for highly translucent resin-based print materials. Given their widely varying scattering properties, this cross-talk between surface points strongly depends on the internal structure of the volume surrounding each surface point. Existing scattering-aware methods use simplified models for light diffusion, and often accept the visual blur as an immutable property of the print medium. In contrast, our work counteracts heterogeneous scattering to obtain the impression of a crisp albedo texture on top of the 3D print, by optimizing for a fully volumetric material distribution that preserves the target appearance. Our method employs an efficient numerical optimizer on top of a general Monte-Carlo simulation of heterogeneous scattering, supported by a practical calibration procedure to obtain scattering parameters from a given set of printer materials. Despite the inherent translucency of the medium, we reproduce detailed surface textures on 3D prints. We evaluate our system using a commercial, five-tone 3D print process and compare against the printer's native color texturing mode, demonstrating that our method preserves high-frequency features well without having to compromise on color gamut.

This project started by asking a modest question: "can we counteract the negative effects of sub-surface scattering on the quality of textured 3D prints?". These effects are typically loss of fine detail and undesired color blending. Two years later, we ended up developing a complete prototype pipeline for color reproduction on photo-polymer 3D printers, touching on the subjects of optical characterization of translucent materials, predictive rendering, color management and separation, nonlinear optimization and appearance fabrication itself.

Why is this such a difficult problem? Well, as usual, for multiple reasons. First, photo-polymer materials are inherently translucent, an essential property for the UV-light curing process; the issue thus cannot be solved by simply making the materials optically denser. Second, the structure of the problem is much harder than that of other, seemingly similar ones (such as image sharpening/enhancement, or cross-talk compensation in stereo projectors/displays). For instance, seen from the surface, the point spread function resulting from the volumetric light transport has a large, long-tailed support, and moreover significant spatial and directional variation. The unwanted effects also happen ex post, that is, only after the object is fabricated; this means that any compensation has to rely on a quantitative prediction of the print's appearance before it is physically realized.

The main achievement of our effort is the demonstration that it is indeed possible to compensate for the unwanted effects of material translucency without modifying the materials themselves -- given, that is, the necessary information as well as computational resources. Many questions still remain unanswered though: beyond the obvious issues of computational efficiency and adaptation of the pipeline to arbitrary geometries, we lack an understanding of human perception as far as heterogeneous translucent objects go. What cues do we use to recognize translucent objects, and how can an optimization minimize that perception?
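
Stripped to its bones, the compensation is a predict-and-counteract loop. In the sketch below, a simple blur stands in for the Monte-Carlo appearance prediction, and a fixed-point iteration stands in for the actual optimizer; all constants are illustrative.

    import numpy as np

    def blur(img, passes=4):
        # Cheap stand-in for the Monte-Carlo prediction: sub-surface scattering
        # acts, to first order, like a wide blur of the surface texture
        for _ in range(passes):
            p = np.pad(img, 1, mode='edge')
            img = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
                   + p[1:-1, 2:] + p[1:-1, 1:-1]) / 5.0
        return img

    def optimize(target, iterations=100, step=0.5):
        tex = target.copy()                    # start from the naive texture
        for _ in range(iterations):
            residual = target - blur(tex)      # predicted reproduction error
            tex = np.clip(tex + step * residual, 0.0, 1.0)   # counteract it
        return tex

    x = np.linspace(0.0, 1.0, 32)
    target = 0.5 + 0.4 * np.sin(8.0 * np.pi * x) * np.ones((32, 1))  # striped target
    tex = optimize(target)
    print(np.abs(target - blur(target)).mean(),   # error of naive texturing
          np.abs(target - blur(tex)).mean())      # error after compensation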

Check the project page for additional materials (detailed description of the measurement methodology, conference slides etc.).

We wanted to improve how 3D printers reproduce textures and colors on the surfaces of manufactured objects. The algorithm we invented simulates how light interacts with virtual 3D prints, and then compensates for any unwanted effects on the surface. These effects are color mismatch and blurring, and they happen because the printed plastic materials are -- and need to be! -- translucent ('waxy').


 Product Importance Sampling for Light Transport Path Guiding
        @ EG Symposium on Rendering 2016
        Sebastian Herholz, Oskar Elek, Jiří Vorba, Hendrik Lensch, Jaroslav Křivánek


[html]Project page            [pdf]Article            [pdf]Conference slides            [bib]BibTeX entry

The efficiency of Monte Carlo algorithms for light transport simulation is directly related to their ability to importance-sample the product of the illumination and reflectance in the rendering equation. Since the optimal sampling strategy would require knowledge about the transport solution itself, importance sampling most often follows only one of the known factors -- BRDF or an approximation of the incident illumination. To address this issue, we propose to represent the illumination and the reflectance factors by the Gaussian mixture model (GMM), which we fit by using a combination of weighted expectation maximization and non-linear optimization methods. The GMM representation then allows us to obtain the resulting product distribution for importance sampling on-the-fly at each scene point. For its efficient evaluation and sampling we perform an up-front adaptive decimation of both factor mixtures. In comparison to state-of-the-art sampling methods, we show that our product importance sampling can lead to significantly better convergence in scenes with complex illumination and reflectance.

Given the recent revival of light path guiding, the natural question to ask is how to utilize the knowledge about the radiance distribution in a scene to achieve optimal path space sampling. As the theory of zero-variance sampling implies, the optimal strategy is to sample according to the full illumination integrand of the rendering equation, which translates to the product of the incident radiance and the material BRDF at any given location.

This paper proposes a solution to achieve just that. We make use of the fact that the Gaussian distribution allows deriving a product distribution in closed form, and that the resulting distribution is again Gaussian. We therefore learn the distributions in the form of Gaussian mixtures, for both the incident radiance and the BRDFs in the simulated scene, and efficiently sample according to these. Introducing only a mild overhead, this sampling strategy is the first practical method to sample proportionally to the full illumination integrand, and is optimal up to the approximation error caused by our discrete representation. The work received the "2nd Best Student Paper" award.
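
The closed-form identity everything rests on is that the product of two Gaussians is again a Gaussian up to a scale factor, so the product of two mixtures is again a mixture. A 1D sketch with the standard formulas (toy values, not taken from the paper):

    import numpy as np

    def gaussian_product(m1, v1, m2, v2):
        # N(m1, v1) * N(m2, v2) = s * N(m, v), with the well-known identities
        v = 1.0 / (1.0 / v1 + 1.0 / v2)       # combined variance
        m = v * (m1 / v1 + m2 / v2)           # combined mean
        # Scale s is the value of N(m1; m2, v1 + v2): how much the lobes overlap
        s = np.exp(-0.5 * (m1 - m2) ** 2 / (v1 + v2)) / np.sqrt(2.0 * np.pi * (v1 + v2))
        return s, m, v

    def mixture_product(mix_a, mix_b):
        # Pairwise products of components; the weights absorb the overlap scale
        out = []
        for wa, ma, va in mix_a:
            for wb, mb, vb in mix_b:
                s, m, v = gaussian_product(ma, va, mb, vb)
                out.append((wa * wb * s, m, v))
        return out

    radiance = [(0.6, 0.0, 0.2), (0.4, 1.5, 0.5)]   # toy incident-radiance mixture
    brdf     = [(1.0, 1.0, 0.3)]                    # toy BRDF lobe
    print(mixture_product(radiance, brdf))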

All modern algorithms that can render photo-realistic images of 3D scenes suffer from some kind of error. For algorithms based on ray tracing, the error is visible as 'noise' in the computed image. The theory of light transport tells us that, to minimize the noise, we need to shoot rays toward where the light comes from and, at the same time, toward where materials reflect most of the light. Our work allows us to do exactly that, optimally, based on well-designed statistical approximations.


 Efficient Methods for Physically-based Rendering of Participating Media
        PhD Thesis @ Max Planck Institut Informatik 2015
        Oskar Elek, sup. Tobias Ritschel and Hans-Peter Seidel


[pdf]Thesis text            [pdf][pptx]Defense slides

This thesis proposes several novel methods for realistic synthesis of images containing participating media. This is a challenging problem, due to the multitude and complexity of ways in which light interacts with participating media, but also an important one, since such media are ubiquitous in our environment and are therefore one of the main constituents of its appearance. The main paradigm we follow is designing efficient methods that provide their user with interactive feedback, but are still physically plausible.

The presented contributions have varying degrees of specialisation and, in a loose connection to that, their resulting efficiency. First, the screen-space scattering algorithm simulates scattering in homogeneous media, such as fog and water, as a fast image filtering process. Next, the amortised photon mapping method focuses on rendering clouds as arguably one of the most difficult media due to their high scattering anisotropy. Here, interactivity is achieved through adapting to certain conditions specific to clouds. A generalisation of this approach is principal-ordinates propagation, which tackles a much wider class of heterogeneous media. The resulting method can handle almost arbitrary optical properties in such media, thanks to a custom finite-element propagation scheme. Finally, spectral ray differentials aim at an efficient reconstruction of chromatic dispersion phenomena, which occur in transparent media such as water, glass and gemstones. This method is based on analytical ray differentiation and as such can be incorporated into any ray-based rendering framework, increasing the efficiency of reproducing dispersion by about an order of magnitude.

All four proposed methods achieve efficiency primarily by utilising high-level mathematical abstractions, building on the understanding of the underlying physical principles that guide light transport. The methods have also been designed around simple data structures, allowing high execution parallelism and removing the need to rely on any sort of preprocessing. Thanks to these properties, the presented work is not only suitable for interactively computing light transport in participating media, but also allows dynamic changes to the simulated environment, all while maintaining high levels of visual realism.

Blood, sweat and tears: the definitive compilation of my doctoral work at MPI. Now with an informal introduction, 30 pages of relevant rendering and optics background, and extended discussion of what follows from all this. Please enjoy and leave a like ;)


 Spectral Ray Differentials
        @ EG Symposium on Rendering 2014
        Oskar Elek, Pablo Bauszat, Tobias Ritschel, Marcus Magnor, Hans-Peter Seidel


[html]Project page            [pdf]Article            [pdf]Derivation            [pdf]Raw images            [pdf]Slides            [bib]BibTeX entry

Light refracted by a dispersive interface leads to beautifully colored patterns that can be rendered faithfully with spectral Monte-Carlo methods. Regrettably, results often suffer from chromatic noise or banding, requiring high sampling rates and large amounts of memory compared to renderers operating in some trichromatic color space. Addressing this issue, we introduce spectral ray differentials, which describe the change of light direction with respect to changes in the spectrum. In analogy with the classic ray and photon differentials, this information can be used for filtering in the spectral domain. Effectiveness of our approach is demonstrated by filtering for offline spectral light and path tracing as well as for an interactive GPU photon mapper based on splatting. Our results show considerably less chromatic noise and spatial aliasing while retaining good visual similarity to reference solutions with negligible overhead in the order of milliseconds.

Caustics are image-like phenomena resulting purely from a variable distribution of light caused by refraction -- for instance under a glass of wine or at the bottom of a swimming pool. While the traditional challenge in the rendering community has been efficiently solving for the light transport as such, we focused on the phenomenon that goes hand-in-hand with refraction: dispersion of light. While we all know dispersion in the form of rainbows, this phenomenon occurs on virtually all refractive objects due to the spectral variability of the refractive index.

Our work here describes a reconstruction approach, which is based on tracing partial derivatives with respect to a change of light frequency; we call these 'spectral differentials', following an already established nomenclature in rendering. Spectral differentials inform the renderer about the direction in which the dispersion predominantly occurs, so that a higher-quality reconstruction can be performed. This is applicable in offline Monte-Carlo rendering, but also in the real-time domain (as demonstrated in this video). This work has received the "Best Student Paper" award at EGSR 2014.
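
To give a feel for what a spectral differential measures, the sketch below approximates the change of the refracted direction with wavelength by a finite difference through the vector form of Snell's law, with a Cauchy dispersion model using roughly fused-silica-like constants (the paper derives the differentials analytically):

    import numpy as np

    def cauchy_ior(lam_nm, A=1.458, B=3540.0):
        # Cauchy dispersion model; A, B roughly fused-silica-like
        return A + B / lam_nm ** 2

    def refract(d, n, eta):
        # Vector form of Snell's law; d is the incident direction (unit),
        # n the surface normal (unit, pointing against d), eta = n1 / n2
        cos_i = -np.dot(d, n)
        sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
        if sin2_t > 1.0:
            return None                        # total internal reflection
        return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

    d = np.array([np.sin(0.5), -np.cos(0.5), 0.0])   # incident ray
    n = np.array([0.0, 1.0, 0.0])

    lam, dlam = 550.0, 1.0                           # nanometres
    t0 = refract(d, n, 1.0 / cauchy_ior(lam))
    t1 = refract(d, n, 1.0 / cauchy_ior(lam + dlam))
    print((t1 - t0) / dlam)    # the spectral differential of the refracted ray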

 Progressive Spectral Ray Differentials
        @ Vision, Modeling and Visualization workshop 2014
        Oskar Elek, Pablo Bauszat, Tobias Ritschel, Marcus Magnor, Hans-Peter Seidel


[pdf]Article            [pdf]Conference slides            [bib]BibTeX entry

Light travelling through refractive objects can lead to beautiful colourful illumination patterns resulting from dispersion on the object interfaces. While this can be accurately simulated by stochastic Monte-Carlo methods, their application is costly and leads to significant chromatic noise. This is greatly improved by applying spectral ray differentials, however, at the cost of introducing bias into the solution. We propose progressive spectral ray differentials, adapting concepts from other progressive Monte-Carlo methods. Our approach takes full advantage of the variance-reduction properties of spectral ray differentials but progressively converges to the correct, unbiased solution in the limit.

An extension of the above work. As with other reconstruction methods that assign a finite support to the phenomenon in question, our basic method, too, suffers from spatial bias. This extension addresses the issue: inspired by progressive photon mapping approaches, we systematically shrink the spatial support defined by the traced differentials, so that in the limit the solution consistently converges to the ground truth. Among other benefits, this enables working with refraction of extreme magnitudes -- for instance produced by virtual metamaterials as shown in the above image (contrasted with a caustic produced by a regular diamond in the leftmost panel). Many animated examples are also available on the project page.
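
The borrowed progressive schedule fits in a few lines: the squared support radius is damped by a factor (i + alpha) / (i + 1) in each pass, so it shrinks to zero in the limit while leaving enough samples per pass to keep the variance down (the alpha value below is illustrative):

    def radius_schedule(r0, alpha=0.7, iterations=10):
        # r_{i+1}^2 = r_i^2 * (i + alpha) / (i + 1): the radii decay to zero,
        # but slowly enough for the accumulated estimate to stay low-variance
        r2 = r0 * r0
        radii = [r0]
        for i in range(1, iterations):
            r2 *= (i + alpha) / (i + 1)
            radii.append(r2 ** 0.5)
        return radii

    print(radius_schedule(4.0))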


 Principal-Ordinates Propagation for Real-Time Rendering of Participating Media
        @ Elsevier Computers and Graphics 2014
        Oskar Elek, Tobias Ritschel, Carsten Dachsbacher, Hans-Peter Seidel


[html]Project page            [pdf]Article            [pdf]Derivations            [avi]Video            [bib]BibTeX entry

Efficient light transport simulation in participating media is challenging in general, but especially if the medium is heterogeneous and exhibits significant multiple anisotropic scattering. We present Principal-Ordinates Propagation, a novel finite-element method that achieves real-time rendering speeds on modern GPUs without imposing any significant restrictions on the rendered participating medium. We achieve this by dynamically decomposing all illumination into directional and point light sources, and propagating the light from these virtual sources in independent discrete propagation domains. These are individually aligned with approximate principal directions of light propagation from the respective light sources. Such decomposition allows us to use a very simple and computationally efficient unimodal basis for representing the propagated radiance, instead of using a general basis such as spherical harmonics. The resulting approach is biased but physically plausible, and largely reduces the rendering artifacts inherent to existing finite-element methods. At the same time it allows for virtually arbitrary scattering anisotropy, albedo, and other properties of the simulated medium, without requiring any precomputation.

One of the long-lasting challenges in rendering has been to efficiently simulate optically dense media with significant anisotropic scattering. These media (comprising clouds, smoke, vapor, various liquids, etc.) have been notoriously difficult to handle even for Monte-Carlo and other offline methods. Most approaches therefore apply similarity theory and treat the media as isotropically scattering, which then leads to a lack of directionally dependent features that often define the appearance of these media (such as the silver lining in clouds).

Here we propose a novel way to handle anisotropic scattering in media. The core idea of "principal-ordinates propagation" is to decompose the incoming illumination into a discrete set of salient directions (similar to instant radiosity methods) and propagate the light energy along these directions (ordinates) separately. This key step enables an efficient representation of both the intensity and the directional distribution of the propagated radiance using the unimodal Henyey-Greenstein distribution (similar to a spherical Gaussian). Using different propagation grid geometries, we can compute volumetric transport from directional and point sources, and use this to propagate environment illumination, local reflections, and even camera importance, achieving a real-time reproduction of difficult anisotropic effects -- see the accompanying video.
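
The unimodal basis can be sketched directly: a Henyey-Greenstein lobe, so that an intensity, a mean direction, and a single anisotropy parameter g summarize the angular radiance distribution in a cell. A simplified, illustrative reading of the representation:

    import numpy as np

    def hg(cos_theta, g):
        # Henyey-Greenstein distribution, normalized over the sphere
        return (1.0 - g * g) / (4.0 * np.pi * (1.0 + g * g - 2.0 * g * cos_theta) ** 1.5)

    def lobe_radiance(intensity, mean_dir, g, query_dir):
        # Radiance leaving a cell toward query_dir under the single-lobe model
        return intensity * hg(np.dot(mean_dir, query_dir), g)

    mean_dir = np.array([0.0, 0.0, 1.0])
    print(lobe_radiance(5.0, mean_dir, 0.8, np.array([0.0, 0.0, 1.0])))  # forward peak
    print(lobe_radiance(5.0, mean_dir, 0.8, np.array([0.0, 1.0, 0.0])))  # side falloff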

 Interactive Light Scattering with Principal-Ordinate Propagation
        @ Graphics Interface 2014
        Oskar Elek, Tobias Ritschel, Carsten Dachsbacher, Hans-Peter Seidel


[pdf]Paper            [pdf]Conference slides            [bib]BibTeX entry

An earlier version of the above work, published at the Graphics Interface conference, where it received the Michael A. J. Sweeney Award for "Best Student Paper". Compared to the above Computers & Graphics article, this version lacks the isotropic residual propagation phase, which is, however, mainly a performance improvement. The term "principal-ordinates propagation" was already coined in this paper.


 Real-Time Screen-Space Scattering in Homogeneous Environments
        @ IEEE Computer Graphics and Applications 2013
        Oskar Elek, Tobias Ritschel, Hans-Peter Seidel


[pdf]Article            [avi]Video            [bib]BibTeX entry

This work presents an approximate algorithm for computing light scattering within homogeneous participating environments in screen space. Instead of simulating the full global illumination in participating media, we model the scattering process by a physically-based point spread function. To do this efficiently, we apply the point spread function by performing a discrete hierarchical convolution in a texture MIP map. We solve the main problem of this approach, illumination leaking, by designing a custom anisotropic incremental filter. Our solution is fully parallel, runs at hundreds of frames per second for usual screen resolutions, and is directly applicable in most existing 2D or 3D rendering architectures.

In this project we tried to approximate light scattering in homogeneous media (most notably water and fog) by an image-space post-processing algorithm (requiring just a depth buffer as additional input). This is possible because, from the user's perspective, the high-level behavior of scattering is similar to blurring (see our 2017 fabrication project). The result is a very fast post-processing procedure that takes only a couple of milliseconds for HD images and generates results comparable to path tracing under the intended conditions. As such, it can be seamlessly integrated into game engines as a better substitute for the standard exponential fog.
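
The hierarchical-convolution trick at the heart of the method, in a nutshell: a wide, long-tailed PSF is applied as a few cheap small-kernel blurs over successively downsampled (MIP) levels instead of one huge 2D kernel. The depth-aware anti-leaking weights are omitted here, and the kernel weights are illustrative:

    import numpy as np

    def blur3(img):
        # Tiny separable 3x3 box blur with edge clamping
        p = np.pad(img, 1, mode='edge')
        img = (p[:-2, 1:-1] + p[1:-1, 1:-1] + p[2:, 1:-1]) / 3.0
        p = np.pad(img, 1, mode='edge')
        return (p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:]) / 3.0

    def hierarchical_psf(img, weights=(0.5, 0.25, 0.15, 0.1)):
        # A sum of cheap blurs over MIP levels approximates one wide PSF
        result = weights[0] * blur3(img)
        level = img
        for w in weights[1:]:
            level = blur3(level)[::2, ::2]        # blur + downsample: next level
            up = level
            while up.shape[0] < img.shape[0]:     # crude nearest upsampling
                up = np.kron(up, np.ones((2, 2)))
            result += w * up[:img.shape[0], :img.shape[1]]
        return result

    img = np.zeros((64, 64)); img[32, 32] = 1.0   # a single bright pixel
    print(hierarchical_psf(img).sum())            # energy roughly preserved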


 Interactive Cloud Rendering Using Temporally-Coherent Photon Mapping
        @ Graphics Interface 2012
        Oskar Elek, Tobias Ritschel, Alexander Wilkie, Hans-Peter Seidel


[pdf]Paper            [pdf]Conference slides            [pdf]Video            [bib]BibTeX entry

This work presents a novel interactive algorithm for simulation of light transport in clouds. Exploiting the high temporal coherence of the typical illumination and morphology of clouds, we build on volumetric photon mapping, which we modify to allow for interactive rendering speeds -- instead of building a fresh irregular photon map for every scene state change, we accumulate photon contributions in a regular grid structure. This is then continuously refreshed by re-shooting only a fraction of the total amount of photons in each frame. To maintain its temporal coherence and low variance, a low-resolution grid is initially used, and is then upsampled to the density field resolution on a physical basis in each frame. We also present a technique to store and reconstruct the angular illumination information by exploiting properties of the standard Henyey-Greenstein function, namely its ability to express anisotropic angular distributions with a single dominating direction. The presented method is physically plausible, conceptually simple, and comparatively easy to implement. Moreover, it operates only on the cloud density field, thus not requiring any precomputation, and handles all light sources typical for the given environment, i.e. those where one of the light sources dominates.

My work on cloud rendering continued after finishing the master thesis and resulted in a paper presented at the Graphics Interface conference in May 2012. The algorithm has been improved in several regards since the thesis and is now much closer to practical usability: in addition to supporting dynamic light sources, it now also supports dynamic media, while requiring no precomputation at all. Link to the ACM Digital Library here.
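
The temporal-accumulation rule itself can be sketched as an exponential moving average over a persistent photon grid, with only a fraction of the photons re-shot and blended in each frame (the refresh fraction below is illustrative):

    import numpy as np

    def accumulate(grid, fresh, refresh_fraction=0.1):
        # Blend this frame's partial photon shot into the running estimate
        return (1.0 - refresh_fraction) * grid + refresh_fraction * fresh

    grid = np.zeros((16, 16, 16))
    for frame in range(30):
        fresh = np.random.rand(16, 16, 16)        # stand-in for re-shot photons
        grid = accumulate(grid, fresh)
    print(grid.mean())                            # converges toward ~0.5 over frames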

 Interactive Cloud Rendering Using Temporally-Coherent Photon Mapping
        @ Elsevier Computers and Graphics 2012
        Oskar Elek, Tobias Ritschel, Alexander Wilkie, Hans-Peter Seidel

[pdf]Article            [bib]BibTeX entry

An extended version of the above work, adding more details about the upsampling, impostor caching, and some other aspects of the method. Link to the ACM Digital Library here.


 Physically-based Cloud Rendering on GPU
        MSc Thesis @ Charles University 2011
        Oskar Elek, sup. Alexander Wilkie


[pdf]Thesis text            [pdf]Defense slides

The rendering of participating media is an interesting and important problem without a simple solution. Yet even among the wide variety of participating media, clouds stand out as an especially difficult case, because their properties make their simulation even harder. The work presented in this thesis attempts to provide a solution to this problem and, moreover, to make the proposed method work at interactive rendering speeds. The main criteria in designing this method were its physical plausibility and maximal utilization of specific cloud properties that help to balance the complex nature of clouds. As a result, the proposed method builds on the well-known photon mapping algorithm, but modifies it in several ways to obtain interactive and temporally coherent results. This is further helped by designing the method in a way that allows its implementation on contemporary GPUs, taking advantage of their sheer, massively parallel computational power. We implement a prototype of the method in an application that renders a single realistic cloud at interactive framerates, and discuss possible extensions of the proposed technique that would allow its use in various practical industrial applications.

My master thesis dealt with the problem of realistic interactive cloud rendering. It is a natural continuation of my previous work on atmospheric rendering. The main point of the thesis is the feasibility of performing an interactive, physically plausible light simulation in clouds, in this case based on a custom temporally coherent photon mapping algorithm.


 Real-time Spectral Scattering in Large-scale Natural Participating Media
        @ Spring Conference on CG 2010
        Oskar Elek, Petr Kmoch


[pdf]Paper            [pdf]Conference slides            [bib]BibTeX entry

Real-time rendering of participating media in nature presents a difficult problem. The reason is that realistic reproduction of such media requires a proper physical simulation in all cases. In our work we focus on real-time rendering of planetary atmospheres and large areas of water. We first formulate a physically-based model for simulation of light transport in these environments. This model accounts for all necessary light contributions -- direct illumination, indirect illumination caused by the scattered light and interreflections between the planetary surface and the atmospheric volume, as well as reflections from the seabed. We adopt the precomputation scheme presented in previous works to precompute the colours of the arbitrarily dense atmosphere and large-scale water surfaces into a set of lookup tables. All these computations are fully spectral, which increases the realism. Finally we utilize these tables in a GPU-based algorithm that is capable of rendering a whole planet with its atmosphere from all viewpoints above the planetary surface. This approach is capable of achieving hundreds of frames per second on today's graphics hardware.

This paper is a summation of my work on atmospheric scattering, and it also adds support for scattering calculations in large water volumes. All computations are now spectral. The work has won the "Best SCCG 2010 presentation" award. Link to the ACM Digital Library here.
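
The flavour of the precomputation, sketched: tabulate the transmittance through an exponential atmosphere over (height, view-zenith angle) once, so that rendering reduces to a table fetch. The constants are illustrative rather than Earth-exact, and the real tables also store the scattered atmospheric colours, not just transmittance:

    import numpy as np

    R_GROUND, R_TOP, H_SCALE = 6360.0, 6460.0, 8.0   # km
    SIGMA = 0.012                                    # extinction at sea level, 1/km

    def transmittance(height, cos_zenith, steps=128):
        # March from radius R_GROUND + height to the top of the atmosphere
        r0 = R_GROUND + height
        b = r0 * cos_zenith
        t_top = -b + np.sqrt(b * b + R_TOP ** 2 - r0 ** 2)  # ray/top-sphere hit
        dt = t_top / steps
        ts = (np.arange(steps) + 0.5) * dt               # midpoint samples
        rs = np.sqrt(r0 ** 2 + ts ** 2 + 2.0 * r0 * ts * cos_zenith)
        tau = SIGMA * np.exp(-(rs - R_GROUND) / H_SCALE).sum() * dt
        return np.exp(-tau)

    # Build the 2D lookup table once; at render time this is a single fetch
    heights = np.linspace(0.0, 60.0, 32)
    mus = np.linspace(0.0, 1.0, 32)                      # upward-facing rays only
    table = np.array([[transmittance(h, mu) for mu in mus] for h in heights])
    print(table.shape, table[0, -1])                     # straight up from the ground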

The method has been used by Niels Fröhling in the Oblivion Graphics Extender (OBGEv3) project to obtain the sky, Sun, and cloud colours. OBGE is an extension/mod of the widely popular TES IV: Oblivion game. Some videos of the preliminary version can be found here and here.


 Layered Materials in Real-time Rendering
        @ Central European Seminar on CG 2010
        Oskar Elek, sup. Alexander Wilkie

(Teaser image: an increasingly patinated copper lion head)

[htm]Project page            [pdf]Paper            [pdf]Conference slides

Today's games and other real-time 3D applications often use only basic empirical models for modelling the appearance of materials and rely on complex geometry and texturing to make them more visually appealing. In this paper we explore the possibilities of bringing more physically plausible models to real-time 3D graphics. We do this by implementing the layered BRDF of Weidlich and Wilkie on the GPU. This model utilizes the well-known Torrance-Sparrow and Oren-Nayar microfacet models. We show how to make this layered model useful for real-time rendering through various optimizations. Then we derive two specialized models based on this basic layered model. These two models attempt to simulate the appearance of metallic car paints and metallic patinas.

This work started as an interest in the layered model of Weidlich and Wilkie (described in the paper Arbitrarily Layered Micro-Facet Surfaces, see here). Since it is capable of producing very nice images, I wanted to find out whether the model can be computed at real-time rates on the GPU. The paper was published at CESCG 2010 (a non-peer-reviewed student conference).
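
The layering idea in miniature: a glossy coating over a diffuse base, with the energy split governed by (Schlick) Fresnel and the transmitted part attenuated on its way through the coating. This is a simplified stand-in, not the Weidlich-Wilkie model itself:

    import numpy as np

    def schlick(cos_theta, f0):
        # Schlick's approximation of Fresnel reflectance
        return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

    def layered_brdf(n, l, v, f0=0.04, gloss=64.0,
                     base_albedo=np.array([0.8, 0.3, 0.1]),
                     absorption=np.array([0.1, 0.2, 0.4])):
        h = (l + v) / np.linalg.norm(l + v)              # half vector
        cos_i, cos_o = np.dot(n, l), np.dot(n, v)
        F = schlick(np.dot(h, l), f0)
        coat = F * (gloss + 2.0) / (2.0 * np.pi) * np.dot(n, h) ** gloss
        # The transmitted part shades the base and is absorbed along both
        # path segments through the coating layer
        T = (1.0 - F) * np.exp(-absorption * (1.0 / cos_i + 1.0 / cos_o))
        return coat + T * base_albedo / np.pi            # glossy coat + diffuse base

    n = np.array([0.0, 0.0, 1.0])
    l = np.array([0.0, 0.6, 0.8])
    v = np.array([0.0, -0.6, 0.8])
    print(layered_brdf(n, l, v))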


 Rendering Parametrizable Planetary Atmospheres with Multiple Scattering in Real-time
        @ Central European Seminar on CG 2009
        Oskar Elek, sup. Petr Kmoch


[pdf]Paper fulltext            [pdf]Conference slides

In the field of physically-based rendering of natural phenomena, rendering of atmospheric light scattering takes a very important place. Real-time rendering of the sky and planetary atmospheres in general is essential for all outdoor computer games, various simulators, virtual worlds and even for animated movies. In our work we present an accurate and fast method for real-time rendering of parametrizable planetary atmospheres. This is achieved by precomputing the complex volumetric scattering equations into a set of compact lookup tables. The correct atmospheric colour values are then fetched from these in a fragment shader during rendering. The method is capable of rendering planetary atmospheres on today's graphics hardware at the speed of hundreds of frames per second.

I broadened the work from my BSc thesis and published the results at CESCG 2009 (a non-peer-reviewed student conference). The work won the "Best CESCG 2009 paper" and "Best CESCG 2009 presentation" awards.


 Rendering Planetary Atmospheres in Real-time
        BSc Thesis @ Charles University 2008
        Oskar Elek, sup. Petr Kmoch


[pdf]Thesis text

In the field of photorealistic rendering of physical phenomena, the rendering of atmospheric light scattering takes a very important place. Real-time rendering of the sky and atmosphere in general is essential for all outdoor computer games, various simulators, virtual worlds, or even for animated movies. It is a very difficult task, but thanks to the advancement of dedicated graphics hardware we can achieve it today. In my thesis I present an accurate and fast method for real-time rendering of planetary atmospheres. This is achieved by precomputing complex single-scattering equations into a set of lookup tables. The correct atmospheric colour values are then fetched from these in the fragment shader. The presented method is then implemented in a program that is capable of rendering a realistic atmosphere at hundreds of FPS.

My first research project was my bachelor thesis, building on the preceding software project on real-time atmosphere rendering.