SkyGAN generates cloudy sky images, conditioned on a user-chosen sun position, that are readily usable as environment maps in any rendering system. We leverage an existing clear-sky model to produce the input to our neural network, which enhances the sky with clouds, haze and a horizon learned from real photographs.

SkyGAN: Realistic Cloud Imagery for Image-based Lighting

Abstract

Achieving photorealism when rendering virtual scenes in movies or architecture visualizations often depends on providing realistic illumination and background. Typically, spherical environment maps serve both as a natural light source from the Sun and the sky, and as a background with clouds and a horizon. In practice, the input is either a static high-resolution HDR photograph manually captured on location in real conditions, or an analytical clear sky model that is dynamic, but cannot model clouds. Our approach bridges these two limited paradigms: a user can control the sun position and cloud coverage ratio, and generate a realistic-looking environment map for these conditions. It is a hybrid data-driven analytical model based on a modified state-of-the-art GAN architecture, which is trained on matching pairs of physically-accurate clear sky radiance and HDR fisheye photographs of clouds. We demonstrate our results on renders of outdoor scenes under varying time, date and cloud cover. Our source code and a dataset of 39 000 HDR sky images are publicly available at https://github.com/CGGMFF/SkyGAN.
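To use the dataset's hemispherical fisheye captures as spherical environment maps, they can be remapped into the common latitude-longitude layout. The sketch below assumes an equidistant fisheye projection with the zenith at the image center and uses nearest-neighbor sampling; the actual projection of the dataset's captures may differ, so treat this as an illustration, not the paper's pipeline.

```python
import numpy as np

def fisheye_to_latlong(fisheye, out_h=256):
    """Resample an upper-hemisphere fisheye image (assumed equidistant
    projection, zenith at center) into an out_h x 2*out_h lat-long map
    covering the sky hemisphere, via nearest-neighbor lookup."""
    h, w = out_h, 2 * out_h
    fh, fw = fisheye.shape[:2]
    # Spherical coordinates of each output pixel (upper hemisphere only).
    theta = (np.arange(h) + 0.5) / h * (np.pi / 2)   # polar angle from zenith
    phi = (np.arange(w) + 0.5) / w * (2 * np.pi)     # azimuth
    theta, phi = np.meshgrid(theta, phi, indexing="ij")
    # Equidistant fisheye: image radius is proportional to the polar angle.
    r = theta / (np.pi / 2) * (min(fh, fw) / 2 - 1)
    x = (fw / 2 + r * np.cos(phi)).astype(int)
    y = (fh / 2 + r * np.sin(phi)).astype(int)
    return fisheye[np.clip(y, 0, fh - 1), np.clip(x, 0, fw - 1)]
```

The resulting half-height lat-long image can be stacked above a black (or extrapolated) lower hemisphere to form a full environment map for a renderer.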

BibTex Citation

@article{Mirbauer_SkyGAN_CGF,
author = {Mirbauer, Martin and Rittig, Tobias and Iser, Tomáš and Křivánek, Jaroslav and Šikudová, Elena},
title = {SkyGAN: Realistic Cloud Imagery for Image-based Lighting},
journal = {Computer Graphics Forum},
volume = {n/a},
number = {n/a},
pages = {e14990},
keywords = {modelling; natural phenomena, rendering; image-based rendering, rendering; atmospheric effects},
doi = {10.1111/cgf.14990},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.14990},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14990},
abstract = {Achieving photorealism when rendering virtual scenes in movies or architecture visualizations often depends on providing realistic illumination and background. Typically, spherical environment maps serve both as a natural light source from the Sun and the sky, and as a background with clouds and a horizon. In practice, the input is either a static high-resolution HDR photograph manually captured on location in real conditions, or an analytical clear sky model that is dynamic, but cannot model clouds. Our approach bridges these two limited paradigms: a user can control the sun position and cloud coverage ratio, and generate a realistic-looking environment map for these conditions. It is a hybrid data-driven analytical model based on a modified state-of-the-art GAN architecture, which is trained on matching pairs of physically-accurate clear sky radiance and HDR fisheye photographs of clouds. We demonstrate our results on renders of outdoor scenes under varying time, date and cloud cover. Our source code and a dataset of 39 000 HDR sky images are publicly available at https://github.com/CGGMFF/SkyGAN.}
}