A clear-sky image is encoded by a neural network that generates a cloudy version of it. The resulting image is then used in a renderer as an environment map for lighting a scene.
Our method generates cloudy sky images for a user-chosen sun position that are readily usable as an environment map in any rendering system. We leverage an existing clear-sky model to produce the input to our neural network, which enhances the sky with clouds, haze, and horizons learned from real photographs.

SkyGAN: Towards Realistic Cloud Imagery for Image Based Lighting

Abstract

Achieving photorealism when rendering virtual scenes in movies or architectural visualizations often depends on providing realistic illumination and background. Typically, spherical environment maps serve both as a natural light source from the Sun and the sky, and as a background with clouds and a horizon. In practice, the input is either a static high-resolution HDR photograph manually captured on location in real conditions, or an analytical clear-sky model that is dynamic but cannot model clouds. Our approach bridges these two limited paradigms: a user can control the sun position and cloud coverage ratio, and generate a realistic-looking environment map for these conditions. It is a hybrid data-driven analytical model based on a modified state-of-the-art GAN architecture, which is trained on matching pairs of physically accurate clear-sky radiance and HDR fisheye photographs of clouds. We demonstrate our results on renders of outdoor scenes under varying time, date, and cloud cover.
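At inference time, the method thus reduces to three steps: evaluate an analytical clear-sky model for the chosen sun position, pass the result through the trained generator conditioned on the desired cloud coverage, and use the generated HDR image as an environment map in a renderer. The following Python sketch only mirrors this data flow under stated assumptions: clear_sky_fisheye and CloudGenerator are hypothetical placeholders standing in for the analytical model and the trained GAN, not the authors' released code.

	# A minimal inference sketch of the pipeline described above, assuming a
	# hypothetical generator interface; clear_sky_fisheye and CloudGenerator
	# are illustrative placeholders, not the authors' released code.
	import numpy as np
	import torch
	import torch.nn as nn

	def clear_sky_fisheye(sun_elevation_deg: float, resolution: int = 256) -> np.ndarray:
	    """Stand-in for an analytical clear-sky model.

	    Returns an HDR fisheye image of shape (H, W, 3) with clear-sky radiance
	    for the chosen sun elevation. A real implementation would evaluate the
	    analytical model per pixel; here we fake a smooth vertical gradient."""
	    y = np.linspace(0.0, 1.0, resolution, dtype=np.float32)[:, None]
	    brightness = np.sin(np.radians(sun_elevation_deg)) * (1.0 - y) + 0.1
	    brightness = np.broadcast_to(brightness, (resolution, resolution))
	    # Tint towards blue as a crude clear-sky colour.
	    return np.stack([brightness * c for c in (0.6, 0.7, 1.0)], axis=-1)

	class CloudGenerator(nn.Module):
	    """Stand-in for the trained GAN generator: maps a clear-sky image plus
	    a cloud-coverage scalar to a cloudy HDR sky image of the same size."""

	    def __init__(self, channels: int = 3):
	        super().__init__()
	        # One extra input channel carries the cloud-coverage conditioning.
	        self.net = nn.Sequential(
	            nn.Conv2d(channels + 1, 16, kernel_size=3, padding=1),
	            nn.ReLU(),
	            nn.Conv2d(16, channels, kernel_size=3, padding=1),
	        )

	    def forward(self, clear_sky: torch.Tensor, coverage: float) -> torch.Tensor:
	        cond = torch.full_like(clear_sky[:, :1], coverage)
	        return self.net(torch.cat([clear_sky, cond], dim=1))

	if __name__ == "__main__":
	    sky = clear_sky_fisheye(sun_elevation_deg=30.0)            # (H, W, 3)
	    x = torch.from_numpy(sky).permute(2, 0, 1).unsqueeze(0)    # (1, 3, H, W)
	    with torch.no_grad():
	        cloudy = CloudGenerator()(x, coverage=0.5)
	    env_map = cloudy.squeeze(0).permute(1, 2, 0).numpy()       # (H, W, 3)
	    # In practice the HDR result would be written out (e.g. as an .exr
	    # file) and loaded as an environment map in the renderer of choice.
	    print(env_map.shape, env_map.dtype)

In the actual system the generator is the modified state-of-the-art GAN architecture trained on matching image pairs described above; the sketch substitutes an untrained toy network purely to show how the clear-sky input and the coverage conditioning enter the model.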

BibTeX Citation

@inproceedings{mirbauer_skygan_2022,
	title = {{SkyGAN}: {Towards} {Realistic} {Cloud} {Imagery} for {Image} {Based} {Lighting}},
	copyright = {Attribution 4.0 International License},
	isbn = {978-3-03868-187-8},
	shorttitle = {{SkyGAN}},
	url = {https://diglib.eg.org:443/xmlui/handle/10.2312/sr20221151},
	doi = {10.2312/sr.20221151},
	language = {en},
	urldate = {2022-07-22},
	booktitle = {Eurographics {Symposium} on {Rendering}},
	publisher = {The Eurographics Association},
	author = {Mirbauer, Martin and Rittig, Tobias and Iser, Tomáš and Křivánek, Jaroslav and Šikudová, Elena},
	year = {2022},
}