Generating textures with a GAN

Can GANs learn to generate good textures via differentiable rendering?

Differentiable (inverse) rendering can recover input parameters such as the camera position, an object's shape, or its texture from a target image. Using a simple differentiable rasteriser, available e.g. in PyTorch3D, the goal is to train an image-based generative adversarial network (GAN) to produce textures that, after being applied to a known object shape and rendered, yield a plausible appearance of the object. The combined GAN+rasteriser network can be trained on a large dataset of textured 3D models of furniture.
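
A minimal sketch of the rendering side of this pipeline, using PyTorch3D's standard mesh renderer. `ToyGenerator` is a placeholder for the actual GAN generator and `"chair.obj"` a placeholder path to a UV-unwrapped mesh from the dataset; neither is specified in the brief:

```python
import torch
import torch.nn as nn
from pytorch3d.io import load_obj
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, MeshRasterizer, MeshRenderer, PointLights,
    RasterizationSettings, SoftPhongShader, TexturesUV,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")


class ToyGenerator(nn.Module):
    """Stand-in for the texture GAN generator: latent code -> RGB texture map."""

    def __init__(self, z_dim=128, tex_size=256):
        super().__init__()
        self.tex_size = tex_size
        self.net = nn.Sequential(
            nn.Linear(z_dim, 3 * tex_size * tex_size), nn.Sigmoid()
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, self.tex_size, self.tex_size)


# A UV-unwrapped mesh from the dataset ("chair.obj" is a placeholder path).
verts, faces, aux = load_obj("chair.obj", device=device)

cameras = FoVPerspectiveCameras(device=device)
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras,
        raster_settings=RasterizationSettings(image_size=256),
    ),
    shader=SoftPhongShader(device=device, cameras=cameras,
                           lights=PointLights(device=device)),
)

generator = ToyGenerator().to(device)
z = torch.randn(1, 128, device=device)
texture_map = generator(z).permute(0, 2, 3, 1)  # (1, H, W, 3), channels last

mesh = Meshes(
    verts=[verts],
    faces=[faces.verts_idx],
    textures=TexturesUV(maps=texture_map,
                        faces_uvs=[faces.textures_idx],
                        verts_uvs=[aux.verts_uvs]),
)

# Rendered RGBA image of shape (1, 256, 256, 4); the rasteriser is
# differentiable, so a discriminator loss on this image back-propagates
# into the generator's weights.
image = renderer(mesh)
```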

Ultimately, the network should be able to create a texture for a 3D model that has neither a texture nor a UV mapping of a texture to the object's surface; for such models an existing unwrapping tool will be used to generate the mapping.
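
The brief does not name a specific unwrapping tool; as one illustration, assuming the xatlas-python bindings and trimesh for loading, a UV parametrisation could be generated automatically like this:

```python
import trimesh
import xatlas

# Load a mesh without UVs ("untextured_chair.obj" is a placeholder path)
# and generate a parametrisation automatically.
mesh = trimesh.load("untextured_chair.obj")
vmapping, indices, uvs = xatlas.parametrize(mesh.vertices, mesh.faces)
# vmapping: original-vertex index for each new vertex,
# indices:  re-indexed faces, uvs: per-vertex UV coordinates in [0, 1].
```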

(intended as an implementation+experimental thesis)