Thesis & Project Topics
A list of topic ideas:
What clouds are we looking at?
Weather webcams continuously take pictures of the sky and landscape so that meteorologists and the general public can get an impression of the current weather situation. They are a great tool for verifying the forecast and spotting local deviations.
For this project we would like to classify the types of clouds that are visible in the images and determine the current weather situation. Is it sunny? Are we seeing rain clouds?
You will be using machine learning (e.g. auto-encoders) and dimensionality reduction techniques (e.g. t-SNE, PCA) to find clusters in the images. These clusters indicate that similar clouds / weather conditions are depicted in the grouped images. You will look at self-supervised techniques in order to minimize the amount of manual labelling necessary.
We have a large collection (16+ million) of webcam images from the Czech Meteorological Service (CHMI) that covers 98 locations over 18+ months at 5-minute intervals. This dataset can be a valuable asset to the research community, provided proper annotations and metadata are available for each image. Your thesis will contribute to this body of knowledge about the images and help researchers train better models with this data in the future.
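As a minimal sketch of the intended pipeline, the following toy example clusters synthetic "webcam images" with PCA and k-means. In the actual project, the feature vectors would come from an auto-encoder and the embedding could use t-SNE; the two synthetic "conditions" (bright vs. dark frames) stand in for real cloud classes.

```python
import numpy as np

def pca(X, n_components=2):
    """Project data onto its top principal components via SVD."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k, iters=50):
    """Lloyd's algorithm with farthest-point initialisation."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two synthetic "weather conditions": bright (sunny) vs. dark (overcast) 8x8 images.
rng = np.random.default_rng(1)
sunny = rng.normal(0.8, 0.05, size=(50, 64))
overcast = rng.normal(0.2, 0.05, size=(50, 64))
X = np.vstack([sunny, overcast])

embedded = pca(X, n_components=2)
labels = kmeans(embedded, k=2)
print(np.bincount(labels))  # → [50 50]
```

Frames of the same condition land in the same cluster; with real data, inspecting a few images per cluster would be enough to label the whole group, which is where the saving in manual annotation comes from.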
Create the little world
Apply an intelligent tilt-shift transform on images to get a realistic picture of “the little world”.
Use deep learning for depth estimation and apply a blur filter accordingly.
Standalone app or GIMP plugin.
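The core of the effect can be sketched in a few lines: blend each pixel between the sharp image and a blurred copy, weighted by its distance from the focus depth. This toy version uses a box blur and a synthetic depth ramp; the real project would substitute a learned depth map and a proper lens blur.

```python
import numpy as np

def box_blur(img, r=2):
    """Separable box blur with edge padding (a stand-in for lens blur)."""
    pad = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    h = sum(pad[:, i:i + img.shape[1]] for i in range(k)) / k   # horizontal pass
    return sum(h[i:i + img.shape[0], :] for i in range(k)) / k  # vertical pass

def tilt_shift(img, depth, focus_depth, strength=1.0):
    """Blend sharp and blurred image; blur weight grows with |depth - focus|."""
    blurred = box_blur(img)
    w = np.clip(np.abs(depth - focus_depth) * strength, 0.0, 1.0)
    return (1 - w) * img + w * blurred

# Toy scene: depth increases with the image row; focus on the middle band.
img = np.random.default_rng(0).random((32, 32))
depth = np.linspace(0.0, 1.0, 32)[:, None] * np.ones((1, 32))
out = tilt_shift(img, depth, focus_depth=0.5, strength=2.0)
```

Rows near the focus depth stay sharp while the top and bottom of the frame are fully blurred, which is exactly the band of sharpness that makes scenes look miniature.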
Do the nets see what we see?
Is there a difference in the visual activation in humans and in deep networks when selecting the category of an object?
Towards steerable surface reflectance
The surface finish greatly impacts the appearance of an object. If it is smooth, light is reflected almost mirror-like, whereas roughening a surface makes it appear glossy and eventually completely matte. Current 3D printing techniques achieve such high resolutions that it might become possible to influence the surface roughness and thus the directionally dependent reflectance.
Luongo et al. demonstrated promising results in their paper on an SLA printer. They encoded directional information in the surface by overlaying it with a random noise pattern that was informed by a model of the curing process inside the 3D printer.
We would like to gain a similar understanding of our Prusa SL1 printer and want to extend the amount of control one has over the surface reflectance. In particular, we want to know how air-filled subsurface structures could affect the directionality of the reflectance, and whether multi-material printing allows for a greater variety of effects to be replicated on a single surface.
Physiologically correct light-dark adaptation
In architecture visualization, physically-based rendering allows for the accurate prediction of irradiance levels in different parts of a building. This helps architects, for example, to maximize the use of natural light in their designs. Current rendering systems, however, do not model the dynamics of the human visual system when it comes to light-dark adaptation. This is important in the design of areas with brightness transitions, such as entrance areas and hallways.
For example, consider a highway tunnel: to allow for a more graceful brightness adaptation when entering, tunnel lights are more powerful around the entrance than they are further in. The goal of this thesis is the design and implementation of a physiologically correct camera model for light-dark adaptation.
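As a first-order illustration of what such a camera model computes, the eye's adaptation level can be modelled as exponentially decaying toward the current scene luminance. The time constant below is purely illustrative; a physiologically correct model would use measured, asymmetric constants for light vs. dark adaptation and separate rod/cone dynamics.

```python
import math

def adapted_luminance(L_target, L_start, t, tau=0.5):
    """First-order (exponential) model of the eye's adaptation level.

    tau is an illustrative time constant in seconds, not a measured value.
    """
    return L_target + (L_start - L_target) * math.exp(-t / tau)

# Entering a tunnel: adaptation decays from daylight toward tunnel luminance.
L_out, L_in = 10000.0, 50.0   # cd/m^2, illustrative values
for t in (0.0, 0.5, 2.0):
    print(f"t={t}s  adaptation level {adapted_luminance(L_in, L_out, t):.1f} cd/m^2")
```

A renderer would use this time-varying adaptation level to set the tone-mapping exposure frame by frame, so that the image briefly appears too dark on entering the tunnel, just as it does to a driver.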
Can GANs learn to generate good textures via differentiable rendering?
Differentiable/inverse rendering can recover input parameters such as the camera position, an object’s shape, or its texture from a target image. Using a simple differentiable rasteriser, available e.g. in PyTorch3D, the goal is to train an image-based generative adversarial network (GAN) to produce textures which, after being applied to a known object shape and rendered, produce a plausible appearance of the object. The resulting GAN+rasteriser network can be trained on a large dataset of textured 3D models of furniture.
Ultimately, the network should be able to create a texture for a 3D model that has neither a texture nor a mapping to the 3D object’s surface – for this an existing unwrapping tool will be used.
(intended as an implementation+experimental thesis)
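The key idea, that image-space losses can drive texture updates through the renderer, can be shown with a deliberately tiny stand-in: here the "rasteriser" is just a fixed linear map from texels to pixels (as in texture filtering), and we recover a texture by gradient descent on the rendered image. PyTorch3D's rasteriser plays the same role at scale, with the GAN generator producing the texture instead of a free variable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "rasteriser": a fixed linear map from 16 texels to 64
# pixels, where each pixel blends two neighbouring texels (like bilinear
# texture filtering would).
n_texels, n_pixels = 16, 64
R = np.zeros((n_pixels, n_texels))
for p in range(n_pixels):
    t = p // 4
    R[p, t] = 0.7
    R[p, (t + 1) % n_texels] = 0.3

target_texture = rng.random(n_texels)
target_image = R @ target_texture          # the "photo" we want to match

# Recover the texture by gradient descent on an image-space loss -- the same
# principle by which a GAN generator is trained through a differentiable renderer.
texture = np.zeros(n_texels)
lr = 1.0
for _ in range(2000):
    residual = R @ texture - target_image
    texture -= lr * 2 * R.T @ residual / n_pixels

print(np.abs(texture - target_texture).max())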
Where's the sky?
Task: Build a modular system that takes a high-resolution HDR image and semantically segments it. Already existing networks can be modified and used. The set of semantic classes must include, but is not limited to, sky (possibly clouds), buildings, and vegetation.
Preferred tools: Python or Matlab
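Since the input images are high-resolution, a practical system will segment them tile by tile and stitch the per-tile label maps back together. The sketch below shows that plumbing with a dummy threshold "network" standing in for a real segmentation model; it assumes image dimensions divisible by the tile size (real code would pad).

```python
import numpy as np

def tile(img, size):
    """Split an image into non-overlapping size x size tiles."""
    H, W = img.shape[:2]
    return [img[y:y + size, x:x + size]
            for y in range(0, H, size) for x in range(0, W, size)]

def stitch(tiles, shape, size):
    """Reassemble per-tile label maps into a full-resolution mask."""
    H, W = shape
    out = np.empty((H, W), dtype=tiles[0].dtype)
    i = 0
    for y in range(0, H, size):
        for x in range(0, W, size):
            out[y:y + size, x:x + size] = tiles[i]
            i += 1
    return out

def fake_segment(tile_img):
    # Stand-in for a pretrained network: bright pixels -> "sky" (1), else 0.
    return (tile_img > 0.5).astype(np.uint8)

img = np.random.default_rng(0).random((128, 128))
labels = stitch([fake_segment(t) for t in tile(img, 32)], img.shape, 32)
```

In the real system, overlapping tiles with blended borders would avoid seams at tile boundaries.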
Hack a 360 degree camera
In rendering, spherical (360°), high dynamic range (HDR) images are used as backgrounds and for lighting 3D objects with a realistic light source. In most cases, outdoor captures are used to mimic realistic sky and sun illumination.
Traditionally, a capture setup for these images consists of a heavy tripod with a panoramic head that rotates a high-end DSLR around its central point. This gear allows for capturing several pictures in different directions at several exposures, all taken from a single point. Later, in a post-processing step, these get stitched into a single panoramic HDR image. We possess such a setup and use it frequently to capture images of clouds.
Unfortunately, all this gear is very heavy and bulky to carry around. We are looking for a more portable solution that can be set up quickly and delivers reasonable, if less precise, images. For this we bought a state-of-the-art 360° pocket camera that is easy to set up and can be controlled wirelessly. The factory app does not allow for easy capture of HDR images though, which is why we started looking for a custom software solution. Initial tests on reverse-engineering the communication protocol showed that it is possible to communicate with the camera using a few tricks.
We would like to develop a platform-independent (mobile/web) app that can talk to the camera and capture time lapses as well as exposure-varying sequences. This would allow the camera to be taken on daily trips and to capture environment images in the background wherever you are. This data supports the machine-learning efforts in our other sky-related projects.
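Once the app has captured an exposure-varying sequence, the frames are merged into an HDR image. The sketch below is a simplified Debevec-style merge that assumes a linear camera response: each frame contributes a radiance estimate (pixel value divided by exposure time), weighted so that under- and over-exposed pixels are ignored.

```python
import numpy as np

def merge_hdr(exposures, times):
    """Weighted average of per-frame radiance estimates (pixel / exposure time),
    down-weighting pixels near under- and over-exposure. Simplified merge
    assuming a linear camera response."""
    num = np.zeros_like(exposures[0], dtype=float)
    den = np.zeros_like(num)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(img - 0.5) * 2.0     # hat weighting in [0, 1]
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)

# Simulate a bracket of a scene with linear response and sensor clipping.
rng = np.random.default_rng(0)
radiance = rng.random((16, 16)) * 4.0
times = [0.1, 0.4, 1.6]                        # exposure times in seconds
bracket = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = merge_hdr(bracket, times)
```

With a real camera, the (nonlinear) response curve would first have to be estimated or provided, but the merging structure stays the same.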
This project is intended as an individual software project (NPRG045).
Cut the PDF
Task: Build a modular system that takes a PDF of a scanned journal, extracts pictorial and textual data, performs an analysis of the various data types, and saves the results for later statistical analysis.
Preferred tools: Python or Matlab
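One possible shape for the requested modularity is a small pipeline that routes extracted page elements to type-specific analysers and collects their results. Everything below is a hypothetical sketch: the `Element` type and the word/pixel analysers are placeholders for real PDF extraction (e.g. via a PDF library) and real text/image analysis.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Element:
    kind: str        # "text" or "image"
    page: int
    payload: object

class Pipeline:
    """Routes extracted page elements to type-specific analysers and
    collects the results for later statistical processing."""

    def __init__(self):
        self.analysers: dict[str, Callable] = {}
        self.results: list[dict] = []

    def register(self, kind, fn):
        self.analysers[kind] = fn

    def run(self, elements):
        for el in elements:
            fn = self.analysers.get(el.kind)
            if fn is not None:
                self.results.append({"page": el.page, "kind": el.kind,
                                     "analysis": fn(el.payload)})
        return self.results

# Hypothetical analysers; a real system would plug in OCR and image statistics.
pipe = Pipeline()
pipe.register("text", lambda s: {"words": len(s.split())})
pipe.register("image", lambda img: {"pixels": len(img)})

results = pipe.run([Element("text", 1, "scanned journal article"),
                    Element("image", 2, [0] * 100)])
```

New data types (tables, captions, figures) can then be supported by registering another analyser, without touching the pipeline itself.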
Appearance Prediction for regular 3D Printers
Fused Deposition Modeling (FDM) based 3D printers often exhibit very coarse layer heights, where individual layers are visible to the naked eye. Inaccuracies in the printer cause layers to shift slightly, resulting in an uneven surface and an overall deviation from the intended 3D geometry. The glossy plastic reflections on these 3D prints are strongly influenced by the direction the printhead moved while extruding the cylindrically shaped material. Previews of these paths in the printer’s slicing software are very rudimentary and serve mainly a visualization purpose.
What we are interested in is an accurate rendering that depicts effects such as:
- accurate geometry including printing-inaccuracies and material melting
- realistic reflections (trivial)
- subsurface scattering of filament material
The purpose of this project is to allow for virtual 3D print experimentation without the need to actually print. A virtual prediction allows for virtual tweaking and automatic optimizations that have been impossible until now. This cuts down on the number of iterations until users are happy with their objects and avoids wasted copies that are unusable due to an undesired appearance. This is a severe problem that our collaborators face in their daily industrial work.
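To see why the layer structure dominates the reflections, consider a 2D cross-section of an FDM wall modelled as stacked semicircular extrusion beads: the surface normal tilts strongly within each layer, which is what produces the anisotropic highlights on real prints. The bead shape and layer height below are illustrative simplifications, not a calibrated printer model.

```python
import numpy as np

# Cross-section of an FDM wall: stacked semicircular beads of layer height h.
h = 0.2                       # layer height in mm (illustrative)
z = np.linspace(0.0, 1.0, 500)
phase = (z % h) / h - 0.5     # position within the current layer, in [-0.5, 0.5)
bulge = np.sqrt(np.maximum(0.25 - phase ** 2, 0.0)) * h   # semicircular bead profile

# Surface-normal tilt (in degrees) from the derivative of the height profile;
# a renderer would perturb shading normals with exactly this kind of profile.
slope = np.gradient(bulge, z)
tilt = np.degrees(np.arctan(slope))
print("max normal tilt:", np.abs(tilt).max())
```

The normals swing from flat at each bead's crest to nearly grazing at the junctions between layers, so even a perfectly specular material shows the familiar banded highlights of an FDM print.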
This project can be taken as an individual software project (NPRG045), Bachelor thesis, or Master thesis.