Thesis & Project Topics

List of topic ideas:

Current Topics

Inverse sandblasting for fun and profit

After printing an object on a PolyJet 3D printer, postprocessing is applied to create the final surface finish. Sandblasting and tumbling are common postprocessing techniques. To avoid "eating" into the object geometry during this polishing, the printer can add a padding layer around the object. However, due to the object geometry, the abrasive process removes material in a non-uniform way.

The goal of this thesis is to use standard erosion simulation techniques to find spatially varying, optimal object wraps, such that, after a certain amount of abrasion, the resulting object exactly matches the specified measurements.
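As a first approximation, uniform abrasion can be modelled with morphological erosion, and a naive wrap with the corresponding dilation. A minimal SciPy sketch, assuming one uniform voxel layer is removed per pass (exactly the simplification a spatially varying model would replace):

```python
import numpy as np
from scipy import ndimage

# Toy 2D stand-in for a voxelised object: a filled disc.
n = 64
yy, xx = np.mgrid[:n, :n]
target = (xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2

# Assumption: each abrasion pass removes one uniform voxel layer.
passes = 3
struct = ndimage.generate_binary_structure(2, 1)

# Naive wrap: pad the target by the expected erosion depth, so the
# abraded result still covers the specified geometry everywhere.
padded = ndimage.binary_dilation(target, struct, iterations=passes)
after_abrasion = ndimage.binary_erosion(padded, struct, iterations=passes)

# after_abrasion is the morphological closing of target: it never
# eats into the target, but it may overshoot in concavities -- which
# is exactly where a spatially varying wrap becomes necessary.
```

In 3D the same operators apply voxel-wise; the thesis would replace the uniform erosion with a geometry-dependent abrasion model and optimise the wrap against it.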

Towards steerable surface reflectance

The surface finish greatly impacts the appearance of an object. If it is smooth, light is reflected almost mirror-like, whereas roughening a surface makes it appear more glossy and eventually completely matte. Current 3D printing techniques achieve such high resolutions that it may become possible to influence the surface roughness and thus the directionally dependent reflectance.
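For intuition, this smooth-to-matte transition is commonly described with microfacet theory, where a single roughness parameter controls how tightly the micro-normals cluster around the surface normal. A small sketch using the GGX normal distribution (chosen here purely for illustration):

```python
import numpy as np

def ggx_ndf(cos_theta_h, alpha):
    """GGX / Trowbridge-Reitz microfacet normal distribution."""
    a2 = alpha * alpha
    denom = cos_theta_h ** 2 * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom ** 2)

theta = np.linspace(0.0, np.pi / 2, 91)
for alpha in (0.05, 0.3, 0.8):  # smooth -> rough
    d = ggx_ndf(np.cos(theta), alpha)
    # The peak at the surface normal collapses as roughness grows:
    # reflections spread out from mirror-like towards matte.
    print(f"alpha={alpha:.2f}  D(0)={d[0]:.2f}")
```

Steering the reflectance then amounts to controlling an effective, spatially varying roughness through the printed micro-geometry.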

Luongo et al. [2019] demonstrated promising results in their paper on an SLA printer. They encoded directional information in the surface by overlaying it with a random noise pattern informed by a model of the curing process inside the 3D printer.

We would like to gain a similar understanding of our Prusa SL1 printer and to extend the amount of control one has over the surface reflectance. In particular: how could air-filled subsurface structures affect the directionality of the reflectance? Can multi-material printing allow for a greater variety of effects on a single surface?

Past Topics

Discover the objects in a museum virtual tour

Process a video stream on a mobile phone to detect objects in a museum.
Identification should be performed by a lightweight neural network. The model should offer sufficient accuracy and speed in recognizing different types of exhibits (size, material) under diverse conditions (lighting, location, background, viewing angles). At the same time, it should respect the limitations of the mobile device, particularly its limited computing power, memory, and battery capacity.

already taken

What clouds are we looking at?

Weather webcams continuously take pictures of the sky and landscape so that meteorologists and the general public can get an impression of the current weather situation. They are a great tool for verifying the forecast and spotting local deviations.

For this project we would like to classify the types of clouds visible in the images and determine the current weather situation. Is it sunny? Are we seeing rain clouds?
You will use machine learning (e.g. auto-encoders) and dimensionality-reduction techniques (e.g. t-SNE, PCA) to find clusters in the images; images within one cluster should depict similar clouds or weather conditions. You will also look at self-supervised techniques in order to minimize the amount of manual labelling necessary.
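The clustering step can be sketched on synthetic feature vectors (NumPy only; in the project, the features would come from an auto-encoder and the embedding from t-SNE or PCA):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-image feature vectors of two weather conditions.
sunny = rng.normal(0.0, 0.5, size=(100, 50))
cloudy = rng.normal(3.0, 0.5, size=(100, 50))
features = np.vstack([sunny, cloudy])

# PCA via SVD: project onto the top two principal components.
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedded = centered @ vt[:2].T

# A few k-means iterations suffice on this well-separated toy data.
centroids = embedded[[0, -1]].copy()
for _ in range(10):
    dists = ((embedded[:, None] - centroids[None]) ** 2).sum(-1)
    labels = dists.argmin(axis=1)
    centroids = np.array([embedded[labels == k].mean(0) for k in range(2)])
```

Each cluster would then be inspected and labelled once, instead of annotating all 16+ million images by hand.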

We have a large collection (16+ million) of webcam images from the Czech Hydrometeorological Institute (CHMI) covering 98 locations over 18+ months at 5-minute intervals. This dataset can be a valuable asset to the research community if proper annotations and metadata are available for each image. Your thesis will contribute to this additional knowledge about the images and help researchers train better models on this data in the future.

already taken

In architectural visualization, physically-based rendering allows for accurate prediction of the irradiance levels in different parts of a building. This helps architects, for example, to maximize the use of natural light in their designs. Current rendering systems, however, do not model the dynamics of the human visual system when it comes to light-dark adaptation. This is important in the design of areas with brightness transitions, such as entrance areas and hallways.

For example, consider a highway tunnel: to allow for a more graceful brightness adaptation when entering, the tunnel lights are more powerful around the entrance than further in. The goal of this thesis is the design and implementation of a physiologically correct camera model for light-dark adaptation.
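As a rough illustration, temporal adaptation is often approximated by exponentially smoothing an adaptation luminance that drives the camera exposure. The time constant and the mid-grey mapping below are placeholders, not physiological values:

```python
import numpy as np

def adapt(scene_luminance, dt, l_adapt=None, tau=0.5):
    """One exponential-smoothing step of the adaptation state.

    tau is an adaptation time constant in seconds; real photoreceptors
    have asymmetric constants (dark adaptation is much slower than
    light adaptation), which a full model would distinguish.
    """
    if l_adapt is None:
        return scene_luminance
    return l_adapt + (scene_luminance - l_adapt) * (1.0 - np.exp(-dt / tau))

# Driving into a tunnel: bright exterior, then a dim interior.
frames = [10_000.0] * 5 + [50.0] * 20  # luminance in cd/m^2, 10 fps
l_a, exposures = None, []
for lum in frames:
    l_a = adapt(lum, dt=0.1, l_adapt=l_a)
    exposures.append(0.18 / l_a)  # naive exposure: mid-grey at l_a
```

Instead of jumping instantly at the tunnel entrance, the simulated exposure ramps up over a couple of seconds, which is the behaviour the thesis camera model should reproduce with physiologically grounded parameters.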

already taken

Can GANs learn to generate good textures via differentiable rendering?

Differentiable/inverse rendering can recover input parameters such as the camera position, an object's shape, or its texture from a target image. Using a simple differentiable rasteriser, available e.g. in PyTorch3D, the goal is to train an image-based generative adversarial network (GAN) to produce textures which, after being applied to a known object shape and rendered, yield a plausible appearance of the object. The resulting GAN+rasteriser network can be trained on a large dataset of textured 3D models of furniture.

Ultimately, the network should be able to create a texture for a 3D model that has neither a texture nor a mapping to the 3D object's surface – for this, an existing unwrapping tool will be used.
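For orientation, the generator half of such a pipeline might look like the following PyTorch sketch; the architecture, layer sizes, and latent dimension are arbitrary placeholders, and the differentiable rasteriser and discriminator are omitted:

```python
import torch
from torch import nn

class TextureGenerator(nn.Module):
    """Maps a latent code to a 3x64x64 UV texture in [0, 1]."""

    def __init__(self, z_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.ReLU(),  # 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),     # 8x8
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),      # 16x16
            nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),      # 32x32
            nn.ConvTranspose2d(16, 3, 4, 2, 1), nn.Sigmoid(),    # 64x64
        )

    def forward(self, z):
        return self.net(z[:, :, None, None])

gen = TextureGenerator()
texture = gen(torch.randn(2, 64))
# In the full pipeline, texture would be mapped onto the mesh's UVs,
# rendered by the differentiable rasteriser, and the discriminator's
# loss would backpropagate through the renderer into gen's weights.
```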

(intended as an implementation+experimental thesis)

already taken

Where's the sky?

Task: Build a modular system that takes a high-resolution HDR image and semantically segments it. Existing networks may be modified and reused. The set of semantic classes must include, but is not limited to, sky (possibly clouds), buildings, and vegetation.
Preferred tools: Python or Matlab
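Because high-resolution images rarely fit through a network in one piece, inference is typically run tile by tile and stitched back together. A NumPy-only sketch, with a brightness threshold standing in for the real segmentation network:

```python
import numpy as np

def segment_tiled(image, predict, tile=256, overlap=32):
    """Run a per-tile predictor over a large image and stitch the result.

    Overlapping tiles reduce seams at tile borders; this sketch simply
    overwrites the overlap, a real system would blend or crop it.
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=np.int64)
    step = tile - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            patch = image[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = predict(patch)
    return out

# Dummy "network": bright pixels (class 1, sky) vs everything else (0).
dummy_net = lambda patch: (patch > 0.5).astype(np.int64)

img = np.zeros((1000, 1500))
img[:400] = 1.0  # bright "sky" band across the top
mask = segment_tiled(img, dummy_net)
```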

already taken

Hack a 360 degree camera

In rendering, spherical (360°), high-dynamic-range (HDR) images are used as backgrounds and for lighting 3D objects with realistic light sources. In most cases, outdoor captures are used to mimic realistic sky and sun illumination.

Traditionally, the capture setup for these images consists of a heavy tripod with a panoramic head that rotates a high-end DSLR around a single central point. This gear allows for capturing several pictures in different directions at several exposures, all taken from one point. Later, in a post-processing step, these get stitched into a single panoramic HDR image. We own such a setup and use it frequently to capture images of clouds.
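The merging step can be sketched as the classic weighted average in the spirit of Debevec and Malik, assuming the brackets are already linearised and pixel-aligned (real pipelines also recover the camera response curve and compensate for misalignment):

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge exposure-bracketed linear images into one radiance map.

    Each pixel is divided by its exposure time; a hat-shaped weight
    discounts values near the noise floor and the saturation point.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, peak at 0.5
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)

# Three brackets of a synthetic scene at 1/100, 1/25 and ~1/6 s.
rng = np.random.default_rng(1)
radiance = rng.uniform(0.0, 20.0, size=(32, 32))
times = [0.01, 0.04, 0.16]
brackets = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = merge_hdr(brackets, times)  # recovers the radiance map
```

The app-side challenge of the project is obtaining such bracketed frames from the pocket camera in the first place.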

Unfortunately, all this gear is heavy and bulky to carry around. We are looking for a more portable solution that can be set up quickly and delivers reasonable, if less precise, images. For this we bought a state-of-the-art 360° pocket camera that is easy to set up and can be controlled wirelessly. The factory app does not allow for easy capture of HDR images, though, which is why we started looking for a custom software solution. Initial tests on reverse-engineering the communication protocol showed that it is possible to communicate with the camera using a few tricks.

We would like to develop a platform-independent (mobile/web) app that can talk to the camera and capture time lapses as well as exposure-varying sequences. This would allow the camera to be taken on daily trips, capturing environment images in the background wherever you are. This data supports the machine-learning efforts in our other sky-related projects.
This project is intended as an individual software project (NPRG045).

already taken

Cut the PDF

Task: Build a modular system that takes a PDF of a scanned journal, extracts pictorial and textual data, performs an analysis of the various data types, and saves the results for later statistical analysis.
Preferred tools: Python or Matlab

already taken

Create the little world

Apply an intelligent tilt-shift transform to images to get a realistic picture of "the little world".
Use deep learning for depth estimation and apply a blur filter accordingly.
Standalone app or GIMP plugin.
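A minimal sketch of the depth-dependent blur, with a synthetic depth gradient standing in for the network's depth prediction:

```python
import numpy as np
from scipy import ndimage

def tilt_shift(image, depth, focus_depth, max_sigma=6.0):
    """Blend per-depth Gaussian-blurred copies of the image.

    Pixels near focus_depth stay sharp; the blur grows with the
    depth offset, mimicking the shallow depth of field of a macro
    shot of a miniature scene.
    """
    sigmas = np.linspace(0.0, max_sigma, 5)
    layers = [image if s == 0 else ndimage.gaussian_filter(image, s)
              for s in sigmas]
    offset = np.abs(depth - focus_depth)
    idx = np.clip((offset / offset.max() * (len(sigmas) - 1)).round(),
                  0, len(sigmas) - 1).astype(int)
    return np.choose(idx, layers)

# Fake scene: vertical depth gradient, middle band kept in focus.
img = np.random.default_rng(2).random((100, 100))
depth = np.tile(np.linspace(0.0, 1.0, 100)[:, None], (1, 100))
result = tilt_shift(img, depth, focus_depth=0.5)
```

A production version would blend the blur layers continuously instead of picking the nearest one, to avoid visible banding.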

already taken

Do the nets see what we see?

Is there a difference between the visual activations in humans and in deep networks when selecting the category of an object?

already taken