Thesis & Project Topics

List of topics:

Create the little world

Apply an intelligent tilt-shift transform to images to get a realistic picture of “the little world”.
Use deep learning for depth estimation and apply a blur filter accordingly.
Standalone app or GIMP plugin.
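A minimal sketch of the depth-dependent blur, assuming a depth map normalized to [0, 1] is already available (e.g., from an off-the-shelf monocular depth estimator such as MiDaS); the function name and parameters are illustrative:

```python
import cv2
import numpy as np

def tilt_shift(image, depth, focus_depth=0.5, max_sigma=9.0, n_levels=6):
    """Fake a miniature look: blur each pixel according to how far its
    depth lies from the chosen focal plane. `depth` is (H, W) in [0, 1]."""
    h, w = image.shape[:2]
    # 0 = in focus, 1 = maximally blurred.
    blur_amount = np.clip(np.abs(depth - focus_depth), 0.0, 1.0)
    # Precompute a stack of increasingly blurred copies of the image.
    sigmas = np.linspace(0.0, max_sigma, n_levels)
    stack = [image.astype(np.float32)]
    for sigma in sigmas[1:]:
        stack.append(cv2.GaussianBlur(image, (0, 0), sigma).astype(np.float32))
    stack = np.stack(stack)                       # (n_levels, H, W, C)
    # Pick the nearest blur level per pixel.
    idx = np.rint(blur_amount * (n_levels - 1)).astype(int)
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return stack[idx, rows, cols].astype(image.dtype)
```

A real implementation would blend adjacent blur levels instead of picking the nearest one, to avoid visible banding at depth transitions.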

Eye is the window to the disease

Detect the optic disc in retinal images. Use classical computer-vision methods and compare them with deep-learning results.
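A classical baseline could exploit the fact that the optic disc is typically the brightest, roughly circular region in a fundus image; a sketch using a Hough transform (all thresholds and radii below are illustrative and would need tuning per dataset):

```python
import cv2
import numpy as np

def detect_optic_disc(fundus_bgr, min_r=40, max_r=90):
    """Locate a bright, roughly circular region via a Hough transform.
    Returns (x, y, r) of the best candidate, or None."""
    gray = cv2.cvtColor(fundus_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.medianBlur(gray, 15)               # suppress blood vessels
    circles = cv2.HoughCircles(
        gray, cv2.HOUGH_GRADIENT, dp=2, minDist=gray.shape[0],
        param1=80, param2=30, minRadius=min_r, maxRadius=max_r)
    if circles is None:
        return None
    # Among the candidates, keep the one with the brightest interior.
    best, best_val = None, -1.0
    for x, y, r in np.round(circles[0]).astype(int):
        mask = np.zeros_like(gray)
        cv2.circle(mask, (x, y), r, 255, -1)
        val = cv2.mean(gray, mask=mask)[0]
        if val > best_val:
            best, best_val = (x, y, r), val
    return best
```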

Do the nets see what we see?

Is there a difference between the visual activations in humans and those in deep networks when selecting the category of an object?
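On the network side, intermediate activations can be recorded with forward hooks; a minimal sketch with a pretrained torchvision model (the choice of layers and the random stand-in input are illustrative):

```python
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Record the output of an intermediate stage and of the final pooling.
model.layer3.register_forward_hook(save_activation("layer3"))
model.avgpool.register_forward_hook(save_activation("avgpool"))

with torch.no_grad():
    _ = model(torch.randn(1, 3, 224, 224))   # stand-in for a stimulus image

for name, act in activations.items():
    print(name, tuple(act.shape))
```

Activations captured this way could then be compared against human data (e.g., via representational similarity analysis).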

Explore and improve point cloud sampling options

A set of points in 3D space (a point cloud) is one way of representing the surface shape of an object, e.g. for neural-network-based classification approaches. Depending on the desired application, a good point cloud either covers the whole object surface uniformly, without large clusters of points near each other, or, given a limited/fixed number of points, samples complex parts of the object (edges, curved parts) more densely and omits points on flat surfaces, thus focusing on the “important” parts of the object. Point clouds can be produced by LiDAR scanners or generated from polygonal meshes by sampling the surface.

There are multiple ways to generate a point cloud, e.g. uniformly sampling the object surface or using a low-discrepancy sequence, and post-processing techniques such as farthest point sampling to remove some of the sampled points. The goal of this project is to explore and possibly improve existing mesh-to-point-cloud conversion methods, e.g. by making farthest point sampling more robust to outlier points, using a local-feature-prediction neural network such as PCPNet to select important points, or combining multiple point sampling/selection methods.
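For reference, farthest point sampling itself is a short greedy algorithm; a minimal NumPy sketch (the random start point and the squared-distance bookkeeping are implementation choices):

```python
import numpy as np

def farthest_point_sampling(points, k, seed=0):
    """Greedily pick k points: each new point is the one farthest from
    the already selected set. points: (N, 3) array."""
    rng = np.random.default_rng(seed)
    selected = np.empty(k, dtype=int)
    selected[0] = rng.integers(len(points))
    # Squared distance from every point to its nearest selected point.
    dist = np.sum((points - points[selected[0]]) ** 2, axis=1)
    for i in range(1, k):
        selected[i] = np.argmax(dist)
        new_dist = np.sum((points - points[selected[i]]) ** 2, axis=1)
        dist = np.minimum(dist, new_dist)
    return points[selected]
```

Note how a single outlier is guaranteed to be selected early, since it is far from everything; this is exactly the robustness issue mentioned above.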

Appearance Prediction for regular 3D Printers

Fused Deposition Modeling (FDM) based 3D printers often exhibit very coarse layer heights, where individual layers are visible to the naked eye. Inaccuracies in the printer cause layers to shift slightly, resulting in an uneven surface and an overall deviation from the intended 3D geometry. The glossy plastic reflections on these 3D prints are largely influenced by the direction the printhead moved while extruding the cylindrically shaped material. Previews of these paths in the printer’s slicing software are very rudimentary and serve more of a visualization purpose.
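A natural first step toward a more faithful preview is to recover the extrusion paths directly from the G-code and sweep each segment with a capsule of roughly the nozzle diameter; a hypothetical minimal parser for straight extrusion moves (real slicer output also contains arcs, retractions, and relative coordinates that it ignores):

```python
import re

def extrusion_segments(gcode_lines):
    """Extract straight extrusion moves from G-code as (start, end)
    point pairs in 3D. A move counts as extruding when E increases."""
    x = y = z = e_prev = 0.0
    segments = []
    for line in gcode_lines:
        line = line.split(";")[0].strip()          # strip comments
        if not line.startswith(("G0 ", "G1 ")):
            continue
        words = dict(re.findall(r"([XYZE])([-+]?[\d.]+)", line))
        nx = float(words.get("X", x))
        ny = float(words.get("Y", y))
        nz = float(words.get("Z", z))
        e = float(words.get("E", e_prev))
        if e > e_prev:                             # material was extruded
            segments.append(((x, y, z), (nx, ny, nz)))
        x, y, z, e_prev = nx, ny, nz, e
    return segments
```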

What we are interested in is an accurate rendering that depicts effects such as:

  • accurate geometry including printing-inaccuracies and material melting
  • realistic reflections (trivial)
  • subsurface scattering of filament material

The purpose of this project is to allow for virtual 3D-print experimentation without the need to actually print. A virtual prediction allows for virtual tweaking and automatic optimizations that have been impossible until now. This cuts down the number of iterations until users are happy with their objects and avoids wasted copies that are unusable due to an undesired appearance. This is a severe problem that our collaborators face in their daily industrial work.
This project can be taken as an individual software project (NPRG045), a Bachelor thesis, or a Master thesis.

Towards steerable surface reflectance

The surface finish greatly impacts the appearance of an object. If it is smooth, light is reflected almost mirror-like, whereas roughening a surface makes it appear less glossy and eventually completely matte. Current 3D printing techniques achieve such high resolutions that it might become possible to influence the surface roughness and thus the directionally dependent reflectance.
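This roughness-to-reflectance link is commonly modeled with a microfacet distribution; a minimal sketch of the GGX normal distribution function shows how a single roughness parameter widens or narrows the specular lobe (the roughness-to-alpha mapping below is one common convention, not the only one):

```python
import numpy as np

def ggx_ndf(cos_theta_h, roughness):
    """GGX (Trowbridge-Reitz) normal distribution function.
    cos_theta_h: cosine between the surface normal and the half vector.
    roughness: perceptual roughness in (0, 1]; alpha = roughness ** 2."""
    a2 = roughness ** 4                     # alpha squared
    denom = cos_theta_h ** 2 * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom ** 2)

# A smooth finish concentrates reflected light near the mirror direction,
# a rough one spreads it out:
for r in (0.05, 0.3, 0.8):
    print(r, ggx_ndf(np.cos(np.radians(5.0)), r))
```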

Luongo et al. [2019] demonstrated promising results with an SLA printer. They encoded directional information in the surface by overlaying it with a random noise pattern that was informed by a model of the curing process inside the 3D printer.

We would like to gain a similar understanding of our Prusa SL1 printer and want to extend the amount of control one has over the surface reflectance. In particular, how could subsurface structures filled with air affect the directionality of the reflectance? Can multi-material printing allow for more variety in the effects one can combine on a single surface?

Optical Barcodes Embedded in 3D Prints

Optical barcodes are used all around us: whether to identify products in the supermarket or to link to a webpage from a poster, we use them in our daily lives. As we mostly handle 3D objects, we would naturally like to identify 3D objects directly, without the need for a 2D printed label stuck on top.

Embedding a barcode in a 3D print is easy, but recognition tends to be tricky due to the uneven surface, surface roughness, thin features or holes, and even subsurface light scattering.

Maia et al. [2019] showed how information can be encoded in such a way that it is robust to the mentioned distortions arising from 3D fabrication. They show how the layer-by-layer nature of 3D printing can be used to encode information in the layers without changing the geometry. An accompanying decoding algorithm reads back the original information from a single photo of the object and can even be used to reconstruct the 3D geometry. Check out their presentation at SIGGRAPH 2019 and the video below for more insight.
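To make the idea concrete, here is a toy layer-thickness encoding in the spirit of such methods (the actual scheme of Maia et al. [2019] differs; the thicknesses and threshold are purely illustrative):

```python
def encode_layers(bits, thin=0.10, thick=0.20):
    """Map each bit to a layer height and return the cumulative z values
    of the layer boundaries (a toy stand-in for a slicer's layer table)."""
    z, boundaries = 0.0, [0.0]
    for b in bits:
        z += thick if b == "1" else thin
        boundaries.append(round(z, 4))
    return boundaries

def decode_layers(boundaries, threshold=0.15):
    """Recover the bit string from measured layer-boundary positions,
    e.g. as detected along a vertical line in a photo of the print."""
    return "".join(
        "1" if (b - a) > threshold else "0"
        for a, b in zip(boundaries, boundaries[1:]))

assert decode_layers(encode_layers("101101")) == "101101"
```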

The drawback of their method is that the appearance of the object (color, surface finish) is drastically affected, leaving you with a rather unaesthetic zebra-like object. We would like to know whether one could encode the information in a less appearance-intrusive way by altering surface properties other than color. Can we find a trade-off between decodability and appearance distortion? Is it possible to hide the patterns inside a texture somehow?