Bachelor, Master and ISP topics at CGG in 2020

This is a sorted list of topics from different fields that we currently do research in. Please see the tags for an assessment of the project's difficulty and whether it suits what you are looking for. Note that this is by no means a complete list, but it should give you a good idea of what we are interested in. Feel free to approach us with ideas of your own or suggested changes to the topics, and we will see how we can accommodate your wishes.

3D Printing

LayerCodes
Optical Barcodes Embedded in 3D Prints

Optical barcodes are used all around us: whether identifying products in the supermarket or linking to a webpage from a poster, we use them in our daily life. Since we mostly handle 3D objects, we would naturally like to identify them directly, without a 2D printed label stuck on top.
Embedding a barcode in a 3D print is easy, but recognition tends to be tricky due to the uneven surface, surface roughness, thin features or holes, and even subsurface light scattering.
Maia et al. [2019] showed how information can be encoded in a way that is robust to the distortions arising from 3D fabrication. They exploit the layer-by-layer nature of 3D printing to encode information in the layers without changing the geometry. An accompanying decoding algorithm reads the original information back from a single photo of the object and can even be used to reconstruct the 3D geometry. Check out their presentation at SIGGRAPH 2019 and the video below for more insight.
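The core idea can be illustrated with a toy sketch (not the authors' actual scheme; all names and parameters here are hypothetical): each bit of the payload controls the shade of a run of consecutive print layers, and decoding majority-votes the observed shades back into bits.

```python
def encode(bits, layers_per_bit=4):
    # each bit sets the shade of a run of consecutive print layers
    return [1.0 if b == "1" else 0.0 for b in bits for _ in range(layers_per_bit)]

def decode(shades, layers_per_bit=4):
    # majority-vote each run of layers back into one bit; the redundancy
    # tolerates a few misread layers (rough surface, scattering)
    bits = []
    for i in range(0, len(shades), layers_per_bit):
        group = shades[i:i + layers_per_bit]
        ones = sum(s > 0.5 for s in group)
        bits.append("1" if ones * 2 >= len(group) else "0")
    return "".join(bits)
```

The redundancy across layers is what buys robustness: flipping a single layer's observed shade does not change the decoded bit.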

The drawback of their method is that the appearance of the object (color, surface finish) is drastically affected, leaving you with a rather unaesthetic zebra-like object. We would like to know whether the information could be encoded less intrusively by altering surface properties other than color. Can we find a trade-off between decodability and appearance distortion? Is it possible to hide the patterns inside a texture somehow?
Contact
Tobias Rittig
CGG Group Member
Homepage Email
Keywords:
RESEARCH
3D PRINTING
HARDWARE
MASTER

Microstructure 3D Printing
Towards steerable surface reflectance

Project teaser
Luongo et al. [2019] show control over surface reflectance under two viewing directions for three objects. A glossy and matte sphere, a glossy and matte bunny and directionally encoded information on a planar slab.
The surface finish greatly impacts the appearance of an object. If it is smooth, light is reflected almost mirror-like, whereas roughening a surface makes it appear more glossy and eventually completely matte. Current 3D printing techniques achieve such high resolutions that it may become possible to influence surface roughness and thus the directionally dependent reflectance.
Luongo et al. [2019] demonstrated promising results on an SLA printer. They encoded directional information in the surface by overlaying it with a random noise pattern informed by a model of the curing process inside the 3D printer.
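The link between micro-scale height noise and macro-scale gloss can be sketched numerically (a simplification, not Luongo et al.'s model): the standard deviation of a random heightfield controls the average surface slope, which in microfacet terms drives how matte the finish appears.

```python
import numpy as np

def rough_heightfield(n=256, sigma=0.05, seed=0):
    # random per-texel height perturbations; sigma controls roughness
    return np.random.default_rng(seed).normal(0.0, sigma, size=(n, n))

def mean_slope(height, texel=1.0):
    # average gradient magnitude: a proxy for how matte the finish looks
    gy, gx = np.gradient(height, texel)
    return float(np.mean(np.hypot(gx, gy)))
```

A rougher field (larger sigma) yields a larger mean slope, i.e. a wider spread of microfacet normals and thus a duller highlight.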
We would like to gain a similar understanding of our Prusa SL1 printer and extend the amount of control one has over surface reflectance. In particular, how could air-filled subsurface structures affect the directionality of the reflectance? Can multi-material printing allow for more variety in the effects one can replicate on a single surface?
Contact
Tobias Rittig
CGG Group Member
Homepage Email
Keywords:
RESEARCH
3D PRINTING
HARDWARE
MASTER

Rendering FDM 3D Prints
Appearance Prediction for regular 3D Printers

Project teaser
This comparison of different layer-height settings shows the different (anisotropic) reflections on the surface [Source: all3dp.com].
Fused Deposition Modeling (FDM) based 3D printers often exhibit very coarse layer heights, with individual layers visible to the naked eye. Inaccuracies in the printer cause layers to shift slightly, resulting in an uneven surface and an overall deviation from the intended 3D geometry. The glossy plastic reflections on these prints are heavily influenced by the direction the printhead moved while extruding the cylindrically shaped material. Previews of these paths in the printer's slicing software are very rudimentary and serve mainly a visualization purpose.
What we are interested in is an accurate rendering that depicts effects such as:
  • accurate geometry including printing inaccuracies and material melting
  • realistic reflections (trivial)
  • subsurface scattering of filament material
The purpose of this project is to allow virtual 3D print experimentation without the need to actually print. A virtual prediction enables virtual tweaking and automatic optimizations that are impossible today. This cuts down on the number of iterations until users are happy with their objects and avoids wasted copies that are unusable due to undesired appearance. This is a severe problem that our collaborators face in their daily industrial work.
This project can be taken as an individual software project (NPRG045), Bachelor or Master thesis.
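As a starting point, the extrusion paths such a renderer needs can be recovered from the G-code the slicer already produces. The sketch below is a deliberately simplified parser (straight G0/G1 moves with absolute coordinates only; arcs, retractions and relative modes are ignored); swept as capsules or cylinders, the returned segments give the per-layer geometry to render.

```python
def toolpath_segments(gcode_lines):
    """Extract extrusion moves from G-code as 3D line segments.
    Simplified sketch: absolute G0/G1 moves only; a move counts as
    extruding (and thus visible) when its E value increases."""
    segs, pos = [], {"X": 0.0, "Y": 0.0, "Z": 0.0, "E": 0.0}
    for line in gcode_lines:
        words = line.split(";")[0].split()   # strip trailing comments
        if not words or words[0] not in ("G0", "G1"):
            continue
        new = dict(pos)
        for w in words[1:]:
            if w[0] in new:
                new[w[0]] = float(w[1:])
        if new["E"] > pos["E"]:              # material extruded -> visible path
            segs.append(((pos["X"], pos["Y"], pos["Z"]),
                         (new["X"], new["Y"], new["Z"])))
        pos = new
    return segs
```

Travel moves (no E increase) are skipped, so only paths that deposit material end up in the rendered geometry.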
Contact
Tobias Rittig
CGG Group Member
Homepage Email
Keywords:
RENDERING
3D PRINTING
ISP (NPRG045)
BACHELOR
MASTER
INDUSTRY COLLABORATION

Sky Appearance

Environment Map Capture
Hack a 360 degree camera

Project teaser
A 360 degree photograph captured using a professional panoramic capture setup.
In rendering, spherical (360°), high dynamic range (HDR) images are used as backgrounds and for lighting 3D objects with realistic light sources. In most cases, outdoor captures are used to mimic realistic sky and sun illumination.
Traditionally, a capture setup for these images consists of a heavy tripod with a panoramic head that rotates a high-end DSLR around its central point. This gear allows capturing several pictures in different directions at several exposures, all taken from a single point. Later, in a post-processing step, these get stitched into a single panoramic HDR image. We possess such a setup and use it frequently to capture images of clouds.
Unfortunately, all this gear is heavy and bulky to carry around. We are looking for a more portable solution that can be set up quickly and delivers reasonable, if less precise, images. For this we bought a state-of-the-art 360° pocket camera that is easy to set up and can be controlled wirelessly. The factory app does not allow easy capture of HDR images, though, which is why we started looking for a custom software solution. Initial tests on reverse-engineering the communication protocol showed that it is possible to communicate with the camera using a few tricks.
We would like to develop a platform-independent (mobile/web) app that can talk to the camera and capture time lapses as well as exposure-varying sequences. This would allow the camera to be taken on daily trips, capturing environment images in the background wherever you are. This data supports machine-learning efforts in our other sky-related projects.
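Once an exposure-varying sequence is captured and aligned, merging it into an HDR image is straightforward. A minimal sketch, assuming a linear sensor response and pixel values in [0, 1]: each exposure votes for the radiance with a hat weight that discounts under- and over-exposed pixels.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Merge an exposure bracket into one radiance map.
    Sketch only: assumes linear response and pre-aligned frames."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight, peak at mid-gray
        num += w * img / t                   # per-frame radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

Saturated pixels get zero weight, so a bright sky clipped in the long exposure is recovered from the short one.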
This project is intended as an individual software project (NPRG045).
Contact
Martin Mirbauer
CGG Group Member
Homepage Email
Keywords:
SKY
CAPTURE
HACKING
APP DEVELOPMENT
HARDWARE
ISP (NPRG045)

Vision projects

Eye tracking vs. deep net activation
Do the nets see what we see?

Project teaser
Different activations on an airplane image.
Is there a difference between the visual activations of humans (measured by eye tracking) and of deep networks when selecting the category of an object?
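One simple way to quantify the question, once a human fixation heat map and a network activation map are available for the same image, is a correlation score between the two maps (one of several metrics commonly used for saliency comparison):

```python
import numpy as np

def heatmap_similarity(a, b):
    """Pearson correlation between two attention maps of equal shape,
    e.g. a human fixation heat map vs. a network activation map.
    Sketch of one possible comparison metric; maps are z-normalized."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))
```

Values near 1 mean the network "looks" where humans look; values near -1 mean it attends to the complementary regions.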
Contact
Elena Sikudova
CGG Group Member
Homepage Email
Keywords:
PROGRAMMING
DEEP NETWORKS
HEAT MAPS
BACHELOR
ISP (NPRG045)

Old manuscript analysis
What do they talk about?

Project teaser
Late medieval manuscript.
Detect lines of text and prepare them for OCR. Possibly train Tesseract (or another OCR engine).
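A classical baseline for the line-detection step is a horizontal projection profile: rows containing ink are grouped into runs. A sketch assuming a binarized, deskewed page:

```python
import numpy as np

def detect_text_lines(binary, min_ink=1):
    """Return (top, bottom) row ranges of text lines.
    binary: 2D array, 1 = ink, 0 = background."""
    profile = binary.sum(axis=1)          # ink per image row
    lines, start = [], None
    for y, v in enumerate(profile):
        if v >= min_ink and start is None:
            start = y                     # a line begins
        elif v < min_ink and start is not None:
            lines.append((start, y))      # a line ends
            start = None
    if start is not None:
        lines.append((start, len(profile)))
    return lines
```

Real manuscripts need deskewing and a smoothed profile first, but this already separates well-spaced lines.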
Contact
Elena Sikudova
CGG Group Member
Homepage Email
Keywords:
IMAGE PROCESSING
OCR
BACHELOR
ISP (NPRG045)

Optic disc detection
Eye is the window to the disease

Project teaser
Optic disc.
Detect the optic disc in retinal images. Use classical computer vision methods and compare with deep-learning results.
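A crude classical baseline exploits the fact that the optic disc is typically the brightest compact region of a retinal image: box-filter and take the argmax. The brute-force sketch below (hypothetical window size, grayscale input) is only a starting point for comparison with learned detectors.

```python
import numpy as np

def locate_optic_disc(gray, win=5):
    """Return the (row, col) centre of the brightest win x win window.
    Brute-force box-filter argmax; sketch, not an optimized detector."""
    h, w = gray.shape
    best, pos = -1.0, (0, 0)
    for y in range(h - win + 1):
        for x in range(w - win + 1):
            s = gray[y:y + win, x:x + win].sum()
            if s > best:
                best, pos = s, (y + win // 2, x + win // 2)
    return pos
```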
Contact
Elena Sikudova
CGG Group Member
Homepage Email
Keywords:
IMAGE PROCESSING
DEEP LEARNING
BACHELOR
MASTER
ISP (NPRG045)

Traffic data segmentation pipeline
Self-driving cars are the future (?)

Project teaser
KITTI stereo dataset example (http://www.cvlibs.net/datasets/kitti/eval_scene_flow.php?benchmark=stereo).
Implement a processing pipeline for object detection and localization in a traffic environment from RGB-D data (i.e., color images with per-pixel depth). There are many detection algorithms for traffic-related objects using artificial neural networks. The first task is to find the best-performing ones with available source code, install them on a dedicated PC, and get familiar with them. Afterwards, the task is to create a program that computes 3D positions of the detected objects from the provided depth data. The output of the pipeline should be written in the OpenDRIVE format.
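The 3D localization step reduces to back-projecting a detected pixel with its depth through the pinhole camera model. A sketch (fx, fy, cx, cy are the usual camera intrinsics, not values from any specific dataset):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Lift pixel (u, v) with metric depth to a 3D point in the
    camera frame via the pinhole model."""
    z = depth
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    return np.array([x, y, z])
```

Applied to the centre (or median depth) of each detection box, this yields the object positions the pipeline then writes out.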
Contact
Elena Sikudova
CGG Group Member
Homepage Email
Keywords:
IMAGE PROCESSING
DEEP LEARNING
BACHELOR
MASTER
ISP (NPRG045)

Deep learning

Better Mesh to Point Cloud Conversion
Explore and improve point cloud sampling options

Project teaser
Original mesh and differently sampled point clouds (farthest point sampling, naive uniform sampling, Lloyd sampling and Sobol sequence sampling)
A set of points in 3D space (a point cloud) is a way of representing the surface shape of an object, e.g. for neural-network-based classification approaches. Depending on the desired application, a good point cloud either covers the whole object surface uniformly, without large clusters of points near each other, or, given a limited/fixed number of points, samples complex parts of the object (edges, curved regions) more densely while omitting points on flat surfaces, thus focusing on the "important" parts of the object. Point clouds can be produced by LiDAR scanners or generated from polygonal meshes by sampling the surface.
There are multiple ways to generate a point cloud, e.g. sampling the object surface uniformly or using a low-discrepancy sequence, and post-processing techniques such as farthest point sampling to remove some of the sampled points. The goal of this project is to explore and possibly improve existing mesh-to-point-cloud conversion methods, e.g. by making farthest point sampling more robust to outlier points, using a local-feature-prediction neural network such as PCPNet to select important points, or combining multiple point sampling/selection methods.
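Farthest point sampling, the post-processing step mentioned above, can be written in a few lines. A minimal O(n·k) sketch that returns indices of the chosen points (the deterministic start index is a simplification; implementations usually randomize it):

```python
import numpy as np

def farthest_point_sampling(points, k, start=0):
    """Greedily pick k points, each maximizing its distance to the
    set chosen so far. Returns indices into the input array."""
    pts = np.asarray(points, dtype=np.float64)
    chosen = [start]
    # distance of every point to the nearest chosen point so far
    dist = np.linalg.norm(pts - pts[start], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return chosen
```

The greedy criterion is also what makes the method outlier-sensitive: an isolated stray point is by definition far from everything and gets picked early, which is one of the robustness issues this project could address.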
Contact
Martin Mirbauer
CGG Group Member
Homepage Email
Keywords:
MESH
POINT CLOUD
SAMPLING
ISP (NPRG045)
BACHELOR