In general, I focus on visual computing, which is a computer science field encompassing real-time and offline computer graphics, image processing, appearance fabrication, 3D printing, and more.
In particular, my current Ph.D. research aims to enable highly accurate full-color 3D printing as part of a predictive rendering pipeline. This is critical for anyone who relies on 3D appearance manufacturing and visual prototyping, such as designers, movie studios, architects, and personalized print services.
Advanced 3D Graphics for Movies and Games (practicals)
The full list of my publications follows. You can also visit my Google Scholar profile.
In full-color inkjet 3D printing, a key problem is determining the material configuration for the millions of voxels that make up a printed object. The goal is a configuration that minimises the difference between the desired target appearance and the result of the printing process. So far, the techniques used to find such a configuration have relied on domain-specific methods or heuristic optimisation, which allow only limited control over the resulting appearance.
We propose to use differentiable volume rendering in a continuous material-mixture space, which leads to a framework that can serve as a general tool for optimising inkjet 3D printouts. We demonstrate the technical feasibility of this approach and use it to attain fine control over the fabricated appearance and a high level of faithfulness to the specified target.
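At its core, such an optimisation couples a differentiable forward model of the printing and rendering process with gradient descent over per-voxel mixture weights. The following toy sketch is my own illustration, not the paper's implementation: it optimises a single column of voxels under a simple Beer-Lambert absorption model with two hypothetical base materials.

```python
import numpy as np

# Toy sketch: optimise a continuous material mixture for one column of
# voxels under a simple Beer-Lambert absorption model. The material
# coefficients and the model itself are illustrative assumptions.

def render(alpha, absorb):
    """Transmittance through a voxel column with per-voxel mixtures."""
    sigma = alpha * absorb[0] + (1.0 - alpha) * absorb[1]
    return np.exp(-sigma.sum())

def optimise(target, n_voxels=8, lr=0.5, steps=500):
    absorb = (0.4, 0.05)                 # assumed absorption of base materials
    alpha = np.full(n_voxels, 0.5)       # start from a 50/50 mixture
    for _ in range(steps):
        t = render(alpha, absorb)
        # analytic gradient of the squared error w.r.t. every alpha_i
        grad = 2.0 * (t - target) * t * -(absorb[0] - absorb[1])
        alpha = np.clip(alpha - lr * grad, 0.0, 1.0)
    return alpha, render(alpha, absorb)

alpha, t = optimise(target=0.3)
```

In a real system, the forward model would be a full volumetric light-transport simulation and the gradients would come from automatic differentiation, but the structure of the loop stays the same.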
Modern non-destructive approaches to quality control in manufacturing often rely on X-ray computed tomography to measure even difficult-to-reach features. Unfortunately, such measurements require hundreds or thousands of calibrated X-ray projections, which is time-consuming and can create bottlenecks. Even recent state-of-the-art research still requires tens to hundreds of projections.
In this thesis, we examine radiography physics, technologies, and existing solutions, and we propose a novel approach to non-destructive dimensional measurement from a limited number of projections. Instead of relying on computed tomography, we formulate the measurements as a minimization problem in which we compare our parametric model to reference radiographs. We propose a complete dimensional-measurement pipeline, including object parametrizations, material calibrations, simulations, and hierarchical optimizations. We fully implemented the method and evaluated its accuracy and repeatability using real radiographs of physical objects. We achieved accuracy in the range of tens to hundreds of micrometers, almost comparable to industrial computed tomography, while using only two or three reference radiographs. These results are significant for industrial quality control: acquiring two or three radiographs takes only a couple of seconds, so our method greatly reduces both X-ray machine time and the time required to detect manufacturing errors.
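The idea of measuring dimensions by fitting a parametric model to reference radiographs can be shown in a deliberately simplified form. The sketch below is an assumed setup, not the thesis pipeline: it estimates the radius of a cylinder from a single synthetic radiograph by minimising an image difference with a golden-section search.

```python
import numpy as np

# Toy sketch: estimate the radius of a cylinder from one reference
# radiograph by minimising the difference between a parametric forward
# model and the reference intensities. All constants are assumptions.

X = np.linspace(-5.0, 5.0, 512)   # detector coordinates (mm)
MU = 0.8                          # assumed attenuation coefficient (1/mm)

def radiograph(radius):
    # chord length of each ray through the cylinder, Beer-Lambert attenuation
    chord = 2.0 * np.sqrt(np.maximum(radius**2 - X**2, 0.0))
    return np.exp(-MU * chord)

def measure(reference, lo=0.1, hi=5.0, iters=60):
    # golden-section search for the radius minimising the image difference
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    loss = lambda r: np.sum((radiograph(r) - reference) ** 2)
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if loss(c) < loss(d):
            b = d
        else:
            a = c
    return (a + b) / 2.0

true_radius = 2.37
est = measure(radiograph(true_radius))
```

The real pipeline replaces the one-parameter cylinder with rich object parametrizations, calibrated material and beam models, and hierarchical optimization, but the measure-by-fitting principle is the same.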
Our focus is on the real-time rendering of large-scale volumetric participating media, such as fog. Since a physically correct simulation of light transport in such media is inherently difficult, the existing real-time approaches are typically based on low-order scattering approximations or only consider homogeneous media.
We present an improved image-space method for computing light transport within quasi-heterogeneous, optically thin media. Our approach is based on a physically plausible formulation of the image-space scattering kernel and analytically integrable medium density functions. In particular, we propose a novel, hierarchical anisotropic filtering technique tailored to the target environments within homogeneous media. Our parallelizable solution enables us to render visually convincing, temporally coherent animations with fog-like media in real time, in a bounded time of only milliseconds per frame.
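The benefit of analytically integrable density functions is that the optical depth along a view ray has a closed form, so transmittance needs no ray marching. Below is a minimal sketch assuming an exponential height-fog density; this is my own example, not the paper's exact medium model.

```python
import numpy as np

# Sketch of closed-form optical depth for an exponential height fog,
# density(y) = D0 * exp(-B * y). Constants are illustrative assumptions.

D0, B = 0.3, 0.5

def optical_depth(origin_y, dir_y, dist):
    # integral of D0 * exp(-B * (origin_y + t * dir_y)) dt over [0, dist]
    if abs(dir_y) < 1e-6:               # horizontal ray: constant density
        return D0 * np.exp(-B * origin_y) * dist
    return (D0 * np.exp(-B * origin_y)
            * (1.0 - np.exp(-B * dir_y * dist)) / (B * dir_y))

def transmittance(origin_y, dir_y, dist):
    return np.exp(-optical_depth(origin_y, dir_y, dist))

# sanity check against brute-force ray marching (trapezoidal rule)
t = np.linspace(0.0, 10.0, 100_001)
f = D0 * np.exp(-B * (1.0 + 0.4 * t))
marched = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))
closed = optical_depth(1.0, 0.4, 10.0)
```

Evaluating such a closed form per pixel costs a few arithmetic operations, which is what makes per-frame budgets of milliseconds feasible.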
The focus of this thesis is the real-time rendering of participating media, such as fog. This is an important problem, because such media significantly influence the appearance of the rendered scene. It is also a challenging one, because its physically correct solution involves a costly simulation of a very large number of light-particle interactions, especially when considering multiple scattering. The existing real-time approaches are mostly based on empirical or single-scattering approximations, or only consider homogeneous media.
This work briefly examines the existing solutions and then presents an improved method for real-time multiple scattering in quasi-heterogeneous media. We use analytically integrable density functions and efficient MIP-map filtering, together with several techniques to minimize the inherent visual artifacts. The solution has been implemented and evaluated in a combined CPU/GPU prototype application. The resulting highly parallel method achieves good visual fidelity with a stable computation time of only a few milliseconds per frame.
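The MIP-map idea can be illustrated in isolation: a wide scattering gather is approximated by reading a single texel from a coarser pyramid level whose footprint matches the blur radius, instead of averaging many pixels per sample. The sketch below uses assumed sizes and radii and is not the thesis implementation.

```python
import numpy as np

# Toy sketch: approximate a wide scattering gather by sampling one texel
# from a MIP level whose texel footprint matches the blur radius.

def build_mips(img):
    """Build a MIP pyramid by repeated 2x2 box averaging."""
    levels = [img]
    while levels[-1].shape[0] > 1:
        a = levels[-1]
        levels.append(0.25 * (a[0::2, 0::2] + a[1::2, 0::2]
                              + a[0::2, 1::2] + a[1::2, 1::2]))
    return levels

def sample_blurred(levels, x, y, radius_px):
    """Read the level whose texel size roughly matches the blur radius."""
    level = min(int(np.log2(max(radius_px, 1.0))), len(levels) - 1)
    scale = 2 ** level
    return levels[level][y // scale, x // scale]

img = np.zeros((64, 64))
img[32, 32] = 1.0                     # a single bright in-scattering source
mips = build_mips(img)
wide = sample_blurred(mips, 32, 32, radius_px=16.0)  # energy spread over 16x16
```

A plain box-average pyramid like this produces blocky artifacts; the anisotropic filtering techniques mentioned above exist precisely to suppress them while keeping the constant per-sample cost.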