The state of ART 2.0 on release day
The About and Gallery sections present some of the innovative features of ART. However, in its initial state on release day, the system unfortunately also has a number of areas where it is still incomplete, or offers only sub-par performance.
None of the following issues are necessarily deal-breakers for research use, so fixing them was not a priority in getting ART out of the door. We’d rather have the system available to the general public in its current form than spend yet more time fixing things that we cannot write publications about.
Performance Issues
- Overall rendering speed is low. And by this we unfortunately mean seriously low, by the standards of the year 2018. But at least you can afterwards query the polarisation state of the result pixels, which is not something you can do in many other systems. A significant part of the low performance is due to the following three issues:
  - The only available path space integrator is a uni-directional path tracer, so ART is not very efficient at rendering caustics and transparent materials, especially in outdoor settings with their tiny high-energy solar light source. ART 1.x used to have a bi-directional tracer, photon tracing capabilities and even a Metropolis renderer. But as these were written for a significantly different technological base, it would be easier to write new code than to port these modules.
  - Raw raycasting performance is very low, at least compared to modern raycasting libraries like Embree. ART still casts individual rays (not packets), does not optimise shadow rays, and does not delegate any functionality to the GPU. The only mitigating circumstance we can put forward in this regard is that a research system like ART is still potentially useful even if it is not the fastest thing out there: plus we wanted to get it to work first, release it, and optimise later.
    ART raycasting is specific insofar as we do not break down our models into micro-polygons before rendering: all raycasting is done directly on the terminal nodes of the CSG scene graph, which are high-level objects like entire meshes, spheres, tori, and such. Due to this, we cannot simply switch to using Embree internally, as it has no notion of CSG operators. In a collaboration with our colleagues at ČVUT we even tried rigging Embree to be CSG-capable: but performance in this mode turned out to be even worse than with our own acceleration structure, which is a hybrid kD-tree that directly handles CSG operators during kD-traversal via an operator stack (a sketch of this idea is given directly after this list). Embree performing poorly in this role actually sort of stands to reason, as it was not really designed to be used that way.
    Probably the best avenue for improvement would be optimisation of the existing acceleration structure, via the addition of packet tracing, and by internally switching from a kD-tree to a bounding interval hierarchy (this looks like an ideal fit for a BIH). Also, as pretty low-hanging fruit, Embree could be used as the internal raycasting accelerator for loaded PLY meshes, instead of our homegrown kD-trees.
  - Shading language evaluations are slow. As we do not break down the scene into pre-shaded micro-polygons before rendering, any shading language expressions encountered when a path segment is computed have to be evaluated from scratch, to yield the BSSRDF at the sample point in question. Arguably, and at least up to a point, this is a feature, not a bug, as it allows us to side-step the very resource-hungry pre-shading step entirely: all our rendering is done directly on the typically quite small original CSG scene graph. Plus we will never see artefacts due to incorrect micro-polygon generation, or wrong lobe selection after shading. Still. Fast this ain’t, unfortunately.
  Side note: the polarisation functionality offered by ART is actually not the major reason for its low performance. Somewhat surprisingly, these capabilities cause only a comparatively minor performance hit, as can be seen by comparing rendering times of a scene with and without the -p flag. The three points listed above are far more important with regard to speed loss.
- ART rendering threads can be very memory hungry. Essentially, this is because the current threaded image sampler uses a separate result image for each thread - and spectral images are huge, especially polarised ones. This design is not entirely crazy, though, as we use non-trivial splatting kernels which spread stochastic samples over several adjoining pixels: so having rendering threads work on separate tiles of a single image would not actually ensure that each thread only writes to an image region allocated to it (the splatting kernels cause overlap, even if the tiles themselves are only adjacent). And as unsynchronised floating point additions are not thread-safe, concurrent writes to the same pixel must never happen: giving each thread its own image to work with was an easy solution (the second sketch below illustrates this). However, this could of course be improved by using a more sophisticated work allocation scheme.
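As a side remark to the raycasting item above, here is a minimal, self-contained sketch of the general idea of resolving CSG operators with an explicit operator stack: a postfix-encoded CSG expression is collapsed over per-leaf hit intervals. This is purely illustrative; all names are invented, and ART's actual traversal interleaves this step with walking its hybrid kD-tree instead of gathering all leaf hits up front.

```cpp
// Illustrative sketch only: a postfix-encoded CSG expression is resolved with
// an explicit operator stack over per-leaf ray/solid hit intervals.
#include <algorithm>
#include <cassert>
#include <stack>
#include <utility>
#include <vector>

struct Interval { double t_in, t_out; };          // one ray/solid overlap span
using HitList = std::vector<Interval>;            // sorted, disjoint spans

enum class CsgOp { Leaf, Union, Intersection, Difference };

struct PostfixItem {
    CsgOp op;
    int   leafIndex;                              // only used when op == Leaf
};

// Combine two sorted interval lists according to a boolean set operation,
// using a single sweep over all interval boundaries.
static HitList combine(const HitList& a, const HitList& b, CsgOp op)
{
    struct Event { double t; bool isA; bool entering; };
    std::vector<Event> events;
    for (const Interval& i : a) { events.push_back({i.t_in,  true,  true });
                                  events.push_back({i.t_out, true,  false}); }
    for (const Interval& i : b) { events.push_back({i.t_in,  false, true });
                                  events.push_back({i.t_out, false, false}); }
    std::sort(events.begin(), events.end(),
              [](const Event& x, const Event& y) { return x.t < y.t; });

    auto inside = [op](bool inA, bool inB) {
        switch (op) {
            case CsgOp::Union:        return inA || inB;
            case CsgOp::Intersection: return inA && inB;
            case CsgOp::Difference:   return inA && !inB;
            default:                  return false;
        }
    };

    HitList result;
    bool    inA = false, inB = false, wasInside = false;
    double  openT = 0.0;
    for (const Event& e : events) {
        if (e.isA) inA = e.entering; else inB = e.entering;
        const bool nowInside = inside(inA, inB);
        if (nowInside && !wasInside) openT = e.t;                    // span opens
        if (!nowInside && wasInside) result.push_back({openT, e.t}); // span closes
        wasInside = nowInside;
    }
    return result;
}

// Resolve the whole CSG expression; leafHits[i] holds the hit intervals of
// the ray against leaf object i (mesh, sphere, torus, ...).
HitList intersectCsg(const std::vector<PostfixItem>& expr,
                     const std::vector<HitList>&     leafHits)
{
    std::stack<HitList> stack;
    for (const PostfixItem& item : expr) {
        if (item.op == CsgOp::Leaf) {
            stack.push(leafHits[item.leafIndex]);
        } else {
            assert(stack.size() >= 2);
            HitList rhs = std::move(stack.top()); stack.pop();
            HitList lhs = std::move(stack.top()); stack.pop();
            stack.push(combine(lhs, rhs, item.op));
        }
    }
    assert(stack.size() == 1);
    return stack.top();       // the first interval's t_in is the visible hit
}
```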
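And this is a minimal sketch of the "one result image per thread" scheme from the last item: every worker splats its samples into a private buffer, so the overlapping splat footprints never race, and the buffers are summed once, single-threaded, at the end. All names are illustrative, and the 2x2 box kernel is just a stand-in for ART's actual reconstruction filters.

```cpp
// Sketch of per-thread accumulation images: each thread splats into its own
// buffer (no races despite overlapping splat footprints), and a single
// merge pass adds the buffers into the final image afterwards.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

struct Image {
    int width, height;
    std::vector<float> pixels;                    // one float per pixel, for brevity
    Image(int w, int h)
        : width(w), height(h), pixels(static_cast<std::size_t>(w) * h, 0.0f) {}

    // A 2x2 box footprint stands in for a larger splatting kernel:
    // one sample touches several adjoining pixels.
    void splat(float x, float y, float value) {
        for (int dy = 0; dy < 2; ++dy)
            for (int dx = 0; dx < 2; ++dx) {
                const int px = static_cast<int>(std::floor(x)) + dx;
                const int py = static_cast<int>(std::floor(y)) + dy;
                if (px < 0 || py < 0 || px >= width || py >= height) continue;
                pixels[static_cast<std::size_t>(py) * width + px] += 0.25f * value;
            }
    }
};

int main() {
    const int w = 64, h = 64, numThreads = 4, samplesPerThread = 10000;
    std::vector<Image> perThread(numThreads, Image(w, h));   // memory cost: one image per thread

    std::vector<std::thread> workers;
    for (int t = 0; t < numThreads; ++t)
        workers.emplace_back([&, t] {
            unsigned seed = 1234u + static_cast<unsigned>(t);
            for (int s = 0; s < samplesPerThread; ++s) {
                // Stand-in for "trace a path, obtain a sample at (x, y)".
                seed = seed * 1664525u + 1013904223u;
                const float x = ((seed >> 16) & 0xffffu) / 65535.0f * w;
                const float y = ( seed        & 0xffffu) / 65535.0f * h;
                perThread[t].splat(x, y, 1.0f);   // private buffer: no synchronisation needed
            }
        });
    for (std::thread& worker : workers) worker.join();

    // The only writes to the shared result happen here, single-threaded.
    Image result(w, h);
    for (const Image& img : perThread)
        for (std::size_t i = 0; i < result.pixels.size(); ++i)
            result.pixels[i] += img.pixels[i];

    std::printf("total splatted weight: %f\n",
                std::accumulate(result.pixels.begin(), result.pixels.end(), 0.0));
    return 0;
}
```

The memory cost is one full-resolution image per thread; with a polarised spectral image being vastly larger than the single float per pixel used here, this is exactly where the memory hunger described above comes from.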
Missing High-Level Features
- 3D modelling front-ends like Maya or Blender are not supported. At the moment, ART is a command line-only system, and all modelling has to be done “by hand” by editing text files. For research purposes, this is sufficient, and as integration into such front-ends is a major undertaking in its own right, we never even attempted this. Besides, standard 3D front-ends typically lack the capability to handle (bi-)spectral data, and cannot deal with spectral result data (let alone polarised spectral result data). So we ourselves will not attempt anything of the sort in the foreseeable future, either.
- ART cannot read standard 3D graphics formats, such as the scene descriptions used by Mitsuba or Maya. As file format handling in ART is quite modular - one can cleanly define a new Arf... file type to parse any given format - such capabilities could easily be added, at least from a structural viewpoint (with the PLY mesh handling serving as an example of how to do this). But as with the previous point, the main issue seems to be that most mainstream formats do not support (bi-)spectral data. Probably the best candidate for near-term addition would be the .xml-based format used by Mitsuba. A sketch of the general plug-in idea is given directly after this list.
- The system has not been thoroughly validated with regard to units of computation, i.e. the physical units of radiance used during computation. Given that ART renders images which are very similar to what Mitsuba delivers, it cannot be all that far off. But this ought to be properly looked at for the entire system at some point.
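To illustrate the structural point made in the file format item above, here is a purely hypothetical sketch of a pluggable "one parser module per file type" scheme. It is not ART's actual Arf... interface; every name and signature below is invented for illustration only.

```cpp
// Hypothetical sketch of a pluggable file format registry: each format is
// handled by one registered parser module, and new formats are added by
// registering one more parser, leaving the rest of the system untouched.
#include <functional>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

struct SceneGraphNode { /* stand-in for a parsed (CSG) scene graph */ };

using ParseFn = std::function<std::unique_ptr<SceneGraphNode>(const std::string& path)>;

class FileFormatRegistry {
public:
    void registerFormat(std::string extension, ParseFn parser) {
        parsers_[std::move(extension)] = std::move(parser);
    }
    std::unique_ptr<SceneGraphNode> load(const std::string& path) const {
        const auto dot = path.rfind('.');
        if (dot == std::string::npos)
            throw std::runtime_error("no file extension: " + path);
        const auto it = parsers_.find(path.substr(dot));
        if (it == parsers_.end())
            throw std::runtime_error("no parser registered for " + path);
        return it->second(path);                  // dispatch to the format-specific module
    }
private:
    std::map<std::string, ParseFn> parsers_;
};

// Supporting e.g. Mitsuba's .xml scene format would then amount to writing
// and registering one more parser:
//
//   registry.registerFormat(".ply", parsePlyMesh);
//   registry.registerFormat(".xml", parseMitsubaXml);
```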
Missing Rendering Technology
- Non-trivial environment maps are not supported. The only such things you currently get in ART are either uniformly diffuse emissive environments, or the Hosek sky model. However, the structure of the system would make the addition of a properly sampled non-trivial envmap light source fairly easy. The main reason we never added anything of the sort so far is that ART is a spectral renderer: and as we don’t know of any spectral envmaps we could have used, what would have been the point? (chicken, meet egg, and all that)
  In this context, it is worth noting that all the classic envmaps which are used throughout computer graphics research (eucalyptus grove, Uffizi, etc.) are actually RGB HDR images. ART can, in fact, synthesise emissive spectra for RGB images: it uses measured monitor primaries for this purpose (a sketch of this conversion is given at the end of this list). But this would essentially mean illuminating your scene with emissions from a hemispherical LCD monitor of sorts: YMMV on how useful that would be in practice.
- The shading language lacks a sane front-end, and is functionally incomplete. As can be seen from the gallery images, the shading language definitely has potential, even in its current unfinished form. However, requiring end users to manually plug shading language nodes together in .arm code is asking a bit much, even for seasoned graphics geeks. Plus some coordinate system manipulations have still not been ported from ART 1.x (access to 2D coordinates is limited, the current examples work with 3D coordinates, WORLDSPACE_COORDINATES cannot be further transformed, etc.), and the Voronoi tiling code from ART 1.x is also still missing.
- There is no displacement mapping. As we do not break down the scene into pre-shaded micro-polygons before rendering, adding this feature would be quite a complex technical problem. OTOH, if we were to switch to such a pre-shading step (which we have no current plans to do), adding this feature would be a snap.
- Texture mapping functionality is still rudimentary, although the overall design already features all the necessary hooks for this to eventually become fully functional. In fact, the design allows for shapes to have multiple texture coordinate systems, depending on purpose and user choice. The interaction of the shading language sub-system with these multiple coordinate systems needs to be sorted out, though. Plus the next point currently also gets in the way:
- Conversion of RGB values to spectra is limited: the current solution only really makes sense for emissive RGB values, as it is based on adding up the emission spectra of LCD monitor primaries (see the sketch at the end of this list). So the resulting spectra are not really plausible when used as reflectance values. But there are publications out there which show how to do this properly.
- Mesh import could do with some improvements, most notably a feature which assigns ART surface materials to colours defined for individual polygons in a model.
- The available tone mapper works, but is rather primitive. Given that we have spectral HDR input images available in ART, it is almost a crime to just convert them to colour space and then run a simple linear tone reproduction operator on them (a minimal sketch of which follows after this list), instead of, say, a psycho-physically plausible tone reproduction operator which uses the available spectral data to properly model features like rod and cone response for various adaptation states of the eye.
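Two of the items above (environment maps, and RGB conversion) refer to the same underlying mechanism: an RGB triple is turned into a spectrum by adding up the emission spectra of the three monitor primaries, weighted by the respective channel values. The sketch below shows that idea with placeholder primary data; ART's actual conversion uses measured primaries, and none of the names below are its real ones.

```cpp
// Sketch of "RGB to spectrum via monitor primaries": the result is the
// spectrum a monitor would emit for the given RGB triple. Primary data here
// are placeholder Gaussian bumps, not measured calibration spectra.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>

constexpr std::size_t kNumBands = 16;             // e.g. 380-780 nm in 25 nm steps
using Spectrum = std::array<double, kNumBands>;

static Spectrum bump(double centreBand, double width)
{
    Spectrum s{};
    for (std::size_t i = 0; i < kNumBands; ++i) {
        const double d = (static_cast<double>(i) - centreBand) / width;
        s[i] = std::exp(-d * d);
    }
    return s;
}

// Placeholder emission spectra of the red/green/blue primaries.
const Spectrum kPrimaryRed   = bump(12.0, 1.5);
const Spectrum kPrimaryGreen = bump( 7.0, 1.5);
const Spectrum kPrimaryBlue  = bump( 3.0, 1.5);

// Emissive conversion: a perfectly reasonable light source spectrum, since it
// is literally "what the monitor would emit" for this RGB value.
Spectrum rgbToEmissionSpectrum(double r, double g, double b)
{
    Spectrum s{};
    for (std::size_t i = 0; i < kNumBands; ++i)
        s[i] = r * kPrimaryRed[i] + g * kPrimaryGreen[i] + b * kPrimaryBlue[i];
    return s;
}

// Reusing the same spectrum as a reflectance (here crudely normalised so it
// never exceeds 1) yields valid renderer input, but a sum of spiky primary
// emissions is not a physically plausible surface reflectance - which is
// exactly the limitation described above. Smooth spectral uplifting methods
// from the literature would be the proper fix.
Spectrum rgbToReflectanceSpectrumNaive(double r, double g, double b)
{
    Spectrum s = rgbToEmissionSpectrum(r, g, b);
    double maxValue = 0.0;
    for (double v : s) maxValue = std::max(maxValue, v);
    if (maxValue > 1.0)
        for (double& v : s) v /= maxValue;
    return s;
}
```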
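Finally, regarding the tone mapper: the sketch below reduces a simple linear tone reproduction operator to its essence, namely integrating each spectral pixel down to a single response value, scaling linearly by the image maximum, and applying a display gamma. Band count, response curve and gamma are placeholders, not what ART actually uses; a psycho-physically plausible operator would replace the single global scale with a model of the observer's adaptation state.

```cpp
// Minimal linear tone reproduction sketch, reduced to luminance for brevity:
// integrate each spectral pixel against a response curve, find the image
// maximum, scale linearly into [0, 1], and apply a display gamma.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

constexpr std::size_t kNumBands = 16;
using SpectralPixel = std::array<double, kNumBands>;

// Placeholder response curve standing in for the CIE luminous efficiency function.
static std::array<double, kNumBands> makeResponseCurve()
{
    std::array<double, kNumBands> v{};
    for (std::size_t i = 0; i < kNumBands; ++i) {
        const double d = (static_cast<double>(i) - kNumBands / 2.0) / 3.0;
        v[i] = std::exp(-d * d);
    }
    return v;
}
const std::array<double, kNumBands> kResponse = makeResponseCurve();

double luminance(const SpectralPixel& p)
{
    double y = 0.0;
    for (std::size_t i = 0; i < kNumBands; ++i)
        y += p[i] * kResponse[i];                 // numerical integration over the bands
    return y;
}

// The "operator" itself: one global scale factor, then display encoding.
std::vector<double> linearToneMap(const std::vector<SpectralPixel>& image,
                                  double displayGamma = 2.2)
{
    double maxY = 0.0;
    for (const SpectralPixel& p : image) maxY = std::max(maxY, luminance(p));

    std::vector<double> display(image.size(), 0.0);
    if (maxY <= 0.0) return display;
    for (std::size_t i = 0; i < image.size(); ++i) {
        const double y = luminance(image[i]) / maxY;      // linear scaling
        display[i] = std::pow(y, 1.0 / displayGamma);     // display encoding
    }
    return display;
}
```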
Stability Issues
- Very large imported PLY meshes sometimes fail to render, for reasons we do not yet fully understand. Usually it helps to slightly rotate them: the issue seems to be with the kD-tree builder.