Photorealistic graphics – labs (2020/2021)

New lab credit

A new credit system, reflecting the time-constrained conditions of the 2020/2021 school year, has been introduced: you no longer have to do both extension modules and an animation script.

You can choose from two options:
A. either write an extension module for our ray-tracer,
B. or produce a video using the built-in components and ready-made modules that are already available.

1. Ray-tracer extensions

For option A, you'll work on one extension to our C# ray-tracer. You'll have to discuss your proposal with me first (to reduce the risk of overly complex ideas).

Please use GitHub, GitLab or another public platform for your work. You will need to open your (forked) repository for review, and perhaps allow other students access to your code (see setting repository visibility).

2. Documentation

You have to write brief documentation in the Markdown language giving instructions on how to use your module, showing example scenes that use it, etc. See the already published extensions for inspiration.

3. Working on the animation

For option B, you have to come up with an idea, find at least one existing extension and use it in your animation script (perhaps with some tuning of the used modules, or even a fix or two...).

See the existing animation scenes (and already published extensions) for inspiration; there will be a special video-lab dedicated to instructions on how to write an animation script.

4. Final rendering and publishing of animations

Each animation frame will have to be rendered, the video file encoded (see the animation page), and the final product published on YouTube, Vimeo or a similar public service.

Rules and hints for lab projects

The current list of extensions (including the 2019/2020 ones) is in this shared file. Please check it before proposing a new idea or while thinking about your animation.

You have to use your own Git repository for your extension and animation development. Use the faculty GitLab or GitHub. You have to start by forking our GrCis repository https://github.com/pepcape/grcis

I have tried the whole workflow with GitLab (GitHub will probably be similar) and my current instructions are in this directory. Key points:

  • you have to make a so-called fork of the original repository
  • you have to use the master branch for everything, including your own development – you will then automatically receive updates of the GrCis library, ...
  • after finishing your work, please put your results into a subdirectory of newmodules. One directory per extension (named FirstnameSurname-ModuleName)!
  • you will be regularly committing and pushing your work to your forked repository. This repository and directory will be announced publicly to your colleagues – it is where your work gets shared.
  • you will receive any future updates of the GrCis code via the master branch, so you are strongly discouraged from changing the GrCis structure. You are free to add new files – if you need to change any system file, create a descendant class and override the functions you need instead

You may create your repository as private (please share it with me: https://gitlab.mff.cuni.cz/pelikan or https://github.com/pepcape) and make it public later. You will use the Git repo both for C# development and for documentation in Markdown format (.md, see Mastering Markdown).

Your main project will be 048rtmontecarlo-script; this is the solution you have to use. For animation there is the 062animation-script solution; it uses the same ray-tracing code-base, the same extension modules and the same CS-script scene-definition format. My recommendation is to use the first project for developing extensions and the second one for animations.

Important: create a new directory under newmodules for each extension; this is necessary for later sharing of extensions. Use a special namespace (without diacritics):

namespace FirstnameSurname
for your extensions. Hints for advanced extensions will be mentioned later.
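As an illustration, a minimal extension file following this convention might look like the sketch below. The author name JosefNovak, the module name SolidNoise and the noise formula are all hypothetical; a real extension would typically derive from a suitable GrCis base class instead of being stand-alone.

```csharp
// Hypothetical file: newmodules/JosefNovak-SolidNoise/SolidNoise.cs
// The namespace follows the required FirstnameSurname convention (no diacritics).
using System;

namespace JosefNovak
{
  // Illustrative stand-alone class; a real module would plug into GrCis
  // (e.g. as a texture, camera or solid).
  public class SolidNoise
  {
    // Deterministic placeholder "noise" mapped into the range [0, 1].
    public double Sample (double x, double y, double z)
    {
      return 0.5 * (Math.Sin(12.9898 * x + 78.233 * y + 37.719 * z) + 1.0);
    }

    public static void Main ()
    {
      double v = new SolidNoise().Sample(0.1, 0.2, 0.3);
      Console.WriteLine(v >= 0.0 && v <= 1.0 ? "ok" : "fail");
    }
  }
}
```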
You have to keep your extension source files in the directory for the whole rest of the semester – your colleagues will be using them! You may of course fix bugs or improve the documentation, but you have to keep the functionality (interface) stable.

You can start working on your animation at any time. Use the project 062animation-script; it is compatible with 048, you can use the same extensions, and your scene-definition script can easily be transformed into an animation script.

Advanced extensions

A convenient and powerful mechanism for scene-definition scripts is the context property set. Every script is called with the global variable

  Dictionary<string, object> context;
Standard property names are defined in the RayCastingScripts.cs source file. The most important properties are:
  • PropertyName.CTX_PREPROCESSING (in) – if defined and true, this is the "preprocessing" stage: you can run here any simulation which computes shared data objects used in later stages (scene definition and rendering). If you don't need preprocessing, you can ignore it and go straight to defining the scene
  • PropertyName.CTX_SCRIPT_PATH (in) – full path of the CS-script from which the scene is currently read. You can use it for referencing other files (e.g. an animation script)
  • PropertyName.CTX_ALGORITHM (out) – you can override the default algorithm (RayTracing). Just create your new algorithm (it must implement the IImageFunction interface) and set it here
  • PropertyName.CTX_SYNTHESIZER (out) – you can override the default image synthesizer algorithm (SupersamplingImageSynthesizer or AdaptiveSupersampling). Create your own algorithm (must implement the interface IRenderer) and set it here
  • PropertyName.CTX_WIDTH (in, out) – image width in pixels (if you don't modify it, a default value from form UI is used)
  • PropertyName.CTX_HEIGHT (in, out) – image height in pixels (if you don't modify it, a default value from form UI is used)
  • PropertyName.CTX_SUPERSAMPLING (in, out) – supersampling factor, number of samples per pixel (if you don't modify it, a default value from form UI is used)
  • PropertyName.CTX_TOOLTIP (out) – string defining the tool-tip for the Parameters input line in the UI. The definition script can access the string via the string param global; this is the most dynamic way to modify a scene or animation (the user just edits the text line on the form). So this property can be used for hinting to the user what the individual parameters mean.
  • PropertyName.CTX_START_ANIM (in, out) – animation starting time in seconds (if you don't modify it, a default value from form UI is used)
  • PropertyName.CTX_END_ANIM (in, out) – animation ending time in seconds (if you don't modify it, a default value from form UI is used)
  • PropertyName.CTX_TIME (in) – if present, only a single image will be computed, not the whole animation (a chance to reduce the preprocessing simulation)
  • PropertyName.CTX_FPS (in, out) – frames-per-second for an animation (if you don't modify it, a default value from form UI is used)
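To make the in/out convention concrete, here is a sketch of how a script might read and override context properties. The PropertyName stand-ins below are simplified string constants invented for the sake of a self-contained example; the real constants are defined in RayCastingScripts.cs.

```csharp
using System;
using System.Collections.Generic;

// Simplified stand-in for the GrCis PropertyName class (assumed key strings,
// for illustration only).
static class PropertyName
{
  public const string CTX_WIDTH         = "Width";
  public const string CTX_HEIGHT        = "Height";
  public const string CTX_SUPERSAMPLING = "Supersampling";
  public const string CTX_TOOLTIP       = "Tooltip";
}

class ContextDemo
{
  static void Main ()
  {
    // The framework pre-fills "in" properties (here with mock UI defaults).
    var context = new Dictionary<string, object>
    {
      [PropertyName.CTX_WIDTH]  = 640,
      [PropertyName.CTX_HEIGHT] = 480,
    };

    // A script can override "in, out" properties...
    context[PropertyName.CTX_WIDTH]         = 1280;
    context[PropertyName.CTX_HEIGHT]        = 720;
    context[PropertyName.CTX_SUPERSAMPLING] = 64;

    // ...and set pure "out" properties, e.g. a tool-tip for the Parameters line.
    context[PropertyName.CTX_TOOLTIP] = "n=<sphere count>, r=<radius>";

    Console.WriteLine($"{context[PropertyName.CTX_WIDTH]}x{context[PropertyName.CTX_HEIGHT]}");
  }
}
```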

Framework and scene-definition scripts, precomputing procedure

Your definition script can be run up to two times before the actual rendering. If you don't define your scene in the 1st pass (scene.Intersectable), the script will be called once again.

Both times your script gets the same context; you can use it to pass values from the first phase to the second. At the beginning, only the properties marked "in" are defined. After your script finishes, some properties are retrieved from the context (see the "out" properties above).

See some of the sample scene scripts in the data/rtscenes directory.
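The two-pass pattern can be sketched like this. The CTX_PREPROCESSING string and the cached-data key are illustrative stand-ins; a real script uses PropertyName.CTX_PREPROCESSING from RayCastingScripts.cs and is invoked by the framework rather than by its own Main.

```csharp
using System;
using System.Collections.Generic;

class TwoPassDemo
{
  const string CTX_PREPROCESSING = "Preprocessing";  // stand-in for PropertyName.CTX_PREPROCESSING

  // Mimics one invocation of a scene-definition script.
  static void RunScript (Dictionary<string, object> context)
  {
    if (context.TryGetValue(CTX_PREPROCESSING, out object p) && p is bool b && b)
    {
      // 1st pass: run an (expensive) simulation once and cache the result in the context.
      context["myPrecomputedData"] = new double[] { 1.0, 2.0, 3.0 };
      return;                                        // no scene defined yet
    }

    // 2nd pass: reuse the cached data while defining the scene.
    var data = (double[])context["myPrecomputedData"];
    Console.WriteLine($"defining scene with {data.Length} precomputed values");
  }

  static void Main ()
  {
    var context = new Dictionary<string, object>();  // the SAME dictionary for both passes

    context[CTX_PREPROCESSING] = true;
    RunScript(context);                              // preprocessing pass

    context[CTX_PREPROCESSING] = false;
    RunScript(context);                              // scene-definition pass
  }
}
```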

Changing the shade() function

If you need to change the behavior of the RayTracing.shade() function, there is a "heavy-weight" solution – create a new descendant of the RayTracing class and override the virtual shade() function. This solution will be described later.

Or you can use our recently added flexibility: you can plug in your own function defining the behavior of recursive ray-tracing. This basically means that after the standard procedures – finding the nearest intersection, processing attributes and textures – your own logic is used instead of the default Whitted-style ray-tracing (shadow rays, light contributions, and recursion for reflected and refracted rays).

This is important: if you plug in your own function, neither shadows nor light contributions will be computed – and of course no reflected or refracted rays either.

You can look at an example in the CylindersGlow.cs scene. There is a lambda function:

RecursionFunction del = (Intersection i, Vector3d dir, double importance, out RayRecursion rr) =>
{
  // "Glow" intensity derived from the u texture coordinate.
  double direct = 1.0 - i.TextureCoord.X;
  direct = Math.Pow(direct * direct, 8.0);

  rr = new RayRecursion(
    new double[] {direct, direct, 0.0},                    // direct (yellow) contribution
    new RayRecursion.RayContribution(i, dir, importance)); // one ray continuing in the same direction

  return 144L;  // arbitrary value used for adaptive subsampling (see below)
};
The function must meet the requirements of the RecursionFunction delegate type:
  • return value long – you can return anything (this is used for adaptive subsampling, so please return different values for different recursion setups)
  • Intersection i – processed intersection is passed to you, the most useful members are: CoordWorld, TextureCoord, Normal, SurfaceColor, Material, Textures (list of textures in order of application), Solid.
  • Vector3d dir – ray direction
  • double importance – ray importance (before the hit). It is a good idea to reduce it a little before passing it on to recursive rays
  • out RayRecursion rr – result = total contribution of this intersection

RayRecursion is a class defining the contribution of this ray/intersection. There is a direct component (the color array double[] DirectContribution) – an unconditional contribution to the result, regardless of light sources or other rays. It can be used for effects such as "glow", etc.
And then there is an arbitrary array of recursive rays (RayContribution); each ray has an optional multiplier (double[] coefficient) and ray parameters (origin p0 and direction p1).

The ray-tracing framework simply adds the (optional) direct contribution and, for each item in the (optional) array of recursive rays, calls another instance of the shade() function. The results are combined using the (optional) multipliers.
If you need to process light sources, you have to copy the appropriate part of the original shade() function into your plug-in function. The same applies to modifications of the original Whitted system (reflection + refraction processing).
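The combination rule can be illustrated with plain arrays (a simplified model with invented names, not the actual GrCis code): the result is the direct contribution plus, for every recursive ray, its multiplier times the color returned by the recursive shade() call.

```csharp
using System;
using System.Globalization;

class RayRecursionCombineDemo
{
  // Simplified model of combining a RayRecursion result (illustration only):
  // result = direct + sum over rays of (coefficient * recursively shaded color).
  static double[] Combine (double[] direct, double[][] coefficients, double[][] shadedColors)
  {
    double[] result = (double[])direct.Clone();       // DirectContribution, added unconditionally
    for (int r = 0; r < shadedColors.Length; r++)     // one entry per RayContribution
      for (int b = 0; b < result.Length; b++)
        result[b] += coefficients[r][b] * shadedColors[r][b];
    return result;
  }

  static void Main ()
  {
    double[] direct = { 0.2, 0.2, 0.0 };                    // e.g. a yellow "glow"
    double[][] coef   = { new double[] { 0.5, 0.5, 0.5 } }; // one recursive ray, 50% weight
    double[][] shaded = { new double[] { 0.4, 0.6, 0.8 } }; // color returned by its shade() call

    double[] c = Combine(direct, coef, shaded);
    Console.WriteLine(string.Join(" ",
      Array.ConvertAll(c, v => v.ToString("F2", CultureInfo.InvariantCulture))));
  }
}
```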

How to plug in your own recursion function: attach it as the PropertyName.RECURSION attribute to any part of your scene. See the examples in CylindersGlow.cs.

One more modification: you can skip shadow computation for solids equipped with the PropertyName.NO_SHADOW attribute. It must be set locally in every scene node which should be excluded from the light-obstacle mechanism (i.e. this attribute is not inherited).

Replacing the whole IImageFunction

How do you specify a whole alternative IImageFunction to use instead of the default one? Just set the context[PropertyName.CTX_ALGORITHM] property in your scene-definition script. Examples can be seen in some default scene scripts; your code could look like:

  context[PropertyName.CTX_ALGORITHM] = new MyExtendedRayTracing();

Animation script vs. scene script

They are basically the same; the only difference is that in the animation application (062animation-script) the animation-related properties (PropertyName.CTX_START_ANIM, ...) are actually used.

If a single frame is computed (in the 062 application), you can use the presence of the PropertyName.CTX_TIME attribute as an indication of that. For instance, you need not compute the whole simulation in the preprocessing phase...

Lab plan

Lab plan for the 2020/2021 summer semester:

#  Date         Video   Content
1  19. 3. 2021  video   Introduction, GrCis, ray-tracing in GrCis, Visual Studio, Subversion & Git repos,
                        ray-tracing demo (048rtmontecarlo-script), time measurements, ...
2  19. 3. 2021  video   Demonstration of the Shading Interpolation
x               video   RT architecture survey: the complete pgr-99-grcis-rt slideshow video
3  coming soon          Scene-definition options (program vs. CS-script, built-in vs. new modules...)
4  coming soon          New lab credit: TBD
5  coming soon          Details of the most popular extension options (animated cameras, a system for
                        accelerated triangle meshes, or the RayRecursion extension for general
                        "beyond Whitted" ray-tracing...)
6  coming soon          Advanced CS-script for animations/simulations (pre-processing stage,
                        shared vs. specific objects...)
7  coming soon          Making a video using ray-tracing (animation script, frame-sequence rendering,
                        video encoding)
x  no deadline          Presentation of animations

Extension ideas

These will be discussed in one of the video-labs; some of them have inspirational web pages based on former students' tasks. Please ignore deadlines, points and other formal stuff – it won't apply in this school year. Think of these topics as inspiration rather than specific assignments.

  • Animated camera controlled by a text script – inspiration in old tasks RT scene animation, Animated real-time camera. Consider Catmull-Rom splines for trajectory interpolation (see slides 38 to 42).
  • Acceleration technique over triangles (might be integrated with a new solid/shape..?) – Efficient sphereflake for RT, Efficient parametric surface for RT. Consider general acceleration techniques based on Bounding Volume Hierarchies; you could put the topmost-level bounding boxes in our scene nodes.
  • Interesting new surface appearance – Orange peel or other bumpy surfaces, furry-looking surfaces, etc.
  • Glowing material (à la lightsaber) – object contributing light to the scene but only if hit by a ray
  • Particle system simulation (blast, dissolve, fireworks...) – see wiki page for links
  • Ocean waves or water simulation – something not too complicated, like the method in this article
  • Background color functions – skydome, sky with clouds, "holodeck" from Star Trek (see interface IBackground)
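For the animated-camera idea above, a uniform Catmull-Rom segment evaluator is small enough to sketch here. This is the generic textbook formula applied per coordinate, not GrCis code; the control values are arbitrary examples.

```csharp
using System;

class CatmullRomDemo
{
  // Uniform Catmull-Rom spline segment: interpolates between p1 and p2
  // for t in [0, 1], using p0 and p3 as tangent helpers.
  // For a 3D camera trajectory, apply it to x, y and z separately.
  static double CatmullRom (double p0, double p1, double p2, double p3, double t)
  {
    return 0.5 * (2.0 * p1
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t);
  }

  static void Main ()
  {
    // Four sample control values of one camera coordinate.
    double p0 = 0.0, p1 = 1.0, p2 = 3.0, p3 = 4.0;

    Console.WriteLine(CatmullRom(p0, p1, p2, p3, 0.0));  // segment start = p1
    Console.WriteLine(CatmullRom(p0, p1, p2, p3, 1.0));  // segment end = p2
  }
}
```

The spline passes exactly through the control points (unlike a Bézier curve), which is why it is convenient for camera key-framing.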

Tasks from the past (some of them are in Czech)

022. 360°/180° equirectangular camera

Ray generator (ICamera) for a panoramic 360°/180° equirectangular camera, a demo scene, uploading the panorama to FB/Google.

021. Layered reflectance model Weidlich-Wilkie

Implementation of the Weidlich-Wilkie reflectance model. Include material definition class and a demo scene.

062. RT animation with panoramic camera

Animated panoramic camera, optional RT scene animation, optional animation script format. Making a video file, uploading to YouTube...

089. Efficient sphereflake for RT

A fractal object called "sphereflake"; speeding up the intersection computation.

097. Orange peel

Noise function used for orange peel imitation (bump-texture).

119. Hashing for noise generators

Hash functions are used in deterministic noise function generators. System for hash function testing (visualization in 2D).

022. Alternative camera

Ray generator for ray-based rendering methods. Implementation of a new camera (panoramic, fish-eye, ..) + demo scene.

061. RT scene design

Design of an interesting 3D scene for Ray-tracing using the CSharpScript format.

062. RT scene animation

RT scene animation, camera animation is possible (perhaps the one from 022) + demo animation script. Making a video file, uploading to YouTube...

063. Flame animation

Procedural 2D flame animation. Implementation of your own (deterministic) noise function.

072. 3D noise-based texture

Continuous 3D noise function + some 3D texture based on it (wood, marble, ...).

079. Implicit surface for RT

Implicit function-defined surface for ray-tracing + demo scene.

088. Depth of field camera

Depth of field simulation (real camera lens with true aperture) using distributed ray-tracing technique.

065. Efficient parametric surface for RT

General parametric surface (defined symbolically) for ray-tracing. Suitable acceleration technique.

Credit requirements

Students have to earn a total of at least 50 points; the upper limit is 80 points (at least 16 points have to come from 3D-graphics assignments).
Deadline for the winter-term credit: 28. 2. 2020!
Deadline for the summer-term credit: 30. 6. 2020!

Credit points will be added to the examination result (max 100 points) and the final grade will be determined using the following grading table:
150+ points            A (excellent)
130 to 149 points      B (very good)
110 to 129 points      C (good)
less than 110 points   F (fail)

How to earn credit points

A. assignments running through the whole term (implemented in C#)

Assignments are given in the labs (practicals), one assignment will be given in each lab. Students will have at least two weeks to deliver a solution. Number of points awarded for a solution depends on the difficulty, solution quality, robustness, and elegance. Additional bonus points can be given, e.g. in case of a contest.

B. transfer from previous year

90% of last year's points can be transferred from the previous year (upon a student's explicit request via email).

Contests

Some assignments have a well-defined quantitative criterion for solution comparison, based on which one can assemble a chart of best solutions. Such tasks will be marked as "CONTEST" and there will be a public chart displaying the best achieved results. In case the assignment is handed in by at least 10 students/teams, the three best solutions will obtain a premium of 10, 6, and 3 pts, respectively. (In case the solution was done in a team, all its members get the premium.) Only solutions submitted before the deadline are allowed to enter the contest!

Rules

Since grading of the assignments is very time-consuming, you are kindly requested to follow several simple rules:
  • Mail messages have to be sent from an e-mail identity where your civil first name and surname can be easily determined.
    (Use something like "Malcolm Reynolds" <captain.mal@gmail.com>.)
  • One message = one solution. If you want to ask an additional question or send a complaint, use a different email message!
  • Message subject has to contain the class code (NPGR003) and assignment ID number (a three-digit decimal number). Everything else is optional.
  • You absolutely must follow rules for source-file naming. There is a clear definition on each assignment's web page. Don't forget to write your name into a comment on the first line of the source file!
  • Do not send any unwanted extra files (there are exceptions but they are quite rare)
  • Never send huge mail attachments! If you want to send an image (demonstration of a good/bad case, input picture, etc.), use low resolution, or send a URL to, say, gyazo. Large files (e.g. videos) should be uploaded to a file-sharing site (Dropbox, Google Drive, OneDrive, etc.) and the file URL sent in the email instead.
  • TEST functionality of your code. When grading, the code could be run in conditions slightly different from the obvious ones. If it fails, points go down. So make sure to produce robust code! Handle all (even weird) input conditions well.
  • Sometimes we will include special comments into source files to mark places where your code/changes should go. Please don't feel restricted by those comments. You can insert any variable/support class/data structure as long as it stays in the source file declared for consignment. Don't add more source files to the task project - they will be disregarded during the grading (and therefore you will not receive any points, because the program will probably not compile).

Submission of assignment solutions

Simply attach the requested source code to a mail message. Uncompressed, unencrypted; follow the rules declared earlier (message subject, source-file naming)!

Penalty for late submission

There is a penalty of 1 point for every day of delay after an assignment deadline.

Lab schedule

Tue 12:20, SW1 (ground floor, Rotunda). Lockdown option: there is no lab at the time declared in SIS. Video recordings of some labs (in Czech) will be available on the Slides page. Video-consulting hours are scheduled after each exam (see the actual exam dates in SIS; connect to the video-meeting after the last student) via MS Teams. You can also ask me by e-mail for a private video consultation.


Copyright (C) 2000-2021 J.Pelikán, last change: 2021-06-10 00:49:19 +0200 (Thu, 10 Jun 2021)