This page is dedicated to information about the lab practices for Realtime graphics on GPU (NPGR019). For information about the lectures, please visit the lecturer's page. Last year's labs can be browsed here.
Please use my personal e-mail address for all communication: {name}.{surname}.88{at}gmail{dot}com, where name and surname refer to my first and last name. Be considerate: fill in your name properly in your e-mail client and use a meaningful subject, ideally containing the code of the lecture (NPGR019). All of this is so I don't need to pair suspicious-looking nicknames to your real names in SIS and/or fish your messages out of the spam folder. I'm usually responsive, so if I don't reply within a day or two, you've probably triggered Gmail's spam filter.
In order to pass the lab practices, you need to pick 2 assignments from the pool below,
implement them, and submit them by e-mail by 31. 7. 2021 (deadline extended from 6. 6. 2021).
Use separate messages for each project and use a meaningful subject like "NPGR019 - {name of the project}".
I will send you an e-mail confirming delivery, followed by any further questions and the result,
or I may ask you for further improvements or fixes.
Important: Labs will be held online via YouTube videos and online consultations via the appropriate channel on our Discord server. Consultations will be held at the times of the scheduled practices, i.e., 12:20–13:50 on Wednesdays and Thursdays (optionally).
3. 3. & 4. 3. 2021
Will be held online so we get to know each other.
9. 3. 2021 - YouTube video
Gentle introduction to a first OpenGL program.
16. 3. 2021 - YouTube video
Overview of the assignment topics. I used some presentations from the NPGR033 course.
23. 3. 2021 - YouTube video
Deep dive into vertex buffers and an introduction to a first 3D scene. Updated the sources with a better naming convention for the camera transformation matrix. I also put the sources into a GitHub repository (also linked above in the general information). I mentioned some topics that are further covered in the links below; I'll be covering the camera and depth buffer next time:
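To illustrate the interleaved vertex buffer idea, here is a minimal sketch (my own hypothetical layout, not taken from the lab sources): the stride passed to glVertexAttribPointer is simply sizeof of the vertex struct, and the per-attribute offsets come from offsetof.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical interleaved vertex layout: one buffer holds position, normal
// and UV for every vertex; the stride is sizeof(Vertex) and each attribute's
// byte offset is obtained with offsetof.
struct Vertex
{
    float position[3];
    float normal[3];
    float uv[2];
};

// The matching (illustrative) attribute setup would then look like:
//   glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (const void*)offsetof(Vertex, position));
//   glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (const void*)offsetof(Vertex, normal));
//   glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
//                         (const void*)offsetof(Vertex, uv));
```

Keeping the offsets tied to the struct definition avoids hardcoded byte counts getting out of sync when the layout changes.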
2. 4. 2021 - YouTube video
Camera transformation and moving around in the 3D scene, plus a quick introduction to MSAA. 02-3D-Scene controls:
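As a small illustrative sketch (conventions are my own assumption, not from the lab sources), a free-look camera's forward vector can be built from yaw/pitch angles before it feeds the view (lookAt) matrix:

```cpp
#include <cassert>
#include <cmath>

// Free-look camera direction from yaw/pitch angles in radians.
// Convention assumed here: yaw 0 looks down the negative Z axis (the usual
// OpenGL default view direction) and pitch rotates the view up/down.
void cameraForward(float yaw, float pitch, float out[3])
{
    out[0] =  sinf(yaw) * cosf(pitch);
    out[1] =  sinf(pitch);
    out[2] = -cosf(yaw) * cosf(pitch);
}
```

The result is always unit length, so it can be used directly as the view direction when assembling the camera matrix.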
5. 4. 2021 - YouTube video
Camera perspective projection. Working with framebuffers. Depth buffer: reading its contents, storing linear Z, and a comparison of the two. The OpenGL clip control extension (core since 4.5) can be used to remap the depth range from [-1, 1] to [0, 1] to improve depth buffer precision. Other resources:
03-DepthBuffer controls (new or different controls, rest is the same as above):
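To make the linear-Z comparison above concrete, here is a minimal sketch (my own helper, not from the lab sources) of reconstructing view-space depth from a value read back from a standard OpenGL depth buffer with the default [-1, 1] clip-space depth range:

```cpp
#include <cassert>
#include <cmath>

// Reconstruct linear (view-space) depth from a depth buffer sample in [0, 1],
// assuming a standard OpenGL perspective projection with clip-space depth in
// [-1, 1] (i.e., without the glClipControl remap) and clip planes near/far.
float linearizeDepth(float depthSample, float near, float far)
{
    float ndcZ = depthSample * 2.0f - 1.0f; // window [0,1] -> NDC [-1,1]
    return (2.0f * near * far) / (far + near - ndcZ * (far - near));
}
```

A sample of 0 maps back to the near plane and 1 to the far plane; everything in between is distributed hyperbolically, which is exactly the precision issue the clip control remap helps with.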
13. 4. 2021 - YouTube video
Explaining index buffers and their usage. Textures: creation and sampling. Quick introduction to the RenderDoc graphics debugger. 04-Texturing controls (new or different controls, rest is the same as above):
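The point of index buffers is vertex reuse: neighbouring triangles reference the same vertices instead of duplicating them. A small sketch (hypothetical helper of mine, not from the lab sources) that builds the index buffer for a grid of quads:

```cpp
#include <cassert>
#include <vector>

// Index buffer for a cols x rows grid of quads over a shared vertex grid of
// (cols+1) x (rows+1) vertices. Each quad becomes two triangles that reuse
// the grid vertices instead of storing them again.
std::vector<unsigned> gridIndices(int cols, int rows)
{
    std::vector<unsigned> indices;
    indices.reserve(cols * rows * 6);
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < cols; ++x)
        {
            unsigned i0 = y * (cols + 1) + x; // top-left vertex of the quad
            unsigned i1 = i0 + 1;             // top-right
            unsigned i2 = i0 + (cols + 1);    // bottom-left
            unsigned i3 = i2 + 1;             // bottom-right
            indices.insert(indices.end(), { i0, i2, i1, i1, i2, i3 });
        }
    return indices;
}
```

For a single quad this yields 6 indices over just 4 vertices; the saving grows with the grid size.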
27. 4. 2021 - YouTube video
Explaining texture space addressing modes. Talking about various ways to do geometry instancing. Explaining Uniform Buffer Objects. 05-Instancing controls (new or different controls, rest is the same as above):
4. 5. 2021 - YouTube video 5. 5. 2021 - YouTube video
Finished instancing using SSBOs. Remaining comments on texturing: avoid sampling textures inside (warp-)divergent branches, because the screen-space derivatives are undefined there. Introduction to basic and advanced lighting with normal mapping. Additionally, I had several remarks about HDR tonemapping and gamma-corrected color spaces (e.g., sRGB). Additional info:
06-Shading specific controls:
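To illustrate the tonemapping and gamma remarks above, here is a minimal sketch (standard formulas, wrapped in helpers of my own naming): the simple Reinhard operator compresses HDR values into [0, 1), and the exact sRGB transfer function is the "gamma correction" that should be applied exactly once, at the very end of the pipeline.

```cpp
#include <cassert>
#include <cmath>

// Simple Reinhard tonemapping operator: maps HDR luminance [0, inf) to [0, 1).
float reinhard(float hdr)
{
    return hdr / (1.0f + hdr);
}

// Exact linear -> sRGB transfer function (piecewise: linear segment near
// black, power curve elsewhere), per the sRGB standard.
float linearToSrgb(float c)
{
    return (c <= 0.0031308f) ? 12.92f * c
                             : 1.055f * powf(c, 1.0f / 2.4f) - 0.055f;
}
```

Note the order matters: tonemap in linear space first, then encode to sRGB for display (or let a GL_SRGB framebuffer do the encoding for you).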
13. 5. 2021 - YouTube video
Talking about a multipass forward renderer with stencil (volume) shadows using the stencil buffer and a geometry shader. Additional info:
07-ShadowVolumes specific controls:
21. 5. 2021 - YouTube video 24. 5. 2021 - YouTube video
Finishing up geometry shaders from last time, and a gentle introduction to compute shaders. Second part: compute shaders in detail on the flocking simulation demo, plus the same program written using CUDA. Additional info:
08-Flocking (OpenGL 4.6!) specific controls:
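For reference, the flocking update the compute shader performs can be sketched on the CPU. This is a deliberately simplified version of my own (cohesion and alignment only, no separation rule or neighbourhood radius, unlike a full boids simulation); the GPU demo stores the boid array in an SSBO and runs the same kind of step per thread.

```cpp
#include <cassert>
#include <vector>

struct Boid { float px, py, vx, vy; };

// One simplified flocking step: every boid steers toward the flock centre
// (cohesion) and toward the average velocity (alignment), then integrates.
// Separation is intentionally omitted to keep the sketch short.
void flockStep(std::vector<Boid>& boids, float dt)
{
    float cx = 0, cy = 0, avx = 0, avy = 0;
    for (const Boid& b : boids) { cx += b.px; cy += b.py; avx += b.vx; avy += b.vy; }
    cx /= boids.size(); cy /= boids.size();
    avx /= boids.size(); avy /= boids.size();

    const float cohesion = 0.5f, alignment = 0.3f; // arbitrary demo weights
    for (Boid& b : boids)
    {
        b.vx += (cx - b.px) * cohesion * dt + (avx - b.vx) * alignment * dt;
        b.vy += (cy - b.py) * cohesion * dt + (avy - b.vy) * alignment * dt;
        b.px += b.vx * dt;
        b.py += b.vy * dt;
    }
}
```

On the GPU each invocation handles one boid, which is why the flock averages are typically computed in a separate pass or via shared memory reductions.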
21. 5. 2021 - YouTube video
Deferred rendering approach using light volumes. 09-Deferred specific controls:
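A key ingredient of the light-volume approach is bounding each point light by the distance at which its contribution becomes negligible. A small sketch (helper and cutoff value are my own assumptions) that solves the usual constant/linear/quadratic attenuation model for that radius:

```cpp
#include <cassert>
#include <cmath>

// Radius at which a point light with attenuation 1 / (kc + kl*d + kq*d^2)
// drops below 'cutoff' of its intensity, i.e., the largest positive root of
//   kq*d^2 + kl*d + (kc - intensity/cutoff) = 0.
// A common cutoff choice is 1/256 (below one 8-bit step).
float lightVolumeRadius(float intensity, float kc, float kl, float kq, float cutoff)
{
    float c = kc - intensity / cutoff;
    return (-kl + sqrtf(kl * kl - 4.0f * kq * c)) / (2.0f * kq);
}
```

The deferred pass then rasterizes a sphere (or other proxy geometry) of this radius per light, so shading cost is paid only for pixels the light can actually reach.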
You should implement 2 of the following assignments. Alternatively, you can implement one with all its bonuses (if present), as that counts as 2 assignments. OpenGL and C/C++ are preferred and strongly recommended for the implementation. The ideal way is to take the code you'll already have from the lab practices and adapt it to the chosen assignment.
Ideally, provide me with an easy-to-compile-and-run solution that doesn't need too much overhead to get up and running. You can use whatever resources you'd like (textures, models, model-loading libraries, etc.), but I suggest you keep the assignment as simple as possible and focus only on the task at hand. All of the required geometry and textures for the assignments can usually be hardcoded in the program, e.g., use simple planes, cubes, cylinders, and spheres, just as you see in my examples. Provide an interactive camera in all submissions.
The source code must be well commented so I can understand what you are trying to achieve and can see that you understand your code. The handed-in assignments should ideally compile and run on a Windows machine under MSVS 2017; I'll be using that as my primary testing machine. Linux or other platforms are possible after discussion, but in all cases I expect your solution to compile and run without any unreasonable effort (see above).
Don't forget to bundle all external resources and/or .dll files needed to run. Make sure you clean the project before packing so you don't send me compiler-generated object files, MSVS debug symbols, IntelliSense databases, etc. For reference, all of my example programs shown during the labs last year were under 3 MB packed in a .zip file, including textures; i.e., if you're sending me something grossly exceeding 100 MB, then something is wrong. Finally, before sending your submission, please make sure that the program can be compiled and run on a different PC (or at least from a different folder on your PC; hardcoded paths to your user's documents folder are a common problem). If the file size exceeds 10 MB, please upload the solution somewhere (Google Drive, for instance) and send me a link instead.
Implement a simple scene that uses a directional light to cast shadows. A common extension of the basic shadow map used to fight perspective aliasing is called Cascaded Shadow Maps, where we render the scene into several shadow maps based on the distance from the camera. The scene you create should be sufficiently large, e.g., a long alley of poles, and should provide varied geometry to assess how you fought common shadow map artifacts like shadow acne and Peter Panning. At the very least, include some spheres and cylinders or other curved surfaces.
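One concrete sub-problem of Cascaded Shadow Maps is choosing the cascade split distances. A common approach (the "practical split scheme", blending uniform and logarithmic splits) can be sketched as follows; the helper name and the lambda parameter are my own labels:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Cascade split distances for CSM: blends a uniform split (lambda = 0) with a
// logarithmic split (lambda = 1). Returns cascades+1 distances from near to
// far; cascade i covers [splits[i], splits[i+1]].
std::vector<float> csmSplits(int cascades, float near, float far, float lambda)
{
    std::vector<float> splits(cascades + 1);
    for (int i = 0; i <= cascades; ++i)
    {
        float t    = float(i) / float(cascades);
        float logD = near * powf(far / near, t);   // logarithmic distribution
        float uniD = near + (far - near) * t;      // uniform distribution
        splits[i]  = lambda * logD + (1.0f - lambda) * uniD;
    }
    return splits;
}
```

Logarithmic splits concentrate shadow map resolution near the camera, which is exactly what fights the perspective aliasing mentioned above; lambda around 0.5 is a frequently used compromise.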
Screen Space Ambient Occlusion is a very popular method for approximating the decrease of light intensity in corners and crevices. Such an effect is normally produced by global illumination, which is, however, impractical for real-time rendering, hence the approximation. The easiest and most straightforward way of doing this is to take the scene depth buffer and calculate the depth difference of each texel against some average around the sampled texel.
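The depth-difference idea can be sketched as a pure CPU function (a crude illustration of mine, not a production SSAO; real implementations sample in a hemisphere around the view-space position and blur the result):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Crude SSAO estimate for one texel: average how much nearer the surrounding
// depth samples are; the deeper the centre sits relative to its
// neighbourhood, the more occluded it is. Returns 1 = fully lit, 0 = dark.
// 'depth' is a w x h depth image, larger values = farther from the camera.
float ambientOcclusion(const std::vector<float>& depth, int w, int h,
                       int x, int y, int radius)
{
    float center = depth[y * w + x];
    float occ = 0.0f;
    int n = 0;
    for (int dy = -radius; dy <= radius; ++dy)
        for (int dx = -radius; dx <= radius; ++dx)
        {
            int sx = x + dx, sy = y + dy;
            if (sx < 0 || sy < 0 || sx >= w || sy >= h || (dx == 0 && dy == 0))
                continue;
            float diff = center - depth[sy * w + sx]; // neighbour nearer -> > 0
            if (diff > 0.0f)
                occ += diff;
            ++n;
        }
    return 1.0f - std::min(1.0f, occ / float(n));
}
```

A flat depth region yields no occlusion, while a texel at the bottom of a crevice (deeper than all its neighbours) gets darkened, which is the behaviour described above.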
Create a simple scene with a pool of water and implement a water surface shader with the properties summarized below. The normal map can either be calculated from the surface displacement or supplied as an animated texture to the fragment shader. For inspiration, look at how GTA V handled this: scroll down to reflections, where there's an explanation of what is needed for a really nice water effect.
Parallax mapping is a fairly popular method for the visual enhancement of rendered surfaces. Create a scene that contains surfaces using this technique, e.g., cobblestones, pebbles, a brick wall, etc.
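The core of basic parallax mapping is a single UV offset along the tangent-space view direction, scaled by the sampled height. A minimal sketch (my own helper; real shaders do this per fragment in GLSL, often with the steep/occlusion-mapping refinements):

```cpp
#include <cassert>

// Basic parallax mapping offset: shift the texture coordinates along the
// tangent-space view direction in proportion to the sampled height.
// (vx, vy, vz) is the normalized tangent-space view vector with vz > 0
// (viewer above the surface); 'scale' controls the strength of the effect.
void parallaxOffset(float u, float v, float height, float scale,
                    float vx, float vy, float vz,
                    float& outU, float& outV)
{
    outU = u - vx / vz * height * scale;
    outV = v - vy / vz * height * scale;
}
```

Looking straight down (vx = vy = 0) leaves the UVs unchanged, while grazing angles (small vz) shift them further, which is what creates the depth illusion.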
Implement a simple volumetric data visualization using basic methods like maximum intensity projection, average intensity projection, summed intensity projection, and transfer functions. An example data set with a description of the data format is provided.
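To show what the simplest of these methods boils down to, here is a maximum intensity projection sketch (my own toy version: orthographic rays marching straight along +Z through a dense voxel grid; a real visualizer would ray-cast through the volume texture in a shader):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Maximum intensity projection for one output texel (x, y): an orthographic
// ray along +Z keeps the largest density it encounters. The volume is stored
// densely as vol[(z * ny + y) * nx + x].
float mip(const std::vector<float>& vol, int nx, int ny, int nz, int x, int y)
{
    float m = 0.0f;
    for (int z = 0; z < nz; ++z)
        m = std::max(m, vol[(z * ny + y) * nx + x]);
    return m;
}
```

Average and summed intensity projections follow the same loop with the max replaced by an accumulation, and transfer functions map the resulting (or per-sample) densities to color and opacity.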
Employ tessellation shaders to implement displacement mapping, i.e., take a low-polygon surface with details stored in a texture and use it to create a more geometrically complex rendered model. To get an idea, have a look at this article. You can either use a heightmap texture or generate one in the program using procedural noise.
Copyright (C) 2015-2021 Martin Kahoun