Realtime graphics on GPU - labs summer semester 2020/2021

This page is dedicated to information about the lab practices for Realtime graphics on GPU (NPGR019). For information about the lectures, please visit the lecturer's page. Last year's labs can be found here.

Please use my personal e-mail address for all communication: {name}.{surname}.88{at}gmail{dot}com, where name and surname refer to my first and last name. Be considerate: fill in your name properly in your e-mail client and use a meaningful subject, ideally containing the course code (NPGR019). All of this is so I don't need to pair suspicious-looking nicknames to your real names in SIS and/or fish your messages out of the spam folder. I'm usually responsive, so if I don't reply within a day or two, you've probably triggered Gmail's spam filter.

In order to pass the lab practices, you need to pick 2 assignments from the pool below, implement them, and submit them by e-mail by 31. 7. 2021 (extended from the original 6. 6. 2021). Use a separate message for each project with a meaningful subject like "NPGR019 - {name of the project}". I will send you an e-mail confirming delivery, followed by further questions (if any) and the result, or I may ask you for further improvements or fixes.

Lab practices overview

Important: Labs will be held in an online form via YouTube videos, with online consultations via the appropriate channel on our Discord server. Consultations will be held at the times of the scheduled practices, i.e., 12:20-13:50 on Wednesdays and Thursdays (optional).

Introduction

3. 3. & 4. 3. 2021

Will be held online so we get to know each other.


  • Introduction, formalities
  • Assignments

Hello triangle

9. 3. 2021 - YouTube video

Gentle introduction to a first OpenGL program.
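For reference, a minimal sketch of what the core of such a program can look like (GLFW window/context creation and GLAD loading omitted; identifier names are illustrative, not taken from the lab sources):

    // GLSL 330 shaders: the triangle is generated from gl_VertexID,
    // so no vertex buffers are needed yet (they come in later labs).
    const char* vsSrc = R"(#version 330 core
    void main() {
      const vec2 verts[3] = vec2[3](vec2(-0.5, -0.5), vec2(0.5, -0.5), vec2(0.0, 0.5));
      gl_Position = vec4(verts[gl_VertexID], 0.0, 1.0);
    })";
    const char* fsSrc = R"(#version 330 core
    out vec4 color;
    void main() { color = vec4(1.0, 0.5, 0.2, 1.0); })";

    // Compile and link (error checking omitted for brevity).
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vsSrc, nullptr);
    glCompileShader(vs);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSrc, nullptr);
    glCompileShader(fs);
    GLuint program = glCreateProgram();
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);

    // The core profile still requires a bound VAO, even with no buffers.
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    // Per frame: clear and draw the three vertices.
    glClear(GL_COLOR_BUFFER_BIT);
    glUseProgram(program);
    glDrawArrays(GL_TRIANGLES, 0, 3);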



Assignments overview, buffers introduction

16. 3. 2021 - YouTube video

Overview of the assignment topics. I used some presentations from the NPGR033 course.


  • Updated source code for 01-Introduction and 02-3dScene programs - both should run with OpenGL 3.3 and GLSL 330
  • Hello triangle with buffers (a buffer-creation sketch follows below)
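A minimal sketch of the buffer part, i.e., uploading the triangle's vertices into a vertex buffer object (names and data are illustrative):

    // Three 2D positions for the triangle.
    const float vertices[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f, 0.5f };

    GLuint vbo;
    glGenBuffers(1, &vbo);                // create the buffer object
    glBindBuffer(GL_ARRAY_BUFFER, vbo);   // bind it as a vertex buffer
    // Upload the data; GL_STATIC_DRAW hints it won't change per frame.
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);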

Vertex buffers in-depth, introduction to a 3D scene

23. 3. 2021 - YouTube video

Deep dive into vertex buffers and an introduction to the first 3D scene. Updated the sources with a better naming convention for the camera transformation matrix. I also put the sources into a GitHub repository (also linked above in the general information). I mentioned some topics that are further covered in the links below; I'll be covering the camera and depth buffer next time:
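To illustrate the in-depth part, a sketch of how a vertex array object ties a buffer to the shader inputs (the interleaved position + color layout is illustrative, not necessarily the one used in the repository):

    // Interleaved vertex: vec3 position followed by vec3 color (6 floats).
    GLuint vao;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // Attribute 0: position - 3 floats, stride of one whole vertex, offset 0.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);
    // Attribute 1: color - 3 floats, offset past the position.
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float), (void*)(3 * sizeof(float)));
    glEnableVertexAttribArray(1);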


3D scene - moving around

2. 4. 2021 - YouTube video

Camera transformation and moving around in the 3D scene, plus a quick introduction to MSAA. A short view-matrix sketch follows the controls list below.
02-3D-Scene controls:


  • W, S, A, D, R, F - camera movement
  • Mouse RMB + move - camera orientation
  • Enter - reset camera transformation
  • F1 - Enable/Disable MSAA
  • F2 - Enable/Disable Wireframe
  • F3 - Enable/Disable Backface culling
  • F4 - Enable/Disable Depth testing
  • F5 - Enable/Disable Vsync
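A minimal sketch of building the camera (view) matrix with GLM; cameraPos and cameraFront are hypothetical state vectors updated from the controls above:

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Look from the camera position towards the point in front of it,
    // with the world Y axis as "up".
    glm::mat4 view = glm::lookAt(cameraPos,
                                 cameraPos + cameraFront,
                                 glm::vec3(0.0f, 1.0f, 0.0f));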

Projection, depth buffer

5. 4. 2021 - YouTube video

Camera perspective projection. Working with framebuffers. Depth buffer - reading its contents, storing linear Z, and a comparison of the two. The OpenGL clip control extension (core since 4.5) can be used to remap the depth range from [-1, 1] to [0, 1], which opens up ways to improve depth buffer precision (see the sketch after the controls list). Other resources:



03-DepthBuffer controls (new or different controls, rest is the same as above):


  • +/- - Zoom in/out
  • Backspace - FOV reset
  • F6 - Depth buffer visualization
  • 1 - Color visualization (default)
  • 2 - Non-linear depth buffer visualization
  • 3 - Linear depth visualization
  • 4 - Difference between depth buffer and linear depth
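Two pieces from this lab as sketches: the clip control call on the host (OpenGL 4.5+), and a GLSL helper that turns a depth-buffer sample back into linear view-space Z (valid for the default [-1, 1] clip range and a standard perspective projection; near and far are hypothetical plane distances):

    // Host side: remap clip-space Z from [-1, 1] to [0, 1].
    glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

    // GLSL: linearize a [0, 1] depth-buffer value (default clip range).
    float LinearizeDepth(float depth, float near, float far) {
      float ndc = depth * 2.0 - 1.0;  // window [0, 1] -> NDC [-1, 1]
      return (2.0 * near * far) / (far + near - ndc * (far - near));
    }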

Index buffers, textures

13. 4. 2021 - YouTube video

Explaining index buffers and their usage. Textures - creation and sampling. A quick introduction to the RenderDoc graphics debugger. An index buffer and texture sketch follows the controls list below.
04-Texturing controls (new or different controls, the rest is the same as above):


  • 1 - Nearest neighbour filtering (default)
  • 2 - Bilinear filtering
  • 3 - Trilinear filtering
  • 4 - Anisotropic filtering
  • 5 - Anisotropic filtering with clamp to edge addressing
  • 6 - Anisotropic filtering with mirrored repeat addressing
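A minimal sketch of index buffer usage (a quad built from four vertices and six indices) plus a texture created with trilinear filtering; w, h, and pixels stand in for hypothetical image data, e.g., loaded via STB Image:

    // Index buffer: two triangles sharing vertices 0 and 2.
    const GLuint indices[] = { 0, 1, 2,  2, 3, 0 };
    GLuint ebo;
    glGenBuffers(1, &ebo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);  // stored in the bound VAO
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, nullptr);

    // Texture with mipmaps and trilinear filtering (mode 3 above).
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);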

Instancing

27. 4. 2021 - YouTube video

Explaining texture space addressing modes. Talking about various ways to do geometry instancing. Explaining Uniform Buffer Objects (a sketch follows the controls list below).
05-Instancing controls (new or different controls, the rest is the same as above):


  • F6 - Enable/disable instancing
  • 1 - Draw 1 cube
  • 2 - Draw 125 cubes
  • 3 - Draw 1000 cubes
  • 4 - Draw 15 625 cubes
  • 5 - Draw 125 000 cubes
  • 6 - Draw 1 000 000 cubes
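A sketch of the UBO setup and the instanced draw; the Transforms block name and the binding point are illustrative:

    // Uniform buffer holding the projection and view matrices
    // (GLSL side: layout (std140) uniform Transforms { mat4 proj; mat4 view; };).
    GLuint ubo;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, 2 * sizeof(glm::mat4), nullptr, GL_DYNAMIC_DRAW);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
    // In GLSL 330 the block must be bound to its binding point explicitly.
    glUniformBlockBinding(program, glGetUniformBlockIndex(program, "Transforms"), 0);

    // One call draws all cube instances; the shader tells them apart
    // via gl_InstanceID.
    glDrawArraysInstanced(GL_TRIANGLES, 0, 36, numCubes);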

Instancing, shading

4. 5. 2021 - YouTube video
5. 5. 2021 - YouTube video

Finished instancing using SSBOs (a sketch follows the controls list below). Remaining comments on texturing: avoid sampling textures inside (warp-)divergent conditional branches, because the implicit derivatives are undefined there. Introduction to basic and advanced lighting with normal mapping. Additionally, I made several remarks about HDR tonemapping and gamma-corrected color spaces (i.e., sRGB).
Additional info:


06-Shading specific controls:

  • F6 - Enable/disable HDR rendering
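A GLSL sketch of the SSBO-based instancing mentioned above (requires GLSL 430+; the buffer layout and names are illustrative):

    #version 430 core
    // One model matrix per instance, bound on the host side via
    // glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo).
    layout (std430, binding = 0) buffer InstanceData {
      mat4 modelMatrices[];
    };
    layout (location = 0) in vec3 position;
    uniform mat4 viewProj;  // hypothetical combined view-projection matrix
    void main() {
      gl_Position = viewProj * modelMatrices[gl_InstanceID] * vec4(position, 1.0);
    }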

Shadow volumes using geometry shaders

13. 5. 2021 - YouTube video

Talking about a multipass forward renderer with stencil/volume shadows using the stencil buffer and a geometry shader (a stencil setup sketch follows the controls list below). Additional info:


07-ShadowVolumes specific controls:

  • F1 - Enable/disable MSAA
  • F2 - Enable/disable wireframe rendering
  • F3 - Enable/disable VSYNC
  • F4 - Enable/disable HDR rendering
  • F5 - Enable/disable light animation
  • F6 - Toggle between Z-pass/Z-fail algorithm
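A sketch of the stencil state for the Z-pass variant (Z-fail instead updates the stencil on depth-test failure, with the operations swapped between faces):

    // Shadow volume pass: write only the stencil buffer.
    glEnable(GL_STENCIL_TEST);
    glDepthMask(GL_FALSE);                                // keep depth intact
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // no color writes
    glDisable(GL_CULL_FACE);                              // need both faces
    glStencilFunc(GL_ALWAYS, 0, 0xFF);
    // Z-pass: front faces increment, back faces decrement on depth pass.
    glStencilOpSeparate(GL_FRONT, GL_KEEP, GL_KEEP, GL_INCR_WRAP);
    glStencilOpSeparate(GL_BACK, GL_KEEP, GL_KEEP, GL_DECR_WRAP);
    // ... draw the shadow volume geometry here ...

    // Lit pass: shade only fragments with a zero stencil value.
    glStencilFunc(GL_EQUAL, 0, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);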

Flocking simulation using compute shaders

21. 5. 2021 - YouTube video
24. 5. 2021 - YouTube video

Finishing up geometry shaders from last time and a gentle introduction to compute shaders. Second part: compute shaders in detail on the flocking simulation demo, plus the same program written using CUDA (a dispatch sketch follows the controls list below). Additional info:


08-Flocking (OpenGL 4.6!) specific controls:

  • F6 - Enable/disable turbo mode for boids
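A minimal compute shader sketch of the per-boid update pattern (the actual flocking rules are omitted; the buffer layout is illustrative):

    #version 460 core
    layout (local_size_x = 256) in;  // 256 boids per work group
    layout (std430, binding = 0) buffer Positions  { vec4 positions[]; };
    layout (std430, binding = 1) buffer Velocities { vec4 velocities[]; };
    uniform float dt;

    void main() {
      uint i = gl_GlobalInvocationID.x;
      // Integrate the position; a real flocking update would first
      // accumulate separation/alignment/cohesion forces from neighbours.
      positions[i].xyz += velocities[i].xyz * dt;
    }

On the host side such an update would be launched with glDispatchCompute((numBoids + 255) / 256, 1, 1), followed by glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT) before the buffers are consumed by rendering.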

Deferred rendering using light volumes

21. 5. 2021 - YouTube video

A deferred rendering approach using light volumes (a G-buffer setup sketch follows the controls list below).
09-Deferred specific controls:

  • F1 - Enable/disable VSYNC
  • F2 - Enable/disable light animation
  • 1 - Deferred lighting mode
  • 2 - Visualize linear Z buffer
  • 3 - Visualize world space normals
  • 4 - Visualize specularity G-Buffer
  • 5 - Visualize AO G-Buffer
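A sketch of a G-buffer with multiple render targets matching the visualizations above (attachment formats are illustrative; the demo's actual layout may differ):

    GLuint gBuffer, gNormalSpec;
    glGenFramebuffers(1, &gBuffer);
    glBindFramebuffer(GL_FRAMEBUFFER, gBuffer);

    // Attachment 0: world-space normals + specularity in one texture.
    glGenTextures(1, &gNormalSpec);
    glBindTexture(GL_TEXTURE_2D, gNormalSpec);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, gNormalSpec, 0);

    // Attachment 1 (albedo + AO) and a depth texture are created the same
    // way; the depth texture is later used to reconstruct positions.

    const GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, buffers);  // geometry pass writes both targets at once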

Useful resources


  • GLFW - multi-platform library abstracting the window creation and more.
  • SDL - multi-platform library allowing window creation, sound handling etc.
  • GLAD - OpenGL function loader generator.
  • GLM - OpenGL Mathematics library; a header-only C++ library mimicking GLSL.
  • Assimp - Open Source Asset Import Library.
  • STB Image - Open source, easy to use image loading library.

Semestral project assignments

You should implement 2 of the following assignments. Alternatively, you can implement one assignment together with all its bonuses (where present), which counts as 2 assignments. OpenGL and C/C++ are preferred and strongly recommended for the implementation. The ideal way is to take the code you'll already have from the lab practices and adapt it to the chosen assignment.

Ideally, provide me with an easy-to-compile-and-run solution that doesn't need too much overhead to get up and running. You can use whatever resources you'd like (textures, models, model loading libraries, etc.), but I suggest you keep the assignment as simple as possible and focus only on the task at hand. All of the required geometry and textures for the assignments can usually be hardcoded in the program, e.g., use simple planes, cubes, cylinders, and spheres - just as you see in my examples. Provide an interactive camera in all submissions.

The source code must be well commented so I can understand what you are trying to achieve and can see that you understand your code. The handed-in assignments should ideally compile and run on a Windows machine under MSVS 2017; I'll be using that as the primary testing machine. Linux or other platforms are possible after discussion, but in all cases I expect that your solution will compile and run without any unreasonable effort (see above).

Don't forget to bundle all the external resources and/or .dll files needed to run. Make sure you clean the project before packing so you don't send me compiler-generated object files, MSVS debug symbols, IntelliSense databases, etc. For reference, all of my example programs shown during last year's labs were under 3 MB packed in a .zip file, including textures; i.e., if you're sending me something grossly exceeding 100 MB, then something is wrong. Finally, before sending your submission, please make sure that the program can be compiled and run on a different PC (or at least from a different folder on your PC - hardcoded paths to your user's documents folder are a common problem). If the file size exceeds 10 MB, please upload the solution somewhere (Google Drive, for instance) and send me a link instead.


Cascaded shadow maps

Implement a simple scene that uses a directional light to cast shadows. A common extension of the basic shadow map used to fight perspective aliasing is called Cascaded Shadow Maps, where we render the scene into several shadow maps based on the distance from the camera. The scene you create should be sufficiently large, e.g., a long alley of poles, and should provide varied geometry so that I can assess how you dealt with common shadow map artifacts like shadow acne and Peter Panning. At the very least, it should contain some spheres, cylinders, or other curved surfaces.

Requirements

  • Render 4 shadow cascades
  • Sample them based on Z (you can use PSSM, for example)
  • Solve common artifacts, i.e., shadow acne, Peter panning, and shimmering
  • Filter the result, e.g., using Percentage Closer Filtering or Poisson disk sampling (a PCF sketch follows the bonus list below)

Bonus

  • Use geometry shader to render all shadow maps in one pass
  • Implement receiver (also called adaptive) plane depth bias
  • Combine all cascades into screen space shadow map and use that to apply shadows to the scene
  • Alternative: Implement Variance Shadow Maps (as cascades, of course. Basic requirements still apply, bonuses replaced with VSM)
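For the filtering requirement, a minimal 3x3 PCF sketch in GLSL; it assumes a depth-comparison sampler (GL_TEXTURE_COMPARE_MODE enabled), a shadow-space coordinate with the reference depth in z, and a hypothetical texelSize of the shadow map. Cascade selection would happen before this:

    // Average the depth-comparison results over a 3x3 neighbourhood.
    float ShadowPCF(sampler2DShadow shadowMap, vec3 shadowCoord, vec2 texelSize) {
      float sum = 0.0;
      for (int x = -1; x <= 1; ++x)
        for (int y = -1; y <= 1; ++y)
          sum += texture(shadowMap, shadowCoord + vec3(vec2(x, y) * texelSize, 0.0));
      return sum / 9.0;  // fraction of lit samples, in [0, 1]
    }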

Screen Space Ambient Occlusion

Screen Space Ambient Occlusion is a very popular method for approximating the decrease of light intensity in corners and crevices. Such an effect is normally produced by global illumination, which is, however, impractical for real-time rendering, hence the approximation. The easiest and most straightforward way of doing this is taking the scene depth buffer and calculating the depth difference of each texel against some average of its neighbourhood (a minimal sketch follows the requirements list below).

Requirements

  • Create sufficiently complex scene that would nicely show the effect (with some corners, etc.)
  • Implement the most basic SSAO effect using just the scene depth buffer, random sampling and blurring
  • Apply the effect to the rendered scene
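A minimal fragment-shader sketch of the depth-only variant; LinearizeDepth, the sample kernel, and the tuning constants are hypothetical, and the required blur runs as a separate pass:

    uniform sampler2D depthTex;
    uniform vec2 sampleKernel[16];  // random offsets, e.g., a Poisson disk
    uniform float radius;           // sampling radius in texture space
    uniform float bias;             // guards against self-occlusion

    float ComputeAO(vec2 uv) {
      float center = LinearizeDepth(texture(depthTex, uv).r);
      float occlusion = 0.0;
      for (int i = 0; i < 16; ++i) {
        float neighbour = LinearizeDepth(texture(depthTex, uv + sampleKernel[i] * radius).r);
        if (center - neighbour > bias)  // neighbour is closer -> occluder
          occlusion += 1.0;
      }
      return 1.0 - occlusion / 16.0;    // 1 = fully open, 0 = fully occluded
    }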

Volumetric water effect

Create a simple scene with a pool of water and implement a water surface shader with the properties summarized below (a partial sketch follows the requirements list). The normal map can either be calculated from the surface displacement or supplied as an animated texture to the fragment shader. For inspiration, look at how GTA V handled this - scroll down to reflections; there's an explanation of what is needed for a really nice water effect.

Requirements

  • Apply water waves (sin/cos will suffice), i.e., displacement
  • Use a normal map and Fresnel equations to reflect/refract light
  • Calculate the "fogged" refraction color based on the water pool depth

Parallax mapping

Parallax mapping is a fairly popular method for the visual enhancement of rendered surfaces. Create a scene that contains surfaces using this technique, e.g., cobblestones, pebbles, a brick wall, etc. (a basic sketch follows the requirements list).

Requirements

  • Create a fragment shader implementing parallax mapping
  • The scene should contain some moving light
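A GLSL sketch of the most basic single-offset variant (steep parallax mapping iterates the same idea along the view ray); heightMap, heightScale, and the tangent-space view direction are assumed to be set up elsewhere:

    // Shift the UV along the tangent-space view direction according to
    // the height sampled at the original UV.
    vec2 ParallaxUV(vec2 uv, vec3 viewDirTS) {
      float height = texture(heightMap, uv).r;
      vec2 offset = viewDirTS.xy / viewDirTS.z * (height * heightScale);
      return uv - offset;
    }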

Volumetric data visualization

Implement a simple volumetric data visualization using basic methods like maximum intensity projection, average intensity projection, summed intensity projection, and transfer functions (a ray-marching sketch follows the requirements list below). An example data set with a description of the data format is provided.

Requirements

  • Load or generate a volume and visualize it using some of the methods mentioned above
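A fragment-shader sketch of maximum intensity projection by ray-marching a 3D texture; the ray's entry and exit points are assumed to be computed from the volume's bounding box:

    uniform sampler3D volumeTex;

    // Keep the maximum sampled value along the ray through the volume.
    vec4 MaxIntensity(vec3 rayStart, vec3 rayEnd, int numSteps) {
      vec3 delta = (rayEnd - rayStart) / float(numSteps);
      vec3 pos = rayStart;
      float maxVal = 0.0;
      for (int i = 0; i < numSteps; ++i) {
        maxVal = max(maxVal, texture(volumeTex, pos).r);
        pos += delta;
      }
      return vec4(vec3(maxVal), 1.0);  // grayscale; or map via a transfer function
    }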

GPU detail tessellation

Employ tessellation shaders to implement displacement mapping, i.e., take a low-poly surface with details stored in a texture and use it to create a more geometrically complex rendered model. To get an idea, you can have a look at this article. You can either use some heightmap texture or generate one in the program using procedural noise. (A tessellation control shader sketch follows the requirements list.)

Requirements

  • Render a low-poly scene (terrain, cobblestones, etc.) with details in texture
  • Use tessellation shader to create more polygons close to the camera, i.e., displacement mapping
  • Make sure the generated mesh is water-tight, i.e., no T-cracks
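A sketch of a tessellation control shader with a distance-based level (the factor computation is illustrative). To satisfy the water-tightness requirement, each outer level should be derived only from the shared edge's vertices so that neighbouring patches agree on it; the single per-patch distance below is just the simplest starting point:

    #version 400 core
    layout (vertices = 3) out;
    uniform vec3 cameraPos;

    void main() {
      gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;
      if (gl_InvocationID == 0) {
        // Fewer subdivisions further from the camera, clamped to a sane range.
        float dist = distance(cameraPos, gl_in[0].gl_Position.xyz);
        float level = clamp(64.0 / dist, 1.0, 16.0);
        gl_TessLevelInner[0] = level;
        gl_TessLevelOuter[0] = level;
        gl_TessLevelOuter[1] = level;
        gl_TessLevelOuter[2] = level;
      }
    }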

Copyright (C) 2015-2021 Martin Kahoun