Posts

Ray Tracing in One Weekend - OpenGL Compute

Executable below! As an exercise in learning ray tracing I have implemented most of Ray Tracing in One Weekend, one of the most beloved introductory ray tracing books, in an OpenGL compute shader. The idea is to create an application that shoots multiple rays per pixel and checks for intersections with spheres; this way we procedurally generate a frame, avoiding traditional triangle rendering. Over time I have built up quite a nice C++ framework for creating new OpenGL applications, loading shaders, etc., so I could focus on the actual GLSL ray tracing program and only needed a bit of C++ for dynamically adding, removing and altering the rendered objects. Rasterization Traditional rendering is called rasterization: it is the process of taking the vertices (points in 3D space, called 'vectors' in math) that make up a triangle and converting them to monitor pixels. We take the triangle and project it flat onto the scree...

Mouse picking techniques

When creating any interactive 3D application, being able to fly around a 3D scene and interact with objects makes it feel like all that math created something tangible. One of the simplest forms of interaction you would want in a game, or an engine, is being able to select objects by simply clicking on them. Before implementing this I could select entities by clicking on their name in the scene hierarchy list, but that does not feel like you're interacting with the world, just the UI. Time to get to work. Ray Casting The first implementation we take a look at is ray casting. The idea is to cast a ray into our scene and check for intersections with any mesh in the current scene. A ray is just a mathematical structure that holds a 3-component vector for the ray's origin and a 3-component vector for its direction. struct Ray { Ray() = default; Ray(Viewport& viewport, glm...

Entity Component System

I have been putting this topic off for a while, but my codebase - in particular the way a scene is structured - has gotten out of control. I have a Scene class that has a container of SceneObject , with automatic for loop support. You loop over the entire scene and do something with each SceneObject . The Scene class has a point light, directional light and camera as member variables. SceneObject has been an accumulation of "what do I need entities in my scene to have?". It has member variables for a name, a bounding box, an albedo texture, a normal map texture and a Bullet triangle mesh. Oh, and it also inherits from Mesh and Transformable. ECS Time to refactor. The idea is to implement an Entity Component System (ECS), a software architectural pattern mostly used in games. The pattern is relatively straightforward, but implementations can differ a lot from each other. One problem that ECS tries to solve is the cost and trouble that come with traditional OOP. C++'s virtual ...

Render Pass Abstraction

I noticed that one of my cpp files was getting rather large and narrowed it down to the application's run function. I use this function as a giant main function, where a bunch of variable declarations and setup happens before entering the application's primary while(running) loop. This file contains most of the render pipeline setup, like textures, framebuffers and render buffers, plus all the render execution code inside the while(running) loop. I actually do not mind working with big files, but it was getting harder to conditionally execute certain passes like Bloom or Ambient Occlusion and/or extend the pipeline. I will preface this by saying that I have done little research into modern solutions to this particular problem, and that this is solely what I came up with and what works for me. Most software solutions come down to figuring out what works for the specific problem you are encountering, and common software patterns are often not directly applicable. I have done...

Screen Space Ambient Occlusion

When lighting a scene in a way that resembles real life, you want to know how much light can reach a point in space; this matters most for small corners and crevices that photons have a hard time getting into. Ambient Occlusion algorithms approximate how much light can reach such points, and in this particular implementation I will be focusing on a view space (Screen Space) algorithm. The algorithm runs in a single fragment shader, where it samples a random kernel around every fragment for depth values to calculate an occlusion factor; this result is later used in a lighting pass to lower the ambient component by that factor. I don't want to make this post too long, since the technique itself is quite old by now and there are tons of resources on implementing it already. Now for some things I personally encountered while implementing this technique. From what I read at first, the algorithm does not play nice with normals calculated from a normal map ...