Enigma is a ray-tracing renderer written in C++ by a Computer Science freshman.
- To Install
Clone the repo using
```sh
git clone --recursive https://github.com/extorc/Enigma
```
- Set the project up
```sh
cd build
cmake ../
```
Make sure CMake on your machine is configured to use the appropriate C++ compiler; a specific generator can be selected with the `cmake -G` option.
- Compile and execute
```sh
make
Enigma
```
This project was made in an effort to better understand C, C++, and computer graphics in general, and it might involve practices that are not perfect.
For performance optimization, Valgrind and Callgrind, along with KCachegrind, were used on Ubuntu under WSL to understand the call sequence and instruction usage of the application, and several improvements were made towards the end of the initial development.
The project uses OpenGL to display the framebuffer and the GLM library to carry out matrix and vector calculations.
The application is entered from `main.cpp`, where the window is created using a wrapper `Window` struct. The GLFW and GLAD libraries are initialized in the `createWindow` function implemented in `Graphics.h`.
A `Scene` object is created, to which `Sphere`, `Plane`, and `Material` objects are pushed.
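A rough sketch of what that setup might look like is shown below; the constructor arguments are hypothetical, and only the field names (`objects`, `materialList`, `matIndex`, `Albedo`, `roughness`, `light`) are taken from the snippets later in this document.

```cpp
// Hypothetical scene setup; the Sphere/Plane constructor signatures are
// assumptions, while the field names come from the code shown further below.
Scene scene;

Mat red;
red.Albedo = glm::vec3(0.8f, 0.2f, 0.2f);
red.roughness = 0.4f;
scene.materialList.push_back(red);

scene.objects.push_back(new Sphere(glm::vec3(0.0f, 0.0f, -3.0f), 1.0f, /*matIndex*/ 0)); // unit sphere
scene.objects.push_back(new Plane(glm::vec3(0.0f, -1.0f, 0.0f),                          // point on the plane
                                  glm::vec3(0.0f, 1.0f, 0.0f), /*matIndex*/ 0));         // plane normal

scene.light = glm::normalize(glm::vec3(-1.0f, -1.0f, -1.0f)); // single directional light
```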
A `glm::vec4` pixel vector is created to store the final framebuffer, initialized to all zeroes. Another vector of the same dimensions, called `accumulation`, stores the pre-normalized pixel data. For every sample, the pixel data is first calculated and added to `accumulation`, then normalized and written into the pixel vector.
A `Camera` class is defined where we first generate all the rays, starting from the camera origin and pointing to the individual pixels situated `FOV` distance away.
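A minimal sketch of how such per-pixel ray directions might be generated is shown below. The member names (`rayDirections`, `cameraPosition`) match those used in the render loop further down, but the construction itself (a pinhole camera looking down -Z with the image plane `fov` units away) is an assumption rather than the renderer's exact code.

```cpp
#include <vector>
#include <glm/glm.hpp>

// Sketch of per-pixel ray generation: the image plane is placed `fov` units
// in front of the camera, and one direction is stored per pixel.
struct Camera {
    glm::vec3 cameraPosition{0.0f, 0.0f, 0.0f};
    std::vector<glm::vec3> rayDirections;

    void generateRays(int image_width, int image_height, float fov) {
        rayDirections.resize(image_width * image_height);
        float aspect = (float)image_width / (float)image_height;
        for (int j = 0; j < image_height; j++) {
            for (int i = 0; i < image_width; i++) {
                float x = ((i + 0.5f) / image_width) * 2.0f - 1.0f;   // [-1, 1] left to right
                float y = 1.0f - ((j + 0.5f) / image_height) * 2.0f;  // [-1, 1] top to bottom
                // Direction from the camera origin towards the pixel on the image plane
                rayDirections[j * image_width + i] = glm::vec3(x * aspect, y, -fov);
            }
        }
    }
};
```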
Inside the render loop, processing is carried out per sample per pixel, so the processing runs a total of samples * pixels times. For each pixel sample, the corresponding generated `Ray` is accessed, the color for the pixel is calculated using a renderer function called `processPixel()`, and the result is added to the `accumulation` vector, which is then normalized as mentioned earlier.
```cpp
if(sampleCount < maxSampleCount){
  for(int j = 0; j < image_height; j++){
    for(int i = 0; i < image_width; i++){  //For every pixel on the screen
      Ray ray = {
        glm::normalize(camera.rayDirections[j * image_width + i]),
        camera.cameraPosition
      };  //Generate a ray
      auto color = renderer.processPixel(ray);  //And process the ray
      accumulation[j * camera.u + i] += color;  //Accumulate the data processed
      pixels[j * camera.u + i] = accumulation[j * camera.u + i]/(float)sampleCount;  //Normalize pixel data based on sample count
    }
  }
  sampleCount++;  //Update sample count
}
```
The `processPixel()` function is run for each sample of every pixel. The function traces the `Ray` generated in the camera class from its origin along its direction using the `trace()` function, which returns a `HitData` object.
Using this `HitData`, it is verified whether the ray intersects any scene objects; if it does, the function generates the appropriate color for the pixel by calculating material and lighting parameters.
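The exact layout of `HitData` is not shown here, but from the fields used in the snippets below (`position`, `normal`, `objectIndex`) it is presumably something along these lines; this is a sketch, and the real struct may differ.

```cpp
#include <glm/glm.hpp>

// Assumed shape of the hit record returned by trace(); the real struct may
// carry additional fields such as the hit distance.
struct HitData {
    glm::vec3 position;    // world-space point where the ray hit the surface
    glm::vec3 normal;      // surface normal at the hit point
    int objectIndex;       // index of the hit object in the scene's object list
};
```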
The lighting and materials in this renderer are handled in a two-step process.
- First, the normal vector at a point on the object is compared with the light direction, and a diffuse color based on the material's albedo is generated.
```cpp
Mat objectMat = scene->materialList[scene->objects[data.objectIndex]->matIndex];
float d = glm::max(glm::dot(data.normal, -scene->light), 0.0f);
glm::vec3 diffuseLighted = d * objectMat.Albedo;
```
- Then the specular fall-off of the material is calculated, taking the roughness of the object's material into account, by comparing the reflection of the incident ray with the light direction.
```cpp
float s = std::pow(
  glm::max(
    glm::min(
      glm::dot(
        glm::reflect(ray.rayDirection, data.normal), -scene->light),
      1.0f),
    0.0f),
  ROUGH_FUNC(objectMat.roughness));
glm::vec3 specularLighted = glm::vec3(1);
```
- These two stages are then combined into the `finalColor`. These `finalColor` calculations are done several times per pixel, as the ray has to be bounced off the surface of an object and intersect another object.
To implement this, the incident ray is reflected about the surface normal and run through the `trace()` function again in a for loop, and the final color data is accumulated and returned to the main render loop.
```cpp
ray.origin = data.position + data.normal * 0.001f;
ray.rayDirection = glm::reflect(
  ray.rayDirection,
  data.normal + objectMat.roughness * glm::vec3(RAND(1.0f)-0.5f, RAND(1.0f)-0.5f, RAND(1.0f)-0.5f)
);
```
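Putting these pieces together, the body of `processPixel()` presumably looks roughly like the skeleton below. This is a hedged sketch, not the renderer's actual implementation: the `trace()` signature, the miss convention, the bounce count, and the attenuation factor are assumptions; only the shading and reflection lines mirror the snippets above.

```cpp
// Schematic skeleton of the bounce loop: the ray is traced, shaded, reflected,
// and traced again for a fixed number of bounces.
glm::vec4 processPixelSketch(Ray ray, Scene* scene, int maxBounces) {
    glm::vec3 finalColor(0.0f);
    float contribution = 1.0f;                     // assumed energy carried by the current bounce
    for (int bounce = 0; bounce < maxBounces; bounce++) {
        HitData data = trace(ray, scene);          // nearest intersection, or a miss (assumed signature)
        if (data.objectIndex < 0)                  // assumed miss convention
            break;
        Mat objectMat = scene->materialList[scene->objects[data.objectIndex]->matIndex];
        // Diffuse term as shown in the lighting snippet above
        float d = glm::max(glm::dot(data.normal, -scene->light), 0.0f);
        finalColor += d * objectMat.Albedo * contribution;
        contribution *= 0.5f;                      // assumed attenuation per bounce
        // Offset the origin and reflect, as in the snippet above
        ray.origin = data.position + data.normal * 0.001f;
        ray.rayDirection = glm::reflect(ray.rayDirection, data.normal);
    }
    return glm::vec4(finalColor, 1.0f);
}
```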
The object system in the renderer is handled by an `Object` parent class, which is extended by (as of August 2024) two child classes, `Plane` and `Sphere`.
Each child class of `Object` must have a `hit()` and an `intersect()` function. The role of the intersect function is to check whether the ray intersects the given object; if it does, the function returns the distance, and the nearest intersected object is found. This object's respective `hit()` function is then called, which packages the `HitData` that the trace function returns so the pixel can be processed.
```cpp
float closestT = FLT_MAX;                       //Keeping track of closest distance of collision
int closestObjectIndex = -1;                    //Keeping track of the closest object at the closest distance
for(int i = 0; i < scene->objects.size(); i++){
  float t = scene->objects[i]->intersect(ray);  //Looping through all the objects' respective intersect functions
  if(t < closestT && t > 0){
    closestT = t;                               //Updating the closest distance and object if we have a nearer candidate
    closestObjectIndex = i;
  }
}
if(closestObjectIndex < 0)
  return miss();                                //If no close object was detected, run the global miss function
return scene
  ->objects[closestObjectIndex]
  ->hit(ray, closestT, closestObjectIndex);     //Otherwise return the data about this collision
```
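From this call pattern, the `Object` interface is presumably something along these lines; this is a sketch, and the exact signatures and any extra members are assumptions based on the calls above.

```cpp
// Assumed shape of the Object base class: each shape reports the distance at
// which a ray intersects it, and packages a HitData record when asked.
struct Object {
    int matIndex = 0;                                 // index into the scene's material list
    virtual float intersect(const Ray& ray) = 0;      // distance t along the ray, or a negative value on a miss
    virtual HitData hit(const Ray& ray, float t, int objectIndex) = 0;
    virtual ~Object() = default;
};
```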
The sphere intersection is carried out by solving a quadratic equation: a point on the ray, P = O + tD, is substituted into the equation of the sphere. The resulting equation is quadratic in t and can therefore be solved using the quadratic formula.
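Written out in the standard form (with C the sphere centre and r its radius, symbols introduced here rather than taken from the renderer's own notation):

$$
|O + tD - C|^2 = r^2
\quad\Longrightarrow\quad
(D \cdot D)\,t^2 + 2\,D \cdot (O - C)\,t + |O - C|^2 - r^2 = 0
$$

$$
t = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a},
\qquad a = D \cdot D,\quad b = 2\,D \cdot (O - C),\quad c = |O - C|^2 - r^2
$$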
t is found using this formula and returned by the `Sphere::intersect()` function.
The intersection with a plane is implemented by referring to this resource.
According to it, in order to find an intersection between a ray and a plane, we need to find a point which lies both on the ray and the plane, i.e. it satisfies both the equation of the ray, P = O + tD, and the equation of the plane, (P - S).N = 0. Substituting P from the ray equation into the plane equation and solving for t, we get the result below.
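In the usual form (with O and D the ray origin and direction, S a point on the plane, and N its normal, as defined above):

$$
t = \frac{(S - O)\cdot N}{D \cdot N}
$$

When the denominator D.N is zero, the ray is parallel to the plane and there is no intersection.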
t is found using this formula and returned by the `Plane::intersect()` function.
The triangle-ray intersection is implemented using the Möller–Trumbore intersection algorithm.
This involves first checking whether the ray is parallel to the plane of the triangle; if it is not, the ray must intersect the plane somewhere, and it is then a matter of checking whether that intersection takes place within the body of the triangle.
This is achieved using barycentric coordinates, a coordinate system independent of the position and orientation of the object that represents any point in terms of the edges/vertices of the triangle. A point in the plane can thus be represented as uV1 + vV2 + wV3 instead of in x, y, z space. When this point is equated with the equation of the ray and we solve for u, v, and t, we can check the bounds of these values to find out whether the point falls within the triangle.
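In equation form (using the V1, V2, V3 vertex notation from above, with w = 1 - u - v):

$$
O + tD = u\,V_1 + v\,V_2 + w\,V_3,
\qquad u + v + w = 1
$$

The intersection lies inside the triangle exactly when u, v, and w are all non-negative and t is positive, which is the bounds check described above.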
t is found using this solution and returned by the `Triangle::intersect()` function.