A path-tracing ray tracer written in Rust, based on the Ray Tracing in One Weekend book series. Runs as both a native CLI and a WebAssembly app (via Leptos).
In the real world, photons leave light sources, bounce around the scene, and some eventually reach your eye. Simulating this forward process is extremely wasteful -- most photons never reach the camera. Ray tracing reverses the process: we shoot rays from the camera into the scene and trace them backward toward the light.
A virtual camera sits at a point in space and looks through a rectangular viewport (the image). For each pixel, we construct a ray -- a half-line defined by an origin (the camera position) and a direction (toward that pixel on the viewport). The ray is then tested for intersection against every object in the scene.
For a sphere centered at C with radius r, a ray P(t) = A + td hits it when:
|P(t) - C|^2 = r^2
which expands into a quadratic in t. The discriminant tells us whether the ray misses (no real roots), grazes (one root), or pierces (two roots) the sphere.
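The quadratic test above can be sketched as follows. This is a minimal standalone version, assuming a toy `Vec3` type (the crate's actual vector type will differ); it uses the common half-b simplification and, for brevity, checks only the nearer root rather than falling back to the far one.

```rust
// Substituting P(t) = A + t*d into |P(t) - C|^2 = r^2 gives
// (d.d) t^2 + 2 d.(A-C) t + |A-C|^2 - r^2 = 0.

#[derive(Clone, Copy, Debug)]
struct Vec3 { x: f64, y: f64, z: f64 }

impl Vec3 {
    fn sub(self, o: Vec3) -> Vec3 { Vec3 { x: self.x - o.x, y: self.y - o.y, z: self.z - o.z } }
    fn dot(self, o: Vec3) -> f64 { self.x * o.x + self.y * o.y + self.z * o.z }
}

/// Returns the nearer positive t at which the ray hits the sphere, if any.
/// (A full implementation would also try the far root when this one is
/// behind the ray origin.)
fn hit_sphere(center: Vec3, radius: f64, origin: Vec3, dir: Vec3) -> Option<f64> {
    let oc = origin.sub(center);                 // A - C
    let a = dir.dot(dir);
    let half_b = oc.dot(dir);
    let c = oc.dot(oc) - radius * radius;
    let discriminant = half_b * half_b - a * c;
    if discriminant < 0.0 {
        return None;                             // no real roots: ray misses
    }
    let t = (-half_b - discriminant.sqrt()) / a; // nearer root
    if t > 0.0 { Some(t) } else { None }
}

fn main() {
    // Ray looking down -z from the origin at a unit sphere centered at z = -3.
    let hit = hit_sphere(
        Vec3 { x: 0.0, y: 0.0, z: -3.0 }, 1.0,
        Vec3 { x: 0.0, y: 0.0, z: 0.0 },
        Vec3 { x: 0.0, y: 0.0, z: -1.0 },
    );
    println!("{:?}", hit); // Some(2.0)
}
```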
When a ray hits a surface, the material decides what happens next:
- Diffuse (Lambertian): the ray scatters in a random direction on the hemisphere around the surface normal. This models matte surfaces like painted walls. The outgoing direction is `normal + random_unit_vector()`.
- Specular (Metal): the ray reflects about the surface normal: `reflected = d - 2(d . n)n`. A `fuzz` parameter adds randomness to the reflection for brushed-metal effects.
- Dielectric (Glass): the ray either reflects or refracts according to Snell's law: `n1 sin(theta1) = n2 sin(theta2)`. When the angle is too steep for refraction (total internal reflection), the ray reflects instead. Schlick's approximation blends between reflection and refraction at glancing angles, which is what gives glass its characteristic rim reflections.
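The two formulas above can be written out directly. This sketch uses plain `[f64; 3]` arrays rather than the crate's vector type; the function names mirror the book's conventions, not necessarily this codebase's API.

```rust
fn dot(a: [f64; 3], b: [f64; 3]) -> f64 {
    a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
}

/// Mirror reflection of incoming direction d about unit normal n:
/// reflected = d - 2 (d . n) n
fn reflect(d: [f64; 3], n: [f64; 3]) -> [f64; 3] {
    let k = 2.0 * dot(d, n);
    [d[0] - k * n[0], d[1] - k * n[1], d[2] - k * n[2]]
}

/// Schlick's approximation of Fresnel reflectance for relative
/// refractive index `ref_idx` at incidence angle with cosine `cosine`.
fn schlick(cosine: f64, ref_idx: f64) -> f64 {
    let r0 = ((1.0 - ref_idx) / (1.0 + ref_idx)).powi(2);
    r0 + (1.0 - r0) * (1.0 - cosine).powi(5)
}

fn main() {
    // A ray going diagonally down onto a floor with normal +y reflects upward.
    let r = reflect([1.0, -1.0, 0.0], [0.0, 1.0, 0.0]);
    println!("{:?}", r); // [1.0, 1.0, 0.0]
}
```

At head-on incidence (`cosine = 1.0`) the `(1 - cosine)^5` term vanishes and Schlick reduces to the base reflectance `r0` (about 0.04 for glass), while at glancing angles (`cosine` near 0) reflectance climbs toward 1, which is exactly the rim-reflection effect described above.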
The core of the renderer is the `ray_color` function, which works recursively:
- Fire the initial ray from the camera through a pixel.
- Test for intersection with the scene. If the ray misses everything, return the background color (black).
- Collect emitted light -- if the surface is emissive (a light source), its emission contributes to the color.
- Scatter -- ask the material what happens to the ray. The material returns:
  - An attenuation color (how much the surface absorbs -- e.g., a red wall attenuates green and blue).
  - A scattered ray -- the new ray direction after the bounce.
- Recurse -- call `ray_color` again with the scattered ray and `depth - 1`. The returned color is multiplied by the attenuation.
- Terminate -- if `depth` reaches 0, return black (no more light gathered). If the material absorbs the ray entirely (scatter returns `None`, as with pure light sources), return only the emitted color.
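The steps above can be condensed into a minimal sketch. The types here are toy stand-ins (the real `Hittable`/`Material` API is richer), and the "scene" is just a closure mapping the remaining depth to what the ray hits, so we can replay the Cornell-box trace from the text without any geometry.

```rust
#[derive(Clone, Copy, Debug)]
struct Color { r: f64, g: f64, b: f64 }

const BLACK: Color = Color { r: 0.0, g: 0.0, b: 0.0 };

/// What a material reports for one bounce (the scattered ray itself is
/// omitted in this sketch).
struct Scatter { attenuation: Color }

/// One surface interaction: emitted light plus an optional scatter.
struct Interaction { emitted: Color, scatter: Option<Scatter> }

fn ray_color(depth: u32, scene: &dyn Fn(u32) -> Option<Interaction>) -> Color {
    if depth == 0 {
        return BLACK;                    // bounce budget spent
    }
    match scene(depth) {
        None => BLACK,                   // missed everything: background
        Some(hit) => match hit.scatter {
            None => hit.emitted,         // pure emitter: stop here
            Some(s) => {
                // emitted + attenuation * light gathered by the next bounce
                let c = ray_color(depth - 1, scene);
                Color {
                    r: hit.emitted.r + s.attenuation.r * c.r,
                    g: hit.emitted.g + s.attenuation.g * c.g,
                    b: hit.emitted.b + s.attenuation.b * c.b,
                }
            }
        },
    }
}

/// Replays the trace from the text: red wall, then white floor, then light.
fn cornell(depth: u32) -> Option<Interaction> {
    match depth {
        50 => Some(Interaction { emitted: BLACK, scatter: Some(Scatter { attenuation: Color { r: 0.65, g: 0.05, b: 0.05 } }) }),
        49 => Some(Interaction { emitted: BLACK, scatter: Some(Scatter { attenuation: Color { r: 0.73, g: 0.73, b: 0.73 } }) }),
        _  => Some(Interaction { emitted: Color { r: 15.0, g: 15.0, b: 15.0 }, scatter: None }),
    }
}

fn main() {
    let c = ray_color(50, &cornell);
    println!("({:.2}, {:.2}, {:.2})", c.r, c.g, c.b); // (7.12, 0.55, 0.55)
}
```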
Each ray is thus followed through its entire bounce chain before the next one starts. The recursion unwinds like this:
```
ray_color(primary_ray, depth=50)
  -> hits red wall, attenuation=(0.65, 0.05, 0.05), scatters
  -> ray_color(scattered_ray_1, depth=49)
    -> hits white floor, attenuation=(0.73, 0.73, 0.73), scatters
    -> ray_color(scattered_ray_2, depth=48)
      -> hits light source, emits (15, 15, 15), no scatter
      -> returns (15, 15, 15)
    -> returns (0.73*15, 0.73*15, 0.73*15) = (10.95, 10.95, 10.95)
  -> returns (0.65*10.95, 0.05*10.95, 0.05*10.95) = (7.12, 0.55, 0.55)
```
The final pixel color is the product of all attenuations along the path, plus emitted light at each step. Most rays terminate well before `max_depth` because they either escape the scene (miss everything) or hit a light source. The depth limit is just a safety net to prevent infinite bounces in enclosed scenes like the Cornell box.
For each pixel, this entire bounce process is repeated `samples_per_pixel` times with slightly jittered ray directions. The results are averaged to produce the final color. This Monte Carlo averaging is what makes the image converge -- more samples = less noise.
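The averaging loop has this shape. `sample` stands in for the real per-sample work (build a jittered ray, call `ray_color`), which is stubbed here so the sketch stays self-contained.

```rust
/// Average `samples_per_pixel` Monte Carlo samples. In the real renderer
/// each sample would be ray_color(camera.jittered_ray(x, y), max_depth);
/// here the caller supplies any per-sample function.
fn average_samples(samples_per_pixel: u32, sample: impl Fn(u32) -> f64) -> f64 {
    let mut sum = 0.0;
    for s in 0..samples_per_pixel {
        sum += sample(s);
    }
    // Noise shrinks roughly as 1/sqrt(samples_per_pixel).
    sum / samples_per_pixel as f64
}

fn main() {
    // Four "samples" 0, 1, 2, 3 average to 1.5.
    println!("{}", average_samples(4, |s| s as f64)); // 1.5
}
```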
Scanlines are rendered in parallel using Rayon (`into_par_iter`), so all CPU cores contribute. Within each scanline, pixels are processed sequentially, and within each pixel, all samples are computed one at a time.
A single ray per pixel produces jagged edges. The multi-sample approach above naturally handles this -- each sample ray is offset randomly within the pixel's area, so edge pixels get a mix of "hit" and "miss" rays, producing smooth gradients instead of hard stair-steps.
A real lens has a finite aperture, so objects outside the focal plane appear blurry. We simulate this by giving the camera a non-zero `lens_radius`. Each ray's origin is randomly offset within a disk (the lens), while still aimed at the focal plane. Points on the focal plane stay sharp; points in front of or behind it get blurred.
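The disk sampling typically uses rejection sampling, as in the book. A sketch, using a tiny deterministic LCG in place of a real RNG crate so it has no dependencies (the constants and helper names are illustrative, not this codebase's):

```rust
/// Minimal linear congruential generator standing in for a real RNG.
struct Lcg(u64);

impl Lcg {
    /// Uniform-ish value in [-1, 1).
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64 * 2.0 - 1.0
    }
}

/// Rejection sampling: draw points in the square until one lands inside
/// the unit disk.
fn random_in_unit_disk(rng: &mut Lcg) -> (f64, f64) {
    loop {
        let (x, y) = (rng.next_f64(), rng.next_f64());
        if x * x + y * y < 1.0 {
            return (x, y);
        }
    }
}

fn main() {
    let mut rng = Lcg(42);
    let lens_radius = 0.5;
    let (dx, dy) = random_in_unit_disk(&mut rng);
    // The ray origin is then offset by (dx, dy) scaled by lens_radius along
    // the camera's horizontal/vertical basis vectors.
    println!("offset = ({:.3}, {:.3})", dx * lens_radius, dy * lens_radius);
}
```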
Raw linear color values look too dark on a monitor. We apply gamma correction (raising to the power 1/gamma, here gamma=2, so just a square root) before writing pixel values. This maps the linear light intensities to the nonlinear response of displays.
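With gamma = 2 the correction is just a square root per channel, applied right before quantizing to a 0..=255 byte. A sketch (the clamp ceiling of 0.999 follows the book's convention for keeping the byte in range):

```rust
/// Gamma correction with gamma = 2: a square root per channel.
fn linear_to_gamma(linear: f64) -> f64 {
    if linear > 0.0 { linear.sqrt() } else { 0.0 }
}

/// Quantize a gamma-corrected channel to a byte.
fn to_byte(channel: f64) -> u8 {
    (256.0 * linear_to_gamma(channel).clamp(0.0, 0.999)) as u8
}

fn main() {
    // Linear 0.25 -> sqrt -> 0.5 -> byte 128: mid-gray appears at a
    // quarter of linear intensity, matching display response.
    println!("{}", to_byte(0.25)); // 128
}
```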
This ray tracer has no separate "light" abstraction -- there is no global sun or ambient light built into the engine. Instead, all illumination comes from materials. There are two modes depending on the scene:
In many ray tracers, there are two ways light enters the scene: a global background (like a sky gradient) that rays pick up when they miss all geometry, and explicit light-emitting objects. This codebase currently uses only emissive objects -- when a ray escapes the scene and hits nothing, `ray_color` returns black.
Emissive materials as light sources -- The `DiffuseLight` material has an `emitted()` function that returns a color. It does not scatter rays (no reflection or refraction) -- it just emits. When a bouncing ray lands on a `DiffuseLight` surface, that emission is the light that propagates back through the entire bounce chain. A `DiffuseLight` can be attached to any geometry: a quad on the ceiling (as in the Cornell box), a glowing sphere, etc. There is no point-light or directional-light primitive -- all lights are area lights, which naturally produces soft shadows.
Why this matters for convergence -- Since light only enters the scene through emissive surfaces, a ray must randomly bounce into one of those surfaces to gather any light at all. In the Cornell box, the ceiling light is small relative to the room, so most random bounce paths miss it and contribute nothing (black). This is why indoor scenes need many samples per pixel to converge to a clean image. A scene with larger or more numerous emissive surfaces converges faster because rays are more likely to find the light.
Adding a sky/background -- To make outdoor scenes brighter, you would change the background return in ray_color from black to a sky color (e.g., a blue-white gradient based on ray direction). This effectively turns the entire sky dome into an infinitely large light source, dramatically improving convergence for open scenes.
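Such a gradient is usually a lerp on the ray direction's vertical component. A sketch, using the classic blue-white colors from the book (the exact colors and function name are illustrative):

```rust
/// Sky gradient from the y component of the (unit) ray direction:
/// white at the horizon blending to light blue overhead.
fn sky_color(dir_y_unit: f64) -> (f64, f64, f64) {
    // Map y in [-1, 1] to t in [0, 1].
    let t = 0.5 * (dir_y_unit + 1.0);
    // lerp(white, light_blue, t), per channel
    (
        (1.0 - t) * 1.0 + t * 0.5,
        (1.0 - t) * 1.0 + t * 0.7,
        (1.0 - t) * 1.0 + t * 1.0,
    )
}

fn main() {
    // Straight up: fully the sky-blue end of the gradient.
    println!("{:?}", sky_color(1.0)); // (0.5, 0.7, 1.0)
}
```

Returning this instead of black in the miss branch of `ray_color` makes every escaping ray contribute light, which is why open scenes converge so much faster.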
The `EmissiveDielectric` material combines both behaviors -- it refracts/reflects like glass and emits light, producing glowing glass objects that also illuminate their surroundings.
Testing every ray against every object is O(n) per ray. A Bounding Volume Hierarchy (BVH) wraps groups of objects in axis-aligned bounding boxes (AABBs) arranged in a binary tree. A ray first tests against the bounding box -- if it misses, the entire subtree is skipped. This reduces intersection tests to roughly O(log n) per ray.
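The box test that makes this cheap is usually the "slab" method: intersect the ray with each axis-aligned pair of planes and keep the overlapping t interval. A sketch (this is the standard technique, not necessarily this crate's exact code):

```rust
/// Slab-method AABB hit test: the ray hits the box iff the per-axis
/// t intervals all overlap. Division by a zero direction component
/// yields +/- infinity, which the comparisons handle correctly.
fn hit_aabb(min: [f64; 3], max: [f64; 3], origin: [f64; 3], dir: [f64; 3],
            mut t_min: f64, mut t_max: f64) -> bool {
    for axis in 0..3 {
        let inv = 1.0 / dir[axis];
        let mut t0 = (min[axis] - origin[axis]) * inv;
        let mut t1 = (max[axis] - origin[axis]) * inv;
        if inv < 0.0 {
            std::mem::swap(&mut t0, &mut t1); // ray travels in -axis direction
        }
        t_min = t_min.max(t0);
        t_max = t_max.min(t1);
        if t_max <= t_min {
            return false; // interval emptied out: skip this whole subtree
        }
    }
    true
}

fn main() {
    // Ray down -z from z = 5 toward the unit box around the origin.
    let hit = hit_aabb([-1.0; 3], [1.0; 3],
                       [0.0, 0.0, 5.0], [0.0, 0.0, -1.0],
                       0.001, f64::INFINITY);
    println!("{}", hit); // true
}
```

Because the test is a handful of multiplies and comparisons with no square roots, a BVH node is far cheaper to reject than a sphere or quad is to intersect.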
| File | Purpose |
|---|---|
| `camera.rs` | Ray generation, render loop (parallelized with Rayon), gamma correction |
| `ray.rs` | `Ray` struct (origin, direction, time) |
| `hittable.rs` | `Hittable` trait and `HitRecord` |
| `sphere.rs` | Sphere intersection (static and moving spheres for motion blur) |
| `quad.rs` | Quad (parallelogram) intersection |
| `hittable_list.rs` | Collection of hittable objects |
| `bvh.rs` | Bounding Volume Hierarchy for acceleration |
| `aabb.rs` | Axis-Aligned Bounding Box |
| `material.rs` | Lambertian, Metal, Dielectric, DiffuseLight, EmissiveDielectric |
| `texture.rs` | Solid color, checkerboard, Perlin noise, image textures |
| `perlin.rs` | Perlin noise generator |
| `scenes.rs` | Pre-built scene definitions |
| `app.rs` | Leptos web frontend |
| `bin/cli.rs` | CLI entry point |
```sh
cargo run --release --bin cli -- --scene random -S 100 --out image.ppm
cargo run --release --bin cli -- --list-scenes
```
Options:
- `--scene <name>` -- which scene to render
- `-S <n>` -- samples per pixel (default 1000)
- `--out <file>` -- output file (default `image.ppm`, use `-` for stdout)
Available scenes: `random`, `two-spheres`, `earth`, `two-perlin-spheres`, `quads`, `simple-light`, `emissive-glass`, `cornell-box`