Hw3 Photon Mapping
Teaser images from [1]
You will use photon mapping to render a Cornell box with effects like caustics, light bleeding, and participating media. This goal can be subdivided into three parts:
- ray tracing (implementing an efficient ray tracer with enhanced sampling)
- photon mapping (implementing efficient storage/lookup of the photon map)
- lighting (implementing new lighting effects using the photon map)
See the "References" section at the end of this article for good reading material on the topic of photon mapping.
The sub-tasks in these three parts fall into two categories: basic and advanced. Completing the tasks in the basic category will earn you a B-. The advanced tasks can be used to improve your grade; the difficulty of each is annotated in parentheses beside it. Renderings with a visual quality like the teaser image above will earn you an A+.
- load and display the Cornell Box (start with basic rendering, just to get something on the screen).
- The model can be downloaded here: http://graphics.cs.williams.edu/data/meshes.xml
- There are many Cornell Box models included, including one with water, which is useful to demo caustics.
- implement a naive ray-intersection acceleration method.
- suggestion: compute a bounding box for each object at load time, check for an intersection with bounding box before checking for an intersection with the object's triangles.
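A minimal sketch of the suggested bounding-box check, using the standard slab test (struct and function names are illustrative, and a 1e30 sentinel stands in for infinity):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>

struct Vec3 { float x, y, z; };

// Slab test: a ray hits an axis-aligned box iff the parametric intervals
// where the ray lies between each pair of slab planes all overlap.
// invDir holds the per-component reciprocals of the ray direction.
bool rayHitsBox(const Vec3& orig, const Vec3& invDir,
                const Vec3& boxMin, const Vec3& boxMax)
{
    float tmin = 0.0f, tmax = 1e30f;
    const float o[3]  = { orig.x, orig.y, orig.z };
    const float id[3] = { invDir.x, invDir.y, invDir.z };
    const float lo[3] = { boxMin.x, boxMin.y, boxMin.z };
    const float hi[3] = { boxMax.x, boxMax.y, boxMax.z };
    for (int a = 0; a < 3; ++a) {
        float t0 = (lo[a] - o[a]) * id[a];
        float t1 = (hi[a] - o[a]) * id[a];
        if (t0 > t1) std::swap(t0, t1);   // handle negative ray directions
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false;    // intervals no longer overlap
    }
    return true;
}
```

At load time, store one box per object; at trace time, call `rayHitsBox` first and only test the object's triangles when it returns true.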
- implement basic multi-threaded acceleration of ray tracing.
- suggestion: put `#pragma omp parallel for` on your outer ray tracing loop.
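A self-contained sketch of that suggestion (`tracePixel` is a stand-in for your per-pixel ray tracing; here it just writes a gradient):

```cpp
#include <vector>
#include <cassert>

// Stand-in for the real per-pixel ray trace.
static float tracePixel(int x, int y) { return float(x + y); }

std::vector<float> render(int width, int height)
{
    std::vector<float> image(width * height);
    // Each scanline is independent, so OpenMP can hand iterations of the
    // y loop to different threads. Compile with -fopenmp (GCC/Clang) or
    // /openmp (MSVC); without the flag the pragma is simply ignored.
    #pragma omp parallel for
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            image[y * width + x] = tracePixel(x, y);
    return image;
}
```

Note that each iteration writes to a distinct element of `image`, so no locking is needed.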
- implement improved supersampling by jittering your samples.
- suggestion: Give each sample a random offset from the center of the pixel in the range [-1,+1].
- implement a tent filter (aka. bilinear filter) on your supersamples to reduce noise.
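A sketch of both ideas above (helper names are illustrative): a jittered offset drawn around the pixel center, and a tent weight that falls off linearly from 1 at the pixel center to 0 one pixel away, so distant samples contribute less.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <random>

// Tent (bilinear) filter weight for a sample offset (dx, dy) from the
// pixel center. The product of two 1D tents gives the 2D weight.
float tentWeight(float dx, float dy)
{
    return std::max(0.0f, 1.0f - std::fabs(dx)) *
           std::max(0.0f, 1.0f - std::fabs(dy));
}

// Draw a jittered offset in [-1, +1] around the pixel center.
void jitter(std::mt19937& rng, float& dx, float& dy)
{
    std::uniform_real_distribution<float> u(-1.0f, 1.0f);
    dx = u(rng);
    dy = u(rng);
}
```

When accumulating supersamples, multiply each sample's color by its tent weight and divide by the sum of weights at the end.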
- (very easy) store the resulting colors of your ray tracing as floating point values and output an HDR image using `stbi_write_hdr` from `stb_image_write.h`.
- (easy) benchmark your ray tracing program. How much time did your optimizations save? How does the composition of your scene affect the performance gains of your optimizations?
- (easy) use multi-jittered sampling with N-rooks to further improve the look of your supersampling; it requires fewer samples to get good results, which improves performance.
- See this Pixar paper: http://graphics.pixar.com/library/MultiJitteredSampling/paper.pdf
- (easy) implement depth-of-field by simulating the intersection of your rays with a camera lens before intersecting them with the scene.
- (easy) implement world-space transformations of your objects, eg. moving spheres around with transformation matrix calculations. Tip: transforming an object is equivalent to doing the inverse transform on the ray. See chapter 10.8: https://www.cs.utah.edu/~shirley/books/fcg2/rt.pdf
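A sketch of the tip above, using a translation-only transform for brevity (a full implementation would apply the inverse of a 4x4 matrix to the ray origin and direction; names are illustrative). Intersecting a translated unit sphere is the same as intersecting the untranslated sphere with an inverse-translated ray:

```cpp
#include <cassert>

struct V3 { float x, y, z; };

// Ray vs. unit sphere at the origin: solve |o + t*d|^2 = 1 and check
// that the quadratic has a real root (discriminant >= 0).
bool hitUnitSphere(V3 o, V3 d)
{
    float a = d.x*d.x + d.y*d.y + d.z*d.z;
    float b = 2.0f * (o.x*d.x + o.y*d.y + o.z*d.z);
    float c = o.x*o.x + o.y*o.y + o.z*o.z - 1.0f;
    return b*b - 4.0f*a*c >= 0.0f;
}

// Sphere translated by t: instead of moving the sphere, apply the
// inverse translation to the ray origin (direction is unchanged).
bool hitTranslatedSphere(V3 o, V3 d, V3 t)
{
    V3 localOrigin = { o.x - t.x, o.y - t.y, o.z - t.z };
    return hitUnitSphere(localOrigin, d);
}
```

The same pattern extends to rotation and scaling: transform the ray into object space with the inverse matrix, intersect, then transform the hit point and normal back to world space.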
- (medium) improve your Cornell box rendering by placing the following objects inside the box:
- a cube
- a matte sphere
- a reflective sphere
- a transparent refractive sphere
- an area light source with a relatively large surface area (eg: on the ceiling)
- a soft shadow effect (from the area lights)
- note: these requirements may conflict with photon mapping rendering features.
- (medium) implement constructive solid geometry (CSG) modeling using the intersection of half-spaces. For example, a cube can be built as the intersection of 6 half-spaces.
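A minimal sketch of the half-space idea for point membership (names are illustrative): a point is inside the CSG solid iff it is inside every half-space. For ray tracing you would instead intersect the per-half-space ray intervals, but the membership test shows the construction.

```cpp
#include <cassert>
#include <vector>

// Half-space n.p <= d, with normal (nx, ny, nz) pointing outward.
struct HS { float nx, ny, nz, d; };

// A point is inside the intersection of half-spaces iff it satisfies
// every inequality.
bool insideAll(const std::vector<HS>& hs, float px, float py, float pz)
{
    for (const HS& h : hs)
        if (h.nx*px + h.ny*py + h.nz*pz > h.d) return false;
    return true;
}

// The axis-aligned cube [-1,1]^3 as the intersection of 6 half-spaces.
std::vector<HS> unitCube()
{
    return { { 1, 0, 0, 1}, {-1, 0, 0, 1},
             { 0, 1, 0, 1}, { 0,-1, 0, 1},
             { 0, 0, 1, 1}, { 0, 0,-1, 1} };
}
```

For the ray version: clip the ray's [tmin, tmax] interval against each half-space and report a hit if the final interval is non-empty.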
- (medium) improve the capabilities of your world-space transformation by building a scene graph. This should allow objects to be placed relative to a parent object.
- (medium) improve the performance of your ray tracer by recursively subdividing the lists of triangles of objects and intersecting a whole subdivision's bounding box before testing any of the triangles inside it. This is known as a bounding volume hierarchy (BVH).
- Alternatively, accelerate your ray-intersection tests by implementing a uniform space subdivision. Split your screen into a 3D grid, and store the list of objects in every grid location. After that, only test your ray against objects that overlap the grid locations that the ray intersects.
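A minimal BVH build sketch over precomputed triangle bounds (1D intervals stand in for 3D AABBs to keep it short; a real BVH would split on x/y/z and traverse with the slab test). Each node stores a box enclosing its children, so a ray that misses the box can skip the whole subtree. All names here are illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <memory>
#include <vector>

struct Box { float lo, hi; };   // 1D interval standing in for a 3D AABB

struct Node {
    Box box;
    std::vector<int> tris;               // leaf: triangle indices
    std::unique_ptr<Node> left, right;   // inner node: two children
};

// Recursively split the triangle list at the median centroid.
std::unique_ptr<Node> build(std::vector<int> tris,
                            const std::vector<Box>& boxes)
{
    auto node = std::make_unique<Node>();
    node->box = { 1e30f, -1e30f };
    for (int t : tris) {                 // enclose all triangles
        node->box.lo = std::min(node->box.lo, boxes[t].lo);
        node->box.hi = std::max(node->box.hi, boxes[t].hi);
    }
    if (tris.size() <= 2) { node->tris = std::move(tris); return node; }
    std::sort(tris.begin(), tris.end(), [&](int a, int b) {
        return boxes[a].lo + boxes[a].hi < boxes[b].lo + boxes[b].hi;
    });
    std::vector<int> l(tris.begin(), tris.begin() + tris.size() / 2);
    std::vector<int> r(tris.begin() + tris.size() / 2, tris.end());
    node->left  = build(std::move(l), boxes);
    node->right = build(std::move(r), boxes);
    return node;
}
```

Traversal mirrors the build: test the node's box first; descend into children only on a hit; test triangles only at leaves.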
- (medium) use SSE intrinsics from `<xmmintrin.h>` to speed up your intersection tests.
- SSE intrinsics are built-in functions that allow you to perform 4 floating point or integer operations in parallel. You can use this to, for example, compute the ray intersection of 4 triangles in parallel, which can significantly accelerate intersecting a ray against a list of triangles.
- SSE intrinsics also include hardware implementations of common operations in graphics, like reciprocal square root.
- Be sure to compile for x64 (rather than x86/Win32) to make your life easier (wrt. data alignment).
- See: http://stackoverflow.com/questions/1389712/getting-started-with-sse
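A small taste of the 4-wide idea (x86-only, names illustrative): each `__m128` register holds one value per triangle in a structure-of-arrays layout, and we compute 4 ray/plane hit distances t = (d - n.o) / (n.dir) in parallel. Extending this to the full triangle test follows the same pattern.

```cpp
#include <xmmintrin.h>
#include <cassert>

// Hit distances of one ray (origin o, direction dir) against the
// supporting planes of 4 triangles at once. Each register lane holds
// one triangle's plane normal component or plane offset d.
__m128 planeHitT(__m128 nx, __m128 ny, __m128 nz, __m128 d,
                 float ox, float oy, float oz,
                 float dx, float dy, float dz)
{
    __m128 o_dot_n =
        _mm_add_ps(_mm_add_ps(_mm_mul_ps(nx, _mm_set1_ps(ox)),
                              _mm_mul_ps(ny, _mm_set1_ps(oy))),
                   _mm_mul_ps(nz, _mm_set1_ps(oz)));
    __m128 dir_dot_n =
        _mm_add_ps(_mm_add_ps(_mm_mul_ps(nx, _mm_set1_ps(dx)),
                              _mm_mul_ps(ny, _mm_set1_ps(dy))),
                   _mm_mul_ps(nz, _mm_set1_ps(dz)));
    return _mm_div_ps(_mm_sub_ps(d, o_dot_n), dir_dot_n);
}
```

Note `_mm_set_ps` takes lanes in reverse order (highest lane first), which trips up most people the first time.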
- (hard) use a work-stealing scheduler to better leverage multi-core CPUs. This dynamic scheduling algorithm allows more CPU cores to focus on complicated regions of your rendering.
- Good talk on this topic: https://www.youtube.com/watch?v=iLHNF7SgVN4
- tl;dr: subdivide your rendering work recursively into tasks. After each subdivision, the current thread works on one half of the work and puts the other half of the work on a task queue that other threads can pick up work from.
- You may use a library like Intel TBB for this purpose (https://www.threadingbuildingblocks.org/).
- Bonus points for implementing your own. (Can be simple, like a single queue for all tasks.)
photon visualization from [2]
- generate emitted photons (up to a specified number) randomly from a square-shaped area light at the top of the Cornell box.
- use your ray tracer to simulate the path of the emitted photons as they bounce around the scene. At each intersection, use a random number to decide whether the photon is reflected, transmitted, or absorbed, based on material properties. Use the "Russian roulette" approach.
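A sketch of the Russian-roulette decision at a photon hit (names and probabilities are illustrative; in practice they come from the material's diffuse and specular reflectances):

```cpp
#include <cassert>

enum class Outcome { Diffuse, Specular, Absorb };

// xi is a uniform random number in [0, 1). Partition [0, 1) into three
// bins; whichever bin xi falls in decides the photon's fate. Because
// survivors keep their full power (scaled by the material color), the
// estimate stays unbiased without tracing very long paths.
Outcome roulette(float xi, float pDiffuse, float pSpecular)
{
    if (xi < pDiffuse)             return Outcome::Diffuse;
    if (xi < pDiffuse + pSpecular) return Outcome::Specular;
    return Outcome::Absorb;
}
```

On `Diffuse`, store the photon and bounce it in a random hemisphere direction; on `Specular`, reflect or refract it without storing; on `Absorb`, terminate the path.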
- add a photon to your photon map data structure for every bounce of a photon on a non-specular material.
- implement a debug visualization of the photons in your photon map, similar to the one in the image above. This can be done by computing the intersection of a line between the photon and the eye with the image plane.
- (easy or medium) store the photons in a balanced kd-tree for faster retrieval during rendering. Bonus points for implementing your own balanced kd-tree. Note the tree only needs to be balanced at the end of the photon mapping pass, before moving on to rendering.
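A minimal balanced-build sketch (names are illustrative): recursively place the median photon along the current axis at each node, cycling axes by depth. Median splitting keeps the tree balanced, so radius and nearest-photon queries stay logarithmic once the photon pass is complete.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Photon { float p[3]; };   // position only, for brevity

struct KdNode {
    Photon photon;
    int left;    // index into the node pool, or -1
    int right;
};

// Build a balanced kd-tree over pts[lo, hi); returns the root's index
// in `pool`. std::nth_element puts the median in place in O(n).
int buildKd(std::vector<Photon>& pts, int lo, int hi, int depth,
            std::vector<KdNode>& pool)
{
    if (lo >= hi) return -1;
    int axis = depth % 3;        // cycle x, y, z by depth
    int mid = (lo + hi) / 2;
    std::nth_element(pts.begin() + lo, pts.begin() + mid, pts.begin() + hi,
                     [axis](const Photon& a, const Photon& b) {
                         return a.p[axis] < b.p[axis];
                     });
    int idx = (int)pool.size();
    pool.push_back(KdNode{ pts[mid], -1, -1 });
    int l = buildKd(pts, lo, mid, depth + 1, pool);
    int r = buildKd(pts, mid + 1, hi, depth + 1, pool);
    pool[idx].left = l;
    pool[idx].right = r;
    return idx;
}
```

Jensen's notes [2] describe a tighter array-heap layout for the balanced tree; the index-pool version above is the simpler starting point.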
- (easy) tint the color of reflected photons by taking into account how the material absorbs or reflects the R G B spectra differently. For example, white light reflected on a red surface will reflect a red photon. This is used to implement diffuse inter-reflections (color bleeding).
- (medium) improve the smoothness of the visualization of the photon map by blurring the points on the debug visualization.
- (medium) allow multiple lights of different types in the scene, such as a point light or a directional light.
- (medium) build a projection map to improve the chance that your photons actually hit something rather than flying off into the void.
- (medium) implement a separate caustic photon map, which stores photons that have undergone at least one specular reflection before hitting a diffuse surface.
- These photons should be generated in a separate pass, and the sampling should be biased towards shooting rays towards objects that produce caustics like glass and water (to get more accurate caustic light patterns).
- (hard) implement a separate volume photon map, which takes into consideration the scattering and absorption of the medium through which the photons are traveling, which allows you to model the path of light as it travels through fog or smoke.
- implement direct illumination by tracing a ray to the lights in the scene and summing their illumination if the ray to the light is not in shadow.
- implement specular reflections (ie. mirrors) by shooting a reflected ray.
- implement indirect illumination by sampling the associated photon map in a radius around the point to be shaded.
- filter the photon map samples using a cone filter.
- (easy) use a Gaussian filter instead of a cone filter for the photon map samples.
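A sketch of both filter weights: each photon found within search radius r, at distance d from the shaded point, is weighted so nearby photons count more. The Gaussian constants (alpha = 0.918, beta = 1.953) are the commonly used defaults from Jensen's photon mapping notes [2]; treat them as assumptions to tune.

```cpp
#include <cassert>
#include <cmath>

// Cone filter: weight falls off linearly with distance; k >= 1 controls
// the cone's steepness (the normalization term is 1 - 2/(3k)).
float coneWeight(float d, float r, float k)
{
    return 1.0f - d / (k * r);
}

// Gaussian filter with Jensen's commonly cited constants.
float gaussianWeight(float d, float r)
{
    const float alpha = 0.918f, beta = 1.953f;
    float t = 1.0f - std::exp(-beta * d * d / (2.0f * r * r));
    return alpha * (1.0f - t / (1.0f - std::exp(-beta)));
}
```

In the radiance estimate, multiply each photon's power by its weight before summing, then divide by the filter's normalization and the disc area.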
- (medium) implement soft shadows by generating random points on the surface of lights in the scene and averaging the accumulated illumination on the point where light is measured. Sample more powerful lights first to get a better result with fewer samples.
- (medium) allow the view to be changed without recomputing the photon map (only the rendering pass is recomputed).
- consider caching photon map results to the filesystem. This will make it easier to iterate on renderer design without having to recompute the photon map every time. Just don't forget to invalidate your cache if you do need to recompute the photon map...
- (medium) use a final gathering step to improve the look of indirect illumination. To do this, compute indirect illumination by shooting rays from the first point of intersection and computing the photon map's illumination at those locations. This will give much smoother indirect illumination.
- (medium) you might notice indirect illumination varies smoothly along surfaces. Leverage this spatial coherency by implementing irradiance caching.
- (hard) implement smoke/fog by using a volume photon map (see page 50 of [2]).
- (hard) render the Cornell box with water in it, with proper caustic effects. Bonus points for rendering realistic-looking water (as a counter-example, the water on page 46 of [2] does not look realistic.)
[1] - http://web.cs.wpi.edu/~emmanuel/courses/cs563/write_ups/zackw/photon_mapping/PhotonMapping.html (A good initial overview)
[2] - https://graphics.stanford.edu/courses/cs348b-01/course8.pdf (A thorough introductory course on the topic. Looks intimidating as a 78 page pdf, but it's really only ~30 pages of explanation. The rest is images and references.)
©Department of Computer Science, University of Victoria, 2015.