Voxelize the Hair into a Density Volume #18
Labels
- Priority: High
- Project: Renderer (issues relating to the core hair renderer itself)
- Type: Feature (something new and shiny is being requested)
We'll need this if we want to make the ADSM more flexible, since it currently assumes the hair style's density is constant throughout the volume, which is not strictly true. While the quality is already acceptable for the hair styles we have, the technique will completely break down for non-uniform density styles. If we want to make the technique as flexible as DOM, we'll need to solve this properly. Tomb Raider worked around it by splitting the style into several parts, with a different constant density for each part. While this works, it costs artist time and manual labor; instead, we would like to find the hair density automatically and store it in a volume. I believe we can make ADSM as general and good-looking as DOM, but without having to pay the DOM price (rendering multiple shadow maps is expensive, and DOM needs a lot of them).
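As a starting point, here is a minimal CPU-side sketch of what the voxelization could look like: splat the strand vertices into a uniform grid and normalize the counts into relative densities. All names (`Vec3`, `Volume`, `voxelize`, `to_voxel`) and the vertex-splatting scheme are illustrative placeholders, not a final design.

```cpp
#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };

struct Volume {
    int resolution;              // voxels per axis, e.g. 256
    Vec3 aabb_min, aabb_max;     // bounds of the hair style
    std::vector<float> density;  // resolution^3 values in [0, 1]
};

Volume voxelize(const std::vector<Vec3>& strand_vertices,
                Vec3 aabb_min, Vec3 aabb_max, int resolution) {
    Volume volume { resolution, aabb_min, aabb_max,
                    std::vector<float>(static_cast<std::size_t>(resolution) *
                                       resolution * resolution, 0.0f) };

    // Map a world-space coordinate to a (clamped) voxel index along one axis.
    auto to_voxel = [&](float v, float lo, float hi) {
        float t = (v - lo) / (hi - lo);
        return std::clamp(static_cast<int>(t * (resolution - 1)),
                          0, resolution - 1);
    };

    // Count how many strand vertices land in each voxel.
    for (const Vec3& v : strand_vertices) {
        int x = to_voxel(v.x, aabb_min.x, aabb_max.x);
        int y = to_voxel(v.y, aabb_min.y, aabb_max.y);
        int z = to_voxel(v.z, aabb_min.z, aabb_max.z);
        volume.density[(static_cast<std::size_t>(z) * resolution + y)
                       * resolution + x] += 1.0f;
    }

    // Normalize by the densest voxel so the volume stores relative density.
    float max_count = 0.0f;
    for (float d : volume.density) max_count = std::max(max_count, d);
    if (max_count > 0.0f)
        for (float& d : volume.density) d /= max_count;

    return volume;
}
```

On the GPU, the same idea would presumably map to a compute shader doing atomic adds into a 3D image, which is the version that would need to hit the ~1ms budget discussed below.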
For comparison, the paper by Sintorn and Assarsson (linked below) reports around 13 FPS for a comparable hair style at a vastly lower resolution (800x600!) on comparable hardware (they used a GTX 280 back in 2009, roughly on par with my MX150-2 laptop GPU), using Deep Opacity Maps. If we can generate the volume on the GPU in e.g. 0.5-1ms, and still have ADSM be as generic as DOM, it will remain vastly faster than any DOM-based technique while having similar quality. Our technique currently runs at 60 FPS at 1080p on my low-end laptop, with 2-3ms to spare even in the worst case (i.e. every fragment is processing hair and reading from the shadow map); for a medium-range viewing perspective, we have around 10ms to spare in total before we dip below 60 FPS. So yes, quite a bit faster than DOM...
This hair density volume can probably also be used for other useful things, like computing an approximated SSS à la Frostbite 2 (the thickness-map technique from GDC 2011), which AFAIK hasn't been attempted before in the area of hair rendering. The dual-scattering paper is the only "real-time" approximation of scattering we have around, so it would be cool if we could extend the Frostbite method to hair rendering, and have something that may be novel in the area of approximated subsurface scattering of hair (that is good enough for games). We could maybe also use the volume for some smart transparency handling. I've found a paper that seems to be doing something similar with an "Occupancy Map": "Hair Self-Shadowing and Transparency Depth Ordering using Occupancy Maps" by Sintorn and Assarsson.
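A rough sketch of how the volume could drive a Frostbite-style translucency approximation, reusing the `Vec3`/`Volume` types from the sketch above: march from the shaded point toward the light, accumulate density as an approximate "thickness", and convert it to transmittance with Beer-Lambert. `sample_density`, `transmittance_toward_light`, and the `steps`/`step_size`/`absorption` values are all hypothetical placeholders, not tuned values or the actual Frostbite formulation.

```cpp
#include <algorithm>
#include <cmath>

// Nearest-voxel lookup into the density volume; a real implementation would
// use hardware trilinear filtering on a 3D texture instead.
float sample_density(const Volume& volume, Vec3 p) {
    auto to_voxel = [&](float v, float lo, float hi) {
        float t = (v - lo) / (hi - lo);
        return std::clamp(static_cast<int>(t * (volume.resolution - 1)),
                          0, volume.resolution - 1);
    };
    int x = to_voxel(p.x, volume.aabb_min.x, volume.aabb_max.x);
    int y = to_voxel(p.y, volume.aabb_min.y, volume.aabb_max.y);
    int z = to_voxel(p.z, volume.aabb_min.z, volume.aabb_max.z);
    return volume.density[(static_cast<std::size_t>(z) * volume.resolution + y)
                          * volume.resolution + x];
}

// Accumulate density toward the light as an approximate "thickness", then
// apply Beer-Lambert: the more hair density accumulated, the less light
// transmitted. Step count, step size, and absorption are illustrative only.
float transmittance_toward_light(const Volume& volume,
                                 Vec3 position, Vec3 light_direction,
                                 int steps = 16, float step_size = 0.05f,
                                 float absorption = 4.0f) {
    float thickness = 0.0f;
    for (int i = 0; i < steps; ++i) {
        position = { position.x + light_direction.x * step_size,
                     position.y + light_direction.y * step_size,
                     position.z + light_direction.z * step_size };
        thickness += sample_density(volume, position) * step_size;
    }
    return std::exp(-absorption * thickness);
}
```

The same march could plausibly double as the transparency-ordering heuristic, but that part would need the Occupancy Maps paper as a reference first.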