
Voxelize the Hair into a Density Volume #18

Closed
CaffeineViking opened this issue Dec 11, 2018 · 3 comments
Assignees
Labels
Priority: High
Project: Renderer (issues relating to the core hair renderer itself)
Type: Feature (something new and shiny is being requested)

Comments

CaffeineViking commented Dec 11, 2018

We'll need it if we want to make the ADSM more flexible, since it (right now) assumes the hair style's density is constant throughout the volume, which is not completely true. While the quality is already acceptable for the hair styles we have, the technique will completely break down for non-uniform-density styles. If we want to make the technique as flexible as DOM, we'll need to solve this properly. Tomb Raider solved this by splitting the style into several parts, with a different constant density for each part. While this works, it requires artist time and manual labor; instead, we would like to find the hair density automatically and store it in a volume. I believe we can make ADSM as general and good-looking as DOM, but without having to pay the DOM price (rendering multiple shadow maps is bad, and you need a lot of them in DOM).

For comparison, the paper by Sintorn and Assarsson (linked below) gets around 13 FPS for a comparable hair style at a vastly lower resolution (800x600!) on comparable hardware (to my MX150-2 laptop GPU; they used a GTX 280 back in 2009), using these Deep Opacity Maps. If we can generate the volumes on the GPU in e.g. 1 ms or 0.5 ms, and still have ADSM be as generic as DOM, it will still be vastly faster than any DOM-based technique with similar quality. Our technique currently runs at 60 FPS at 1080p on my shitty laptop, and still has 2-3 ms to spare (that's in the worst case, i.e. every fragment is processing some hair and reading from the shadow map; for a medium-range viewing perspective, we have around 10 ms to spare in total) before we dip below 60 FPS. So yes, quite a bit faster than DOM...

This hair density volume can probably also be used for other useful things, like computing an approximated SSS à la Frostbite 2 (the thickness-map one from GDC 2011), which AFAIK hasn't been attempted before in the area of hair rendering. The dual-scattering paper is the only "real-time" approximation of scattering we have around, so it would be cool if we could extend the Frostbite method to hair rendering, and have something that may be novel in the area of approximated subsurface scattering of hair (that is good enough for games). We could maybe also do some smart transparency handling by using this. I've found a paper that seems to be doing something similar: "Hair Self-Shadowing and Transparency Depth Ordering using Occupancy Maps" by Sintorn and Assarsson, which uses an "Occupancy Map".
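The Frostbite-style thickness idea could, for instance, amount to marching through the density volume toward the light and integrating density along the ray. A minimal single-ray sketch in Python/NumPy, assuming nothing about the actual renderer (the function name and interface are hypothetical, and nearest-neighbor sampling is used for brevity where a real implementation would use trilinear filtering on the GPU):

```python
import numpy as np

def approximate_thickness(density, aabb_min, aabb_max, point, direction, num_steps=64):
    """Integrate voxel density along a ray from `point` in `direction`
    (e.g. toward the light), as a stand-in for a thickness-map pass."""
    direction = direction / np.linalg.norm(direction)
    extent = aabb_max - aabb_min
    dims = np.array(density.shape)
    step = np.linalg.norm(extent) / num_steps  # step length spans the AABB diagonal
    thickness = 0.0
    p = point.astype(np.float64).copy()
    for _ in range(num_steps):
        p = p + direction * step
        if np.any(p < aabb_min) or np.any(p >= aabb_max):
            break  # the ray has left the volume
        # Nearest-neighbor lookup into the density volume.
        voxel = np.clip(((p - aabb_min) / extent * dims).astype(int), 0, dims - 1)
        thickness += density[tuple(voxel)] * step
    return thickness
```

With a constant density of 1, the result is roughly the distance the ray travels inside the AABB, which is the sanity check one would expect from a thickness estimate.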

CaffeineViking added the Type: Feature, Project: Renderer, and Priority: High labels on Dec 11, 2018
CaffeineViking self-assigned this on Dec 11, 2018
CaffeineViking commented

I've implemented a simple voxelization scheme that counts how many vertices fall into each voxel. It runs on the CPU right now (single-threaded) and takes around 15 ms to generate 256³ data out of 1.8M vertices. Here is some pseudo-code for how it works, and the results I get when running it through ParaView as well:

volume_size   = hair_geometry.aabb.max - hair_geometry.aabb.min;
voxel_size    = volume_size / [ width, height, depth ];
volume_origin = hair_geometry.aabb.min;
for every hair_vertex in hair_geometry:
    voxel = floor((hair_vertex - volume_origin) / voxel_size);
    // clamp so vertices on the AABB's max face stay in-grid.
    voxel = clamp(voxel, [0, 0, 0], [width-1, height-1, depth-1]);
    i = voxel.x + voxel.y*width + voxel.z*width*height;
    if voxel_count[i] != 255: // saturate so counts fit in 8 bits.
        voxel_count[i] += 1;
write(voxel_count, "out.raw");
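As a sanity check, the pseudo-code above maps fairly directly onto a vectorized NumPy sketch (the function name and interface are mine, not from the repository; the 255 cap mirrors the 8-bit saturating counter above):

```python
import numpy as np

def voxelize_vertex_counts(vertices, dims):
    """Count hair vertices per voxel over the geometry's AABB,
    saturating at 255 so the volume fits in 8-bit (uint8) storage."""
    aabb_min = vertices.min(axis=0)
    aabb_max = vertices.max(axis=0)
    width, height, depth = dims
    voxel_size = (aabb_max - aabb_min) / np.array(dims, dtype=np.float64)
    # Map each vertex to integer voxel coordinates, clamped to the grid
    # so vertices on the AABB's max face don't fall outside it.
    coords = np.floor((vertices - aabb_min) / voxel_size).astype(np.int64)
    coords = np.clip(coords, 0, np.array(dims) - 1)
    flat = coords[:, 0] + coords[:, 1] * width + coords[:, 2] * width * height
    counts = np.bincount(flat, minlength=width * height * depth)
    return np.minimum(counts, 255).astype(np.uint8)
```

The resulting flat `uint8` array has the same x-major layout as the `i = x + y*width + z*width*height` index in the pseudo-code, so it can be dumped to a raw file for ParaView in the same way.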

[Screenshot: density volume visualized in ParaView, 2018-12-19]

Anteru commented Dec 19, 2018

Given this is non-AVX and single-threaded, I'm not even sure it's worth moving the generation to the GPU for now :) It's clearly going to be fast enough. The density map looks quite all right, but of course we need to try using it to find out whether the algorithm needs some tweaks (like approximating the length of the hair through the voxel). In any case, it looks rather promising to me.

CaffeineViking commented Dec 21, 2018

Yeah, I think we'll go with this for now. I've also implemented another approach that gives a higher-quality approximation, but at substantially higher cost. If the results we get when using the density are good enough (e.g. in the shadowing or OIT cases) even with the fast voxelization approach (we can compare it against the high-quality one), then we should go with the fast one. Or use some sort of hybrid scheme: the fast one for medium-to-far hair styles, and the high-quality but slow one for something like cutscenes.

Anyway, I think the issue is solved now. I'll create a new issue if we need to modify the voxelization scheme.
