
Estimate AO with the GPU Raymarcher #27

Closed
CaffeineViking opened this issue Jan 10, 2019 · 1 comment

CaffeineViking commented Jan 10, 2019

Now that we have a reference AO solution (#25) we should try to approximate it using our raymarcher. In a nutshell, we can do this by shooting rays from the camera toward the strand that is to be shaded, and then "counting" the number of strands in the way (which is the local density of hair calculated by the voxelization). This gives us the expected AO as seen from the camera's point of view. This is of course not the same thing that is being calculated by the raytracer, so an alternative solution would be to sample the neighborhood around the strand we want to find the AO of (or even shoot rays from the strand in random directions and accumulate the number of strands encountered). I think it's worth a shot to try both approaches and see which one matches the ground truth best (the second one is more likely to match up with the raytracer).
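To make the first idea concrete, here is a minimal GLSL sketch of the camera-ray variant. It assumes the voxelized strand densities are bound as a 3D texture; all of the names (`density_volume`, `volume_origin`, `volume_size`, `raymarched_occlusion`) are hypothetical and not the renderer's actual interface:

```glsl
uniform sampler3D density_volume; // strand counts from the voxelization pass (hypothetical binding)
uniform vec3 volume_origin;       // world-space minimum corner of the volume AABB
uniform vec3 volume_size;         // world-space extent of the volume AABB

// Maps a world-space position into normalized [0,1]^3 volume coordinates.
vec3 world_to_volume(vec3 position) {
    return (position - volume_origin) / volume_size;
}

// Estimates the occlusion of 'fragment' as seen from 'eye' by accumulating
// the hair density at every sample along the segment between them.
float raymarched_occlusion(vec3 eye, vec3 fragment, uint steps, float strength) {
    vec3  step_vec = (fragment - eye) / float(steps);
    vec3  position = eye;
    float density  = 0.0;
    for (uint i = 0u; i < steps; ++i) {
        position += step_vec;
        density  += texture(density_volume, world_to_volume(position)).r;
    }
    return exp(-strength * density); // more strands in the way => darker AO term
}
```

Note that this only measures how much hair sits between the camera and the shaded point, so it is inherently view-dependent, which is exactly why it may not line up with the raytraced ground truth.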

@CaffeineViking CaffeineViking added Type: Feature Something new and shiny is being requested. Project: Renderer Issues relating to the core hair renderer itself. Priority: High labels Jan 10, 2019
@CaffeineViking CaffeineViking self-assigned this Jan 10, 2019
CaffeineViking commented Jan 14, 2019

I tried two different approaches:

  • "Local Ambient Occlusion in Direct Volume Rendering" by Hernell et al. This method essentially gathers the densities by summing up all the voxels within a sphere of radius r. The problem with this approach is that you need a very large r to get anything meaningful for hair styles. i.e. this is very expensive. This technique works fine for "normal" geometry, but for hair where occlusion propagates "further up" in the hair volume, this doesn't quite work, even for large values of r. I compared the raytraced and rasterized variants (using this method) and the result didn't match up.

  • "A Voxel-Based Rendering Pipeline for Large 3D Line Sets" Kanzler et al. in their "Ambient Occlusion" section, they just filter the line densities (like we already do). They also have some sort of pre-filtering pass in their more advanced method which we could look into if we think the current AO results aren't good enough. However, when comparing the raytraced and rasterized results of Kajiya-Kay + ADSM + 3x3x3 Filtered Density AO, I think the results are "close enough". I'll upload some images to Captain's Log so you can see this as well, but I personally think we have "bigger fish to fry", like fighting aliasing.
