
Support for NeRFs and Gaussian splatting #4529

Open
roym899 opened this issue Dec 14, 2023 · 10 comments
Labels
enhancement New feature or request 🍏 primitives Relating to Rerun primitives 🔺 re_renderer affects re_renderer itself 📺 re_viewer affects re_viewer itself

Comments

@roym899
Collaborator

roym899 commented Dec 14, 2023

Neural radiance fields (NeRFs) and Gaussian splatting have recently received a lot of attention. Both are 3D representations that can be optimized from posed image collections via differentiable rendering, yielding near-photorealistic results.

A large number of follow-up works adopt the main idea of the original papers, but modify the network architecture, sampling procedure, or exact rendering equation, or relax the assumption of posed images.

This variety makes it more difficult to support these directions out-of-the-box.

Describe a solution you'd like
In my opinion, the best (and maybe only) way to add support is via plugins that allow custom datatypes (e.g., logging the 3D Gaussians, network weights, or whatever the underlying representation is) and custom rendering (e.g., given the camera parameters for the 3D view and the logged data, let the plugin render an RGB image + Z buffer, which are then combined with supported primitives on the Rerun side).
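The RGB + Z-buffer composition described above could be sketched roughly as follows. This is a hypothetical sketch, not Rerun's actual API: the viewer and a plugin each produce an RGB image with per-pixel depth, and the composite keeps whichever pixel is closer to the camera.

```rust
// Hypothetical per-pixel composition of a plugin-rendered RGB + depth image
// with the viewer's own primitive rendering. The pixel with the smaller
// depth value (closer to the camera) wins.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Pixel {
    rgb: [u8; 3],
    depth: f32, // view-space depth; f32::INFINITY = empty background
}

fn composite(viewer: &[Pixel], plugin: &[Pixel]) -> Vec<Pixel> {
    viewer
        .iter()
        .zip(plugin)
        .map(|(v, p)| if p.depth < v.depth { *p } else { *v })
        .collect()
}

fn main() {
    let viewer = [
        Pixel { rgb: [255, 0, 0], depth: 1.0 },          // an opaque primitive
        Pixel { rgb: [0, 0, 0], depth: f32::INFINITY },  // empty background
    ];
    let plugin = [
        Pixel { rgb: [0, 0, 255], depth: 2.0 }, // behind the primitive
        Pixel { rgb: [0, 0, 255], depth: 0.5 }, // in front of empty background
    ];
    let out = composite(&viewer, &plugin);
    assert_eq!(out[0].rgb, [255, 0, 0]); // primitive occludes the plugin pixel
    assert_eq!(out[1].rgb, [0, 0, 255]); // plugin pixel fills the background
}
```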

NeRFs typically cannot be rendered in real time, so an adaptive rendering scheme should be implementable (i.e., resolution could easily be increased when the camera does not move; I believe nerfstudio does this).
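Such an adaptive scheme could look something like this; all names and thresholds here are made up for illustration, not taken from nerfstudio:

```rust
// Sketch of adaptive resolution: render at a reduced scale while the camera
// moves, and step back toward full resolution once it has been stationary
// for a few frames. Thresholds are arbitrary example values.
struct AdaptiveRes {
    scale: f32,        // current resolution scale in (0, 1]
    still_frames: u32, // consecutive frames without camera motion
}

impl AdaptiveRes {
    fn new() -> Self {
        Self { scale: 1.0, still_frames: 0 }
    }

    /// Returns the resolution scale to use for the next frame.
    fn update(&mut self, camera_moved: bool) -> f32 {
        if camera_moved {
            self.still_frames = 0;
            self.scale = 0.25; // drop to quarter resolution while interacting
        } else {
            self.still_frames += 1;
            // After a short settle time, double the resolution each frame.
            if self.still_frames > 3 {
                self.scale = (self.scale * 2.0).min(1.0);
            }
        }
        self.scale
    }
}

fn main() {
    let mut res = AdaptiveRes::new();
    assert_eq!(res.update(true), 0.25); // camera moving: low resolution
    for _ in 0..10 {
        res.update(false); // camera still: resolution recovers
    }
    assert_eq!(res.scale, 1.0); // back to full resolution
}
```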

Describe alternatives you've considered
An option might be to support whatever comes closest to a reference implementation of Gaussian splatting (e.g., this one). But I'm not convinced this is a viable solution at this point, while there is still a lot of research going on at the renderer level.

@roym899 roym899 added enhancement New feature or request 👀 needs triage This issue needs to be triaged by the Rerun team labels Dec 14, 2023
@Wumpf Wumpf added 🔺 re_renderer affects re_renderer itself 📺 re_viewer affects re_viewer itself 🍏 primitives Relating to Rerun primitives and removed 👀 needs triage This issue needs to be triaged by the Rerun team labels Dec 19, 2023
@genemerewether

Here's a fun web-based real-time renderer for Gaussian splats in case you haven't seen it: https://github.com/antimatter15/splat

@emilk
Member

emilk commented Jan 29, 2024

I have a branch emilk/gaussian-splats where I've experimented with importing .ply files containing gaussian splats, as can be downloaded from https://poly.cam/tools/gaussian-splatting. It is currently blocked on implementing proper transparency (sorting the points back-to-front).
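The back-to-front sorting step that blocks the branch could be sketched like this (illustrative code, not from the branch): project each splat center onto the camera's view direction and draw the farthest splats first so alpha blending composites correctly.

```rust
// Sort gaussian splat centers back-to-front along the camera's view
// direction, as required for correct alpha blending.
fn sort_back_to_front(positions: &mut Vec<[f32; 3]>, view_dir: [f32; 3]) {
    // Larger projection onto the view direction = farther away; draw it first.
    positions.sort_by(|a, b| {
        let da = a[0] * view_dir[0] + a[1] * view_dir[1] + a[2] * view_dir[2];
        let db = b[0] * view_dir[0] + b[1] * view_dir[1] + b[2] * view_dir[2];
        db.partial_cmp(&da).unwrap()
    });
}

fn main() {
    let mut splats = vec![[0.0, 0.0, 1.0], [0.0, 0.0, 3.0], [0.0, 0.0, 2.0]];
    // Camera looks down +Z: the splat at z = 3 is farthest and comes first.
    sort_back_to_front(&mut splats, [0.0, 0.0, 1.0]);
    assert_eq!(splats, vec![[0.0, 0.0, 3.0], [0.0, 0.0, 2.0], [0.0, 0.0, 1.0]]);
}
```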

@emilk emilk mentioned this issue Jan 31, 2024
@Wumpf
Member

Wumpf commented Feb 2, 2024

This looks like a pretty good gaussian splat implementation, better than the ones floating around on Slack and Discord so far:
https://github.com/KeKsBoTer/web-splat
wgpu & egui based. Uses a radix sort compute shader for splat sorting (leaving that poor lil cpu alone!) (therefore it only works in WebGPU-enabled browsers, ofc).

@KeKsBoTer

I was looking into visualizing 3D Gaussian Splatting reconstructions with rerun.io and discovered this issue.
@Wumpf thanks for mentioning my renderer!
I would like to help with the integration of a 3D Gaussian Splatting renderer.

My suggestion would be to use the GPU radix sort if WebGPU is available. Otherwise, use a bitonic sorter on the CPU as a fallback (like poly.cam or this implementation, GaussianSplats3D).
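A bitonic sorting network, as used by such CPU fallbacks, could look roughly like this. This is a sketch for power-of-two input sizes over plain integer depth keys; it is not taken from any of the linked projects:

```rust
// In-place bitonic sort over u32 depth keys. Bitonic networks need a
// power-of-two input size; in practice the splat list would be padded
// with sentinel keys.
fn bitonic_sort(keys: &mut [u32]) {
    let n = keys.len();
    assert!(n.is_power_of_two(), "pad the input to a power of two first");
    let mut k = 2;
    while k <= n {
        let mut j = k / 2;
        while j > 0 {
            for i in 0..n {
                let l = i ^ j; // partner index in this stage
                if l > i {
                    // Direction of the bitonic subsequence containing i.
                    let ascending = (i & k) == 0;
                    if (keys[i] > keys[l]) == ascending {
                        keys.swap(i, l);
                    }
                }
            }
            j /= 2;
        }
        k *= 2;
    }
}

fn main() {
    let mut keys = vec![42u32, 7, 255, 0, 19, 3, 128, 64];
    bitonic_sort(&mut keys);
    assert_eq!(keys, vec![0, 3, 7, 19, 42, 64, 128, 255]);
}
```

The same compare-and-swap network maps directly onto a shader, which is why bitonic sort is a common choice when a compute-based radix sort is unavailable.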

@Wumpf
Member

Wumpf commented Feb 13, 2024

Thank you @KeKsBoTer I'll get back to you on that :)
I think the main difficulties will be formulating how the splats are ingested from the API, and integrating the splat rendering with the rest of our (fairly primitive) renderer in a meaningful way. As long as the interaction is between splats and opaque objects it's not that hard, since splat rendering can still participate in depth testing. Once other transparent objects come in, it gets a bit trickier without implementing fully general order-independent transparency (arguably not that far off once splat sorting is there). Alternatively, we could just have a dedicated view for the time being and not allow other objects in it.

@KeKsBoTer

KeKsBoTer commented Feb 28, 2024

I created a separate crate for our radix sort implementation: https://crates.io/crates/wgpu_sort.

@Wumpf
Member

Wumpf commented Mar 1, 2024

nice!! love it. So good to have this as a separate, well documented and even benchmarked library!

@Wumpf
Member

Wumpf commented Mar 1, 2024

@KeKsBoTer getting a bit off-topic from the original ticket here, but the bit about subgroup handling in the sorting algorithm gives me pause. I was scrolling through the shader code a little to understand the exact subgroup size dependency. As I understand it, https://github.com/KeKsBoTer/wgpu_sort/blob/master/src/radix_sort.wgsl#L267 assumes that any atomic writes by one subgroup member are immediately visible to any other subgroup member upon atomic load?
Since Intel (and I believe also newer AMD, which supports both 64- and 32-wide subgroups) uses compiler heuristics to determine the subgroup size, this may break arbitrarily depending on driver updates, yes?
In that case we wouldn't really be able to risk using this, since we can't predict when the assumption holds. The easiest way out would be to add a subgroup-control native-only feature to wgpu, but that would ofc preclude WebGPU.
... did I get all this right? I feel like I only have a very vague picture of what's going on :)

@KeKsBoTer

@Wumpf Yes, you are correct. I mention this in the Limitations section of the package README. As long as wgpu has no subgroup control (which it will hopefully have soon) it can potentially break.
We estimate/guess the subgroup size by sorting a small list and checking whether the result is sorted correctly.
This is not a 100% reliable method, but I have never seen it break.
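The probe idea could be sketched like this; this is hypothetical illustration code, not wgpu_sort's actual implementation, and `gpu_sort` here stands in for dispatching the sort shader compiled under a given subgroup-size assumption:

```rust
// Guess the usable subgroup size: run the sort on a small known list for
// each candidate size and keep the largest one whose output is correct.
fn probe_subgroup_size(gpu_sort: &dyn Fn(&mut [u32], u32)) -> u32 {
    let reference: Vec<u32> = (0..64).collect();
    for &size in &[64u32, 32, 16, 8, 4, 2] {
        let mut data: Vec<u32> = (0..64).rev().collect(); // known unsorted input
        gpu_sort(&mut data, size);
        if data == reference {
            return size; // largest candidate that still sorts correctly
        }
    }
    1 // subgroup size 1 is always safe, just slower
}

fn main() {
    // A stand-in "GPU sort" that only works when the assumed subgroup size
    // is at most 8, mimicking hardware with 8-wide subgroups.
    let fake = |data: &mut [u32], size: u32| {
        if size <= 8 {
            data.sort();
        }
    };
    assert_eq!(probe_subgroup_size(&fake), 8);
}
```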

To fix this problem in the meantime, you can simply set the subgroup size to 1 when compiling the shader.
This makes the sorting 100% safe but also slower. Here is a comparison for my NVIDIA A5000:

| Subgroup size | 10k | 100k | 1 million | 8 million |
| --- | --- | --- | --- | --- |
| 32 | 109.31µs | 110.636µs | 318.018µs | 1.6525ms |
| 1 | 391.55µs | 389.031µs | 869.413µs | 4.162672ms |

The sorting is roughly 3x slower but still more than fast enough to sort Gaussian Splatting scenes that typically have around 1 to 5 million points.

I hope this answers your questions / concerns.

@Wumpf
Member

Wumpf commented Mar 1, 2024

thanks for clearing this up! also nice benchmarks there again, super cool that you can test it that quickly as well :)


5 participants