GPU picking #6991

Closed · wants to merge 22 commits
Conversation

@torsteingrindvik commented Dec 20, 2022

gpu_picking.mp4

Meshes, UI elements, sprites

many-sprites.mp4

The many_sprites example; sprite transparency works (i.e., picking only happens within the visible part of a sprite, even when it is rendered on top of a rectangle).

picking_depth.mp4

Skinned meshes, depth queries, see top right text

Make 2d/3d objects and UI nodes pickable!
Try it via cargo r --example picking or (temporarily) cargo r --example animated_fox.

Opening this as a draft because there's still lots to do and I'm not sure GPU-based picking will be accepted.
I'd like to know whether this is something we might merge once it's in a better state; if so, I can keep working on it.

What it does for end users

Users may iterate over EventReader<PickedEvent>. Each event provides an Entity along with either Picked or Unpicked.
The events fire when the cursor moves over an entity (cached, i.e. events only sent when the picked/unpicked status changes).

Users may add a component Picking to cameras.
This component then allows querying for Option<Entity> and depth (f32) at any coordinate.

How it works summary

  • A picking component is added to a camera (or cameras)
  • Textures and buffers (for picking, and for depth in 3D) are prepared for each
  • The picking texture is an extra color attachment in the 2D, 3D, and UI passes
  • Each pass draws the entity index instead of an RGBA color to this extra attachment
  • A node after the UI pass copies the picking texture and depth texture to buffers
  • An app world system iterates over cursor events, does a lookup into the buffer to see if an entity exists
  • Send relevant events to end users.
  • A system in PreUpdate maps the buffers, allowing users to access them CPU side
  • A system in PostUpdate unmaps, allowing future GPU copies

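The buffer lookup in the steps above can be sketched in plain Rust. This is not the PR's actual code (names are hypothetical, and u32::MAX stands in for "no entity" here); it mainly illustrates the row-padding wrinkle: wgpu pads texture-to-buffer copy rows to a 256-byte alignment, so the lookup can't simply use width * 4.

```rust
/// wgpu requires rows in texture-to-buffer copies to be padded to this alignment.
const COPY_BYTES_PER_ROW_ALIGNMENT: usize = 256;

/// Bytes per row after padding, for a row of `width` u32 texels.
fn padded_bytes_per_row(width: usize) -> usize {
    let unpadded = width * std::mem::size_of::<u32>();
    let align = COPY_BYTES_PER_ROW_ALIGNMENT;
    ((unpadded + align - 1) / align) * align
}

/// Look up the entity index at (x, y) in a mapped picking buffer.
/// u32::MAX is assumed to mean "no entity at this pixel" (an assumption, not the PR's scheme).
fn entity_at(buffer: &[u8], width: usize, height: usize, x: usize, y: usize) -> Option<u32> {
    if x >= width || y >= height {
        return None;
    }
    let offset = y * padded_bytes_per_row(width) + x * 4;
    let index = u32::from_le_bytes([
        buffer[offset],
        buffer[offset + 1],
        buffer[offset + 2],
        buffer[offset + 3],
    ]);
    if index == u32::MAX { None } else { Some(index) }
}
```

The app-world system from the list would call something like entity_at at the cursor position and emit events only when the result changes.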
API example

A full example is shown in examples/apps/picking.rs.

// The `Picking::default()` component has been added to a camera.

// New API, query based
fn picking(
    mut cursor_moved: EventReader<CursorMoved>,
    picking_cameras: Query<(&Picking, &Camera)>,
) {
    for (picking, camera) in picking_cameras.iter() {
        for moved in cursor_moved.iter() {
            let coordinates = moved.position.as_uvec2();

            let depth = picking.depth(camera, coordinates);

            if let Some(entity) = picking.get_entity(camera, coordinates) {
                info!("Entity {entity:?} is at depth {depth}");
            }
        }
    }
}

// Old API (moved away from this after review comments), event based
fn picking_text(mut pick_events: EventReader<PickedEvent>, mut texts: Query<&mut Text>) {
    for pick_event in pick_events.iter() {
        let PickedEvent { entity, event } = pick_event;

        // The two variants differ only in the color applied.
        let color = match event {
            PickedEventVariant::Picked => COLOR_HOVERED,
            PickedEventVariant::Unpicked => COLOR_NORMAL,
        };

        if let Ok(mut text) = texts.get_mut(*entity) {
            for section in text.sections.iter_mut() {
                section.style.color = color;
            }
        }
    }
}

Remaining work

If this approach gets a green light, at least this list should be tackled:

  • Viewports support
  • Unpick when cursor moves outside of window? (no longer event based)
  • Touch events support (no longer event based)
  • Render nodes now assume picking is used. Will need to use Option<PickingTextures> instead of assuming it.
  • The 3D pass is currently only implemented for the Opaque3d part
  • Specialize pipelines based on whether picking is used or not. For example, consider adding/removing the entity index vertex attr based on picking/no picking.
  • Figure out how to copy out the depth texture when msaa > 1. copy_texture_to_buffer requires msaa == 1.
  • Try using RenderTarget instead of cameras. This would allow off-screen picking. (On reflection, the source is still a camera and only the target changes, so this shouldn't matter for picking.)
  • Make picking example use more types of render passes, e.g. alpha mask, transparent 3D. Useful both as an example and for testing.
    • Consider breaking up this PR to separate out the example such that review is less intimidating (based on diff size).
  • Performance checks

Open questions

  • What performance crimes have I committed? I followed the path of least resistance in order to get the entity index available to the relevant shaders
    • Especially, is a texture-to-buffer each frame a perf killer?
  • Is sprite batching affected by adding entity indices as a vertex attr? (no longer adding a vertex attr)
  • Where vertex attrs are used, would a uniform serve better?
    • Vertex attrs no longer used for sprites. A storage buffer is used instead.
    • TODO: The same for UiBatch.
  • Is WASM supported by this approach? I'm not aware of which GPU capabilities we can/cannot use in WASM builds.
    • I'm currently using a few storage buffers. AFAIK storage buffers can't be used on the web yet. We can work around this by using uniform buffers instead (web only) and either have a limited number of entities supported, or have several backing uniform buffers and no entity limit (I think).
  • Should depth be supported? If so, should depth queries be added in another PR? It's not strictly picking related (or required). We have it working for 3D cameras now though in this PR. See [Merged by Bors] - Add depth and normal prepass  #6284.

Also the below image shows the transparency_3d example's output after the main passes.
Top image is color, middle is picking, bottom is depth.
The transparent pass does not write to the depth buffer. How would we then report correct picking depth?
picking vs depth
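On the WASM open question above: the uniform-buffer fallback could split entity indices across several fixed-size uniform buffers. A hypothetical sketch of the index math, assuming WebGL2's guaranteed minimum uniform block size of 16 KiB (these names are not from the PR):

```rust
/// WebGL2 guarantees at least 16 KiB per uniform block.
const MAX_UNIFORM_BLOCK_BYTES: usize = 16 * 1024;
/// One u32 entity index per slot.
const SLOTS_PER_BUFFER: usize = MAX_UNIFORM_BLOCK_BYTES / std::mem::size_of::<u32>();

/// Map a global entity slot to (backing buffer index, slot within that buffer).
fn slot_location(global_slot: usize) -> (usize, usize) {
    (global_slot / SLOTS_PER_BUFFER, global_slot % SLOTS_PER_BUFFER)
}

/// Number of backing uniform buffers needed for `n` entity slots.
fn buffers_needed(n: usize) -> usize {
    (n + SLOTS_PER_BUFFER - 1) / SLOTS_PER_BUFFER
}
```

With this scheme there is no hard entity limit; the trade-off is extra bind groups and an indirection to pick the right buffer.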

Bugs

  • Sometimes maximizing the window has crashed the app. I think this has to do with the picking buffer being out of sync with the physical window size.
  • If a window's scale factor isn't 1.00, the picking is wrong. It still generates events, but at the wrong locations. Needs investigation.
    • I have now inspected the scale override example in RenderDoc; picking works correctly when the scale is set differently. Perhaps I fixed this along the way somehow.
  • Sometimes on my system the app keeps rendering (no frame drops) but I can't move the cursor for over a second at a time. Not sure why. I can't find the perpetrator in Tracy since the rendering still works smoothly.
    • This only happened on a specific GPU on a desktop I borrowed... or it was fixed along the way.
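For the scale-factor bug, the likely culprit is that cursor events arrive in logical pixels while the picking buffer is sized in physical pixels, so the lookup coordinate needs a conversion along these lines (a sketch, not the PR's code):

```rust
/// Convert a logical cursor position to physical texel coordinates.
/// Floors to integers, since the picking buffer is indexed by whole texels.
fn logical_to_physical(logical: (f64, f64), scale_factor: f64) -> (u32, u32) {
    (
        (logical.0 * scale_factor) as u32,
        (logical.1 * scale_factor) as u32,
    )
}
```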

Future work / out of scope

Single/few pixel optimization

See this article: https://webglfundamentals.org/webgl/lessons/webgl-picking.html
There is an optimization available where only a pixel (or a few pixels) around the cursor is rendered instead of the whole scene.
I have not attempted this.

Drag selection

Perhaps if the current approach (i.e. whole texture with entity indices) is kept, drag selection would be relatively easy?
If we look at a rectangle within the texture and look at which entity indices are contained within, we know what was selected.
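That rectangle scan could look like this against a CPU-side entity-index buffer (a plain-Rust sketch with hypothetical names; u32::MAX stands in for "no entity", and the buffer is assumed unpadded for simplicity):

```rust
use std::collections::HashSet;

/// Collect the distinct entity indices inside a screen-space rectangle.
/// `buffer` holds one u32 entity index per texel, row-major.
fn entities_in_rect(
    buffer: &[u32],
    width: usize,
    (x0, y0): (usize, usize),
    (x1, y1): (usize, usize),
) -> HashSet<u32> {
    let mut found = HashSet::new();
    for y in y0..=y1 {
        for x in x0..=x1 {
            let index = buffer[y * width + x];
            if index != u32::MAX {
                found.insert(index);
            }
        }
    }
    found
}
```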

Increasing pick area

Using exactly the mesh/UI node as the source for picking might be problematic; being too precise may be annoying for end users.
Perhaps there are tricks we can apply to the picking texture, or we could look up an area of the texture rather than a single pixel, to get around this.

Debug overlay

We could add a debug overlay which shows the picking texture.

Activity log

2023 January

  • Entity index value 0 works
  • Split screen example works
    • Had to allow several passes without clearing
  • Sprites work
    • Don't add entity index to vertex, also don't add it to a per-sprite-uniform, due to batching.
      Use a storage buffer which works with batches instead.
  • Sprite transparency works
    • Default is that if alpha == 0.0, picking buffer does nothing at that pixel
    • If alpha > 0.0, the alpha is set to 1.0 (in the shader). This means partial transparency will be picked.
      It's important to set the alpha to 1.0, else the entity index will be alpha blended between other values which makes no sense.
  • Both SpriteBatch and UiBatch work now in a similar way: They have a storage buffer each (two for sprites because the sprite pipeline has two vertex buffers). The storage buffers contain one entity index per object in the batch (ui or sprite). The storage buffer is indexed by vertex index, since both UI nodes and sprites predictably have one object every 6 vertices.
  • All main 3d passes work.
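The batch indexing in the last entries boils down to: with one quad (6 vertices) per sprite or UI node, the shader can recover an object's slot in the batch's storage buffer from the vertex index alone. The real code is WGSL; in Rust terms (hypothetical names), the lookup is just:

```rust
/// Two triangles per sprite / UI node quad.
const VERTICES_PER_OBJECT: u32 = 6;

/// Index into the batch's per-object entity-index storage buffer.
fn entity_index_for_vertex(entity_indices: &[u32], vertex_index: u32) -> u32 {
    entity_indices[(vertex_index / VERTICES_PER_OBJECT) as usize]
}
```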

@james7132 added the A-Rendering, A-Editor, and C-Enhancement labels on Dec 20, 2022
@aevyrie commented Dec 20, 2022

First off, this is awesome! I've been hoping for this for quite a while 🙂. I have a ton of opinions, because I've been working on bevy_mod_picking for OSS and production software for the past 2 years.

My main suggestion would be to take a step back from the user facing API, and instead just expose the CPU side entity index buffer. The picking example could add the shim code needed to look up the entity at the cursor position, but I'd suggest not baking it into the user facing API simply because of how much discussion needs to happen around how that behaves, which could quickly derail this PR. We could merge as-is, but the API would break significantly.

My preference would be for us to just steal the architecture I've written here: https://github.com/aevyrie/bevy_mod_picking/tree/beta to give users maximum flexibility (in another PR). I'm obviously biased, but I've spent hundreds of hours on this and have worked through quite a bit of review feedback on that architecture alone.

Edit: concretely, I'd suggest removing these:

  • An app world system iterates over cursor events, does a lookup into the buffer to see if an entity exists
  • Send relevant events to end users.
  • Unpick when cursor moves outside of window?
  • Touch events support

Some added questions:

  • How does this work with multiple windows?
  • Does it work with skinned meshes?
  • How do I opt out if I don't want the overhead?

@torsteingrindvik (author):

@aevyrie Thanks a lot for the feedback!

My main suggestion would be to take a step back from the user facing API, and instead just expose the CPU side entity index buffer.

I agree this is likely safer (in terms of not breaking APIs which will definitely not be correct on first iterations). I'll remove the user facing APIs.

The picking example could add the shim code needed to look up the entity at the cursor position

Then I'll do this.

My preference would be for us to just steal the architecture I've written here

I should read your code next and the review you linked.
How compatible would you say a buffer you can do a look-up via coordinates is with your architecture (this might be obvious when I read your beta branch but I'd like your thoughts anyway).

How does this work with multiple windows?

Currently a buffer and textures are tied to a camera.
Multiple windows will have multiple cameras. So this should just work, but I'll try it and see what happens.
One problem right now is that the events fired off right now do not say which window the cursor event happened in.
But if we remove the user facing API and expose the buffers we don't need to tackle that (good!).

On the other hand, multiple cameras within a window like for splitscreen is problematic mainly due to not handling viewports, but that's just a TODO not an architectural issue.

Does it work with skinned meshes?

Yes- I edited the original post to add a video of picking the animated fox example.
Added the commit too, but that's just in case it's useful for discussion, not intended to merge.

How do I opt out if I don't want the overhead?

Since buffers and textures are only added to cameras if they have the picking component, there should be negligible (practically zero) overhead.
So currently it's opt-in, not opt-out.

Note that there is some work to do here since I have to do some pipeline specialization and such to make this true, since currently I have hardcoded that the pipelines expect picking.

@aevyrie commented Dec 21, 2022

How compatible would you say a buffer you can do a look-up via coordinates is with your architecture (this might be obvious when I read your beta branch but I'd like your thoughts anyway).

The entire thing is built around the goal of making picking backends simple and composable. The API for writing a backend is dead simple: given a list of pointer locations (RenderTarget and position), for each pointer send an event with the list of entities and their depth under that pointer. The plugin then coalesces all of these backend events, and figures out the picking state for all pointers as well as any events (click, drag, hover, etc). This makes it possible to compose backends for e.g. a custom renderer, egui, a physics engine, etc, and get a single valid output.

A shader picking backend looking up coordinates in a buffer would probably be <150 LOC. The only complication would be depth, though we might be able to use the depth prepass to get depth data at that position, unless you can build that into the pipeline? Alternatively, the backend could just set the depth to a constant but configurable value for all entities, so you could composite the results from other backends above or below that depth. For example, the current egui backend reports all depths at -1.0, to ensure those intersections are always on top of the results from other backends.

Currently a buffer and textures are tied to a camera.

That sounds right to me. When trying to make picking work across viewports and even render-to-texture, I found that using RenderTargets was the most flexible way to accomplish this. IIRC this has a 1:1 mapping to cameras.

Continuing from the previous question, if each camera had a picking buffer as a component, this would make things pretty straightforward. The backend would look at the render target of the pointer, find the camera with the matching render target, then look up the pointer position in that buffer.

@torsteingrindvik (author):

The only complication would be depth, though we might be able to use the depth prepass to get depth data at that position, unless you can build that into the pipeline?

Worked on this today. Please see the new video: depth (from the GPU's perspective) can now be directly queried (side note: only works for MSAA == 1 right now).
I haven't yet looked at how your plugin determines depth (please chime in).

I found that using RenderTargets was the most flexible way

I will put this as a TODO. This seems like the correct way to me, since it will allow also picking from Image, so a user does not need to render to screen to do picking.

Also, I got rid of events. Please check out the new API example.

When I get around to fixing this PR such that Picking is not required, I think I could try rebasing your picking plugin on top of it, then making a backend based on this PR.

@aevyrie commented Dec 28, 2022

Only works for MSAA == 1 right now.

This seems like a sensible limitation for the entity buffer itself, I think it needs to be left aliased. I assume your intent is that the picking pipeline just won't specialize on the MSAA key.

I haven't yet looked at how your plugin determines depth (please chime in).

The backends all use raycasting or simple 2d hit testing, and return depth as the world-space distance from the camera. It looks like in your example you might be using the nonlinear (infinite reverse-z) GPU depth. I think world space depth might be more useful when reporting depth from a method. Internally though, it makes sense for the buffer itself to store depth in GPU-native format.
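To illustrate the distinction: with an infinite reverse-z perspective projection, a depth-buffer value d relates to view-space distance as distance = near / d, so converting GPU depth to camera-distance units is a one-liner (an editorial sketch assuming that projection, not necessarily what this PR stores):

```rust
/// Convert an infinite reverse-z depth-buffer value to view-space distance.
/// d == 1.0 is the near plane; d -> 0.0 approaches infinity.
fn linearize_reverse_z(d: f32, near: f32) -> f32 {
    near / d
}
```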

Also, I got rid of events. Please check out the new API example.

Makes sense to me!

When I get around to fixing this PR such that Picking is not required

It might be helpful to rename the Picking Component to something more precise, like EntityDepthBuffer, assuming the component isn't removed entirely.

I think I could try rebasing your picking plugin on top of this, then making a backend based on this PR.

That sounds fantastic! I have a bunch of examples I use for testing, we could use that to see if everything is working as expected. 🙂

@torsteingrindvik commented Jan 15, 2023

This seems like a sensible limitation for the entity buffer itself, I think it needs to be left aliased. I assume your intent is that the picking pipeline just won't specialize on the MSAA key.

I agree there shouldn't be a need for MSAA for picking.
However the issue right now is that if MSAA is set to 4, the main 3d pass only produces a depth texture with MSAA set to 4 too.
And wgpu does not allow copying multisampled textures directly to buffers. So I'd have to figure out a step in-between to fix that.

Also, I notice the 3d transparency pass does not write depth info. But I think picking a semi-opaque object should be possible. So there is a mismatch there; I added an image to the OP to show what I mean.

Also there is the depth prepass PR which I haven't looked at closely. Perhaps that clears things up. Perhaps there is no need for depth related changes in this PR after that.

It might be helpful to rename the Picking Component to something more precise, like EntityDepthBuffer, assuming the component isn't removed entirely.

I agree, I'll purge Picking and opt for EntityIndex or something when things shape up more.

I'm a bit closer to being ready to try it on your plugin now.
I added a video in the OP that shows sprite picking which works with sprite alpha.

EDIT: I forked bevy_mod_picking and tried pointing to my bevy fork on the picking branch. Strange compile issues happen then, I think it's because of several bevy checkouts in the dependency graph.
I tried doing a [patch.crates-io] section to override it instead but I get a warning "Patch was not used in the crate graph".
Any help on this appreciated.

@aevyrie commented Jan 18, 2023

EDIT: I forked bevy_mod_picking and tried pointing to my bevy fork on the picking branch. Strange compile issues happen then, I think it's because of several bevy checkouts in the dependency graph.
I tried doing a [patch.crates-io] section to override it instead but I get a warning "Patch was not used in the crate graph".
Any help on this appreciated.

I can make a mod_picking branch that uses this PR. Ping me on discord if you don't hear back within a day or two.

@JoeHowarth:

Just wondering if you still plan to work on this @torsteingrindvik or @aevyrie : )

@torsteingrindvik (author):

Just wondering if you still plan to work on this @torsteingrindvik or @aevyrie : )

I stopped because I wasn't entirely sure this would be accepted into Bevy, even if polished and cleaned up.
(And life stuff.)
@JoeHowarth would you want to adopt the PR?

@aevyrie commented May 2, 2023

I'm happy to help review, I would love to see this merged.

@nicopap commented May 5, 2023

GPU picking is very useful. I think it's absolutely necessary in order to support mesh animations. I'd also be happy to give a review.

@Jeff425 commented May 18, 2023

@torsteingrindvik @aevyrie Do you two think it would be possible to get this in for the next release? At least to the point where a plugin like aevyrie's bevy_mod_picking could use it as a backend? This seems like the last massive hurdle to get over in order to make animations fully functional. Awesome work btw! :)

@torsteingrindvik commented May 18, 2023

@torsteingrindvik @aevyrie Do you two think it would be possible to get this in for the next release? At least to the point where a plugin like aevyrie's bevy_mod_picking could use it as a backend? This seems like the last massive hurdle to get over in order to make animations fully functional. Awesome work btw! :)

I briefly tried rebasing this PR and seems a lot has changed in Bevy.
I'm quite busy for at least a week or two, but I think I'd like to pick this up again after that.

In the meantime I'm also happy if either

  1. Anyone wants to try to rebase against Bevy main (to help me out), or
  2. Anyone wants to continue this PR altogether

If not I might try rebasing again, or I may recreate the PR on main to get back into it properly.

I'm not sure exactly when the next release is (sometime in June?) so I can't promise anything from my side.

@aevyrie commented May 18, 2023

We (foresight) could sponsor @IceSentry to work on this ticket part-time at work if you're okay with him taking it over the finish line. I believe this PR predates the depth prepass which IceSentry wrote, so he might be well suited to help out.

@torsteingrindvik (author):

We (foresight) could sponsor @IceSentry to work on this ticket part-time at work if you're okay with him taking it over the finish line. I believe this PR predates the depth prepass which IceSentry wrote, so he might be well suited to help out.

Sure, if they (@IceSentry) would like that this sounds fine by me.

@IceSentry:

@torsteingrindvik Like @aevyrie mentioned, I'm happy to either help you or take it over.

Considering the age of the PR and how much rendering stuff has changed since then it might be easier to just start over. Let me know what you prefer.

@torsteingrindvik commented May 18, 2023

@torsteingrindvik Like @aevyrie mentioned, I'm happy to either help you or take it over.

Considering the age of the PR and how much rendering stuff has changed since then it might be easier to just start over. Let me know what you prefer.

My gut feeling is that if I were to pick it up myself I'd be happier starting from scratch.

EDIT: So if you want to pick it up then likely that's a less painful path for you too 🙂

@IceSentry:

My gut feeling is that if I were to pick it up myself I'd be happier starting from scratch.

Yep, that's my plan.

I'll open a draft PR sometime next week and add you as a co-author.

@aevyrie commented May 18, 2023

Awesome, thanks @IceSentry.

@torsteingrindvik thanks for all the work you did here, we'll be sure to properly credit you in the new branch.

@IceSentry mentioned this pull request on Jun 8, 2023
@torsteingrindvik (author):

Superseded by #8784, so closing.
