
Convert map_async from being async to being callback based #2698

Merged · 1 commit · May 31, 2022

Conversation

cwfitzgerald
Member

Connections

This is vaguely related to all sorts of polling issues.

Description

This removes the async from the map_async and (future) on_submitted_work_done APIs. We removed it for a couple of reasons:

  • It is a non-trivial task to poll a future once, because you need to set up all the noop waker stuff to be able to call poll.
  • These functions being async, because of how the async ecosystem generally works, implies there is a reactor somewhere to make the future resolve. This is not the case in wgpu and it being a callback makes it clearer that this is called by other code.
  • The underlying APIs for both wgpu-core and wasm are callback based, so we just expose this directly without getting some fairly complicated infrastructure involved.
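To make the shape of the change concrete, here is a hedged sketch using dummy stand-in types (`BufferSlice` and `BufferAsyncError` here are placeholders, not the real wgpu definitions):

```rust
// Hypothetical stand-ins for illustration; not the real wgpu types.
struct BufferSlice;
struct BufferAsyncError;

impl BufferSlice {
    // Old shape (pre-PR), roughly:
    //   fn map_async(&self) -> impl Future<Output = Result<(), BufferAsyncError>>
    // New shape (this PR): register a callback instead of returning a future.
    fn map_async(&self, callback: impl FnOnce(Result<(), BufferAsyncError>) + Send + 'static) {
        // In wgpu the callback fires later, driven by Device::poll or queue
        // submission; invoking it inline here just keeps the sketch runnable.
        callback(Ok(()));
    }
}

fn main() {
    BufferSlice.map_async(|result| println!("mapped ok: {}", result.is_ok()));
}
```

The caller no longer owns a future to drive; completion is delivered to whatever closure was registered.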

This is the first in a couple of PRs improving our async processing and backpressure story.
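The first bullet can be seen in miniature below: a std-only sketch (no wgpu involved) of the boilerplate needed just to poll a future a single time with a no-op waker.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker whose every vtable entry does nothing: the minimum ceremony
// required just to be allowed to call Future::poll.
fn noop_raw_waker() -> RawWaker {
    fn clone(_: *const ()) -> RawWaker { noop_raw_waker() }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    RawWaker::new(std::ptr::null(), &VTABLE)
}

// Poll a future exactly once, discarding any wakeups.
fn poll_once<F: Future + Unpin>(fut: &mut F) -> Poll<F::Output> {
    let waker = unsafe { Waker::from_raw(noop_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    Pin::new(fut).poll(&mut cx)
}

fn main() {
    let mut fut = std::future::ready(42);
    match poll_once(&mut fut) {
        Poll::Ready(v) => println!("ready: {}", v),
        Poll::Pending => println!("pending"),
    }
}
```

This is exactly the kind of machinery a callback-based API lets callers skip.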

Testing

Ran test cases.

@cwfitzgerald
Member Author

cwfitzgerald commented May 28, 2022

This one is a review party, as we're all stakeholders here :)

@cwfitzgerald cwfitzgerald force-pushed the async-to-callback branch 3 times, most recently from 85e7ff7 to 3933334 Compare May 28, 2022 07:35
Collaborator

@grovesNL left a comment

I think this is a great idea and it definitely makes the async control flow around poll a lot more explicit. That's really important, because the control flow around buffer mapping regularly causes confusion, especially when trying to wait for one of the futures to resolve inside an existing event loop.

@jimblandy already mentioned it on Matrix but it might be nice if we could consistently use callbacks or futures across the API. Using callbacks everywhere could be interesting because it would probably allow a lot of projects to avoid async/executors entirely (i.e. assuming they don't use futures anywhere besides wgpu).

The biggest downsides to me are that it feels like we'd be another step removed from the web API, so it's slightly harder to cross-reference JS examples/tutorials, and poll is still required for now (i.e. unless we address the async reactor issue somehow).

Another minor downside is that a lot of the Rust ecosystem seems to be moving towards futures (from callback-based APIs), so moving towards callbacks might be a little surprising to people.

Overall I slightly prefer going ahead with this approach and addressing the downsides however we can (e.g. great documentation explaining how it maps to the equivalent web API calls, still considering ways to eliminate poll, etc.).

It is a non-trivial task to poll a future once, because you need to set up all the noop waker stuff to be able to call poll.

Agreed, the current indirection through wakers feels a bit heavy. It's really nice to be able to await to wait for mapped buffers in general though, so using channels to accomplish approximately the same thing seems reasonable.
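As a self-contained sketch of that channel pattern: below, a spawned thread stands in for the wgpu internals that would invoke the callback (`map_async_like` is a made-up name for illustration, not a wgpu API).

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for a callback-based API like the new map_async: the callee
// invokes the callback later, from other code.
fn map_async_like(callback: impl FnOnce(Result<(), ()>) + Send + 'static) {
    // A thread plays the role of Device::poll driving completion.
    thread::spawn(move || callback(Ok(())));
}

fn main() {
    let (sender, receiver) = mpsc::channel();
    map_async_like(move |result| sender.send(result).unwrap());
    // Blocking on the channel is the moral equivalent of awaiting the old
    // future, but the wait is now explicit in the caller's code.
    let result = receiver.recv().unwrap();
    println!("mapping resolved: {}", result.is_ok());
}
```

An async caller can get the old ergonomics back the same way, by awaiting the receiving half of an async-aware channel instead of blocking on it.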

These functions being async, because of how the async ecosystem generally works, implies there is a reactor somewhere to make the future resolve. This is not the case in wgpu and it being a callback makes it clearer that this is called by other code.

I was hoping we'd eventually have some kind of reactor/integration to drive polling, so we could remove poll from the majority of user-code (except when really granular control is necessary). I like that approach because most people wouldn't have to worry about this difference between native/web (even if automatic polling had some kind of opt-out for the overhead of a thread or similar).

The underlying APIs for both wgpu-core and wasm are callback based, so we just expose this directly without getting some fairly complicated infrastructure involved.

The promise<->future mapping seems pretty common in Rust code working with web APIs that return Promises. Mapping promises to Rust callbacks is probably not typical, but might be appropriate here.

Review comments (resolved): wgpu/examples/capture/main.rs, wgpu/src/util/belt.rs, wgpu/src/lib.rs
Member

@jimblandy left a comment

I love this change. It's such a relief to see the Rust code able to just pass a closure naturally.

I have some concerns about the unsafe handling.

Review comments (resolved): wgpu/src/lib.rs (×5, mostly outdated), wgpu-core/src/resource.rs (×2, outdated), wgpu/examples/hello-compute/main.rs (outdated), wgpu-core/src/device/mod.rs (outdated), wgpu-core/src/device/queue.rs (outdated)
@cwfitzgerald cwfitzgerald force-pushed the async-to-callback branch 4 times, most recently from e7dbfed to 0858a18 Compare May 30, 2022 20:07
@cwfitzgerald
Member Author

cwfitzgerald commented May 30, 2022

Alright, I think I've addressed all the direct code comments — thank you both for the thorough review!

As for the more abstract stuff: (cc @grovesNL)

I was hoping we'd eventually have some kind of reactor/integration to drive polling, so we could remove poll from the majority of user-code (except when really granular control is necessary). I like that approach because most people wouldn't have to worry about this difference between native/web (even if automatic polling had some kind of opt-out for the overhead of a thread or similar).

I've always pushed really hard against this as it makes it way too easy to write really terrible code without knowing what you're doing:

let (sender, receiver) = flume::bounded(1);
buffer.slice(..).map_async(.., |_| sender.send(()).unwrap());
receiver.recv().unwrap();

With any kind of automatic polling this will work and we've just invented a slightly more verbose glReadBuffer. I'm kinda unhappy with the web that it forces the runtime to poll for you, but I get why you need to on the web.

It's not that much more code to do the same thing in a way that is obviously an anti-pattern:

let (sender, receiver) = flume::bounded(1);
buffer.slice(..).map_async(.., |_| sender.send(()).unwrap());
device.poll(Maintain::Wait(None)); // very clearly a hard wait
receiver.recv().unwrap();

I don't want to stop people from committing code sins, I just want them to know the sins they are committing.

There's absolutely a balance to be struck here. I don't want to make this artificially hard, but I want users to understand the consequences of the code they write and gpu -> cpu communication has big consequences for performance if done wrong.

I think the route forward is just stellar documentation. I am planning on writing a "so you want to read data from the gpu" article on the wiki that we can link in the docs once it's written.

jimblandy already mentioned it on Matrix but it might be nice if we could consistently use callbacks or futures across the API. Using callbacks everywhere could be interesting because it would probably allow a lot of projects to avoid async/executors entirely (i.e. assuming they don't use futures anywhere besides wgpu).

This is an interesting question, because it's basically asking how much we want to warp our API so that default native-first code works on wasm. There are good arguments in both directions, but that's probably outside the scope of this PR.

@grovesNL
Collaborator

There's absolutely a balance to be struck here. I don't want to make this artificially hard, but I want users to understand the consequences of the code they write and gpu -> cpu communication has big consequences for performance if done wrong.

Yeah definitely, I can appreciate wanting to make the performance trade-off more explicit. I think we could achieve that through documentation and examples instead of requiring extra function calls, though. People get confused about when/how to call poll, and it feels a bit strange that poll might not be required for the callback to fire, depending on whether submissions are going on around it, for example.

My concern is mostly that poll is misleading when targeting the web (especially Wait which can't be implemented), so that's why I'd prefer for automatic polling to be the default (i.e. optionally opt-in to manual polling on native only).

Either way both approaches should work fine with callbacks instead of Futures, so I don't think it blocks anything here.

Review comment (resolved): wgpu/Cargo.toml
@cwfitzgerald cwfitzgerald merged commit 32af4f5 into gfx-rs:master May 31, 2022
@cwfitzgerald cwfitzgerald deleted the async-to-callback branch May 31, 2022 15:22
moz-v2v-gh pushed a commit to mozilla/gecko-dev that referenced this pull request Jun 6, 2022
Minor changes are needed to the `mapAsync` implementation due to:
gfx-rs/wgpu#2698

Differential Revision: https://phabricator.services.mozilla.com/D147805
jamienicol pushed a commit to jamienicol/gecko that referenced this pull request Jun 6, 2022
Minor changes are needed to the `mapAsync` implementation due to:
gfx-rs/wgpu#2698

Differential Revision: https://phabricator.services.mozilla.com/D147805
tychedelia added a commit to tychedelia/nannou that referenced this pull request Oct 10, 2023
zmitchell pushed a commit to zmitchell/splatter that referenced this pull request Oct 18, 2023
5 participants