
Roadmap for 3D #891

Open
raphlinus opened this issue Apr 29, 2020 · 13 comments
Labels
discussion (needs feedback and ideas) · write-up (Detailed information / thoughts)

Comments

@raphlinus
Contributor

This issue captures my thinking on 3D, but I'm open to discussion, especially if people bring a lot of energy and motivation.

There's a lot of interest in allowing access to 3D graphics from druid, but at the same time it's not in any way blocking Runebender, so it's hard to prioritize.

That said, one reason I'm very interested in 3D APIs is to support better 2D rendering. There are at least two paths to that right now: Pathfinder and piet-gpu. These are fairly different in their approaches, as Pathfinder is designed for compatibility with installed GPU hardware and drivers, while piet-gpu explores cutting-edge compute capabilities. As of recently, Pathfinder exposes enough of the 2D imaging model that we could consider it, and there is already a Pathfinder piet backend in progress.

Thus, we need to consider the approach to 3D in layers. One is: what should druid-shell expose? That layer can be consumed by druid to provide the best piet experience possible, even without exposing 3D. The other is: what should druid expose?

There is also the question of complexity. There are many, many 3D APIs out there, with a complex matrix of compatibility and capabilities. Any approach to 3D must involve runtime detection, with some sort of fallback. Adding to the complexity, using 3D codepaths creates integration problems unique to desktop apps, not shared by the more typical game use cases: incremental present capability, low-latency present modes (based on wait objects), smooth resize, etc.

One very appealing approach is to adopt wgpu as the primary integration point. The runtime detection would be wgpu or no wgpu, plus of course finer grained feature detection as provided by wgpu. Not all platforms can support wgpu, but compatibility work is envisioned (from the wgpu web page, OpenGL is currently unsupported but in progress).
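To make the "wgpu or no wgpu, with fallback" idea concrete, here is a minimal std-only Rust sketch of runtime backend negotiation. The names (`Backend`, `negotiate_backend`) are hypothetical, and the `probe` closure stands in for a real capability check such as wgpu's adapter request returning `Some`:

```rust
// Hypothetical sketch of runtime backend negotiation: try each
// candidate in order of preference, falling back to a software path.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Backend {
    Wgpu,     // hardware path via wgpu, when an adapter is available
    Software, // CPU raster fallback, always available
}

/// Return the first backend whose probe succeeds; the probe stands in
/// for a real adapter request (plus finer-grained feature detection).
fn negotiate_backend<F: Fn(Backend) -> bool>(probe: F) -> Backend {
    for candidate in [Backend::Wgpu] {
        if probe(candidate) {
            return candidate;
        }
    }
    Backend::Software
}
```

In a real implementation the probe would also surface the finer-grained feature flags wgpu reports, so callers can adapt beyond the binary choice.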

There is another question of how to composite 3D content with the GUI. Again, two main approaches. One is to leverage the compositor capabilities of the platform, having loose coupling between the 2D and 3D pathways. Another is to use a GPU-resident Texture as the integration point. This would involve synchronization primitives to signal a frame request to the 3D subsystem (and similarly to negotiate resizes, which can get quite tricky with asynchrony), and a semaphore or fence of some kind to signal back to the 2D world that the texture is ready. Then the 2D world can consume that texture as it likes, applying clipping and translation (needed for scrolling), drawing other UI on top of it, etc. My preference is fairly strongly for the latter, though as always there are tradeoffs.
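As a rough illustration of that handshake, here is a std-only Rust sketch in which a channel send plays the role of the fence/semaphore and plain pixel data stands in for a GPU-resident texture. All names are hypothetical; a real implementation would share a GPU texture and signal with actual synchronization primitives:

```rust
use std::sync::mpsc;
use std::thread;

// Messages between the 2D (UI) side and the 3D subsystem. In a real
// GPU implementation, `pixels` would be a shared GPU texture handle
// and the completion send would be a fence or semaphore signal.
struct FrameRequest {
    width: u32,
    height: u32,
}

struct FrameReady {
    pixels: Vec<u8>,
}

/// Spawn a stand-in "3D subsystem" that renders each requested frame
/// at the negotiated size and signals completion back over a channel.
fn spawn_renderer(rx: mpsc::Receiver<FrameRequest>, tx: mpsc::Sender<FrameReady>) {
    thread::spawn(move || {
        for req in rx {
            // "Render": fill an RGBA buffer of the requested size.
            let pixels = vec![0u8; (req.width * req.height * 4) as usize];
            // Sending the result plays the role of signalling the fence.
            let _ = tx.send(FrameReady { pixels });
        }
    });
}

/// One full frame negotiation: request, render, wait for completion.
fn round_trip(width: u32, height: u32) -> usize {
    let (req_tx, req_rx) = mpsc::channel();
    let (done_tx, done_rx) = mpsc::channel();
    spawn_renderer(req_rx, done_tx);
    req_tx.send(FrameRequest { width, height }).unwrap();
    done_rx.recv().unwrap().pixels.len()
}
```

The resize negotiation mentioned above would extend `FrameRequest` with a size generation counter so stale frames can be discarded, which is where the asynchrony gets tricky.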

Since wgpu is not really mature yet (among other things, Pathfinder does not yet have a wgpu back-end, though it likely will soon), if we want to make faster progress we would need to add dynamic negotiation for a broader range of GPU interfaces. It's possible, but I'm certainly not enthusiastic enough about that to put time into it myself.

Discussion is welcome; we can use this issue.

@luleyleo added the discussion and write-up labels May 15, 2020
@nicoburns

Again, two main approaches. One is to leverage the compositor capabilities of the platform, having loose coupling between the 2D and 3D pathways. Another is to use a GPU-resident Texture as the integration point

I believe Firefox recently went through a relatively involved process of integrating with the system compositor, as it was the only way they could get decent power usage on macOS. (Firefox was all but unusable on Macs' high-DPI displays scaled to a non-native resolution until this patch landed. Chrome also uses this approach on macOS.)

@raphlinus
Contributor Author

I'm aware of this (we've discussed it on Zulip). These are very difficult and complex tradeoffs. At the risk of oversimplifying, Apple pulls you into their way of building apps (slow rendering, fix everything in the compositor) and punishes you if you don't fit into that, while the direction other platforms are moving rewards rendering the final appearance into a swapchain buffer and presenting that with minimal friction.

It's very difficult to plumb a platform compositor abstraction up to apps; I find the simplicity of the non-compositor approach really appealing. It will increase power usage on macOS for certain use cases, but it depends on the workload. The biggest win for the compositor is scrolling of otherwise static content. I think we can get back some of the power by having very efficient rendering.

@msiglreith
Contributor

A hard (and interesting) issue in this regard is priority handling of the different workloads on the GPU. Newer hardware supports finer-grained priority mechanisms, and graphics APIs also expose queue priorities; nonetheless, ensuring a smooth UI experience is not easy to achieve, depending on the GPU load imposed by the 3D scene ):

@SimonSapin

The biggest win for the compositor is scrolling of otherwise static content.

I think another scenario where this Firefox work had significant impact is video playback.

@sztomi

sztomi commented Jul 6, 2020

Just wanted to chime in here mentioning my own use case: I'd like to create an mpv GUI. The best way to embed libmpv is to give it access to an OpenGL context.

@SethDusek

I am not sure if this is the right place to post this, but to achieve smooth resize, wouldn't using display timing be the best way? Basically, there are APIs such as Wayland Presentation Time and VK_GOOGLE_display_timing that tell you at what point your buffer gets displayed on the screen. When receiving resize events, druid could accumulate these events until it needs to render to reach the next display flip in time. So if the next flip is 16.6 ms from now and rendering druid takes ~2 ms, it could accumulate resize events until ~14.6 ms to get the latest possible resize state of the window.
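The arithmetic in that scheme is simple; a hypothetical sketch (function name is mine, not any real druid API):

```rust
/// How long we can keep accumulating resize events before we must
/// start rendering to hit the next display flip, in milliseconds.
/// Returns 0.0 when the render estimate already exceeds the time left.
fn accumulation_budget_ms(time_to_next_flip_ms: f64, render_estimate_ms: f64) -> f64 {
    (time_to_next_flip_ms - render_estimate_ms).max(0.0)
}
```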

@SimonSapin

It’s likely hard to know in advance exactly how long rendering is gonna take. If the previous frame took 2.1ms and you start 2.1ms before the next tick and the scene got slightly more complex or another process is keeping the hardware slightly more busy, this frame might take 2.5ms and miss the whole refresh tick.

@SethDusek

That's true, yeah, but there could be some extra time given, or dynamic adjustments made based on how reliable the predictions were. Some programs like Weston have a pre-configured time to wait before starting rendering too; I believe 6 ms was the default.
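One way to make "dynamic adjustments" concrete is an exponentially weighted moving average of observed frame times plus a fixed safety margin. The sketch below is hypothetical (it is not what Weston actually implements), just one plausible predictor:

```rust
/// Conservative render-time prediction: an exponentially weighted
/// moving average of observed frame times plus a fixed safety margin.
struct FrameTimePredictor {
    estimate_ms: f64,
    alpha: f64,     // smoothing factor in (0, 1]; higher reacts faster
    margin_ms: f64, // fixed slack, cf. Weston's pre-configured window
}

impl FrameTimePredictor {
    fn new(initial_ms: f64, alpha: f64, margin_ms: f64) -> Self {
        Self { estimate_ms: initial_ms, alpha, margin_ms }
    }

    /// Fold a newly observed frame time into the running estimate.
    fn observe(&mut self, frame_ms: f64) {
        self.estimate_ms = self.alpha * frame_ms + (1.0 - self.alpha) * self.estimate_ms;
    }

    /// How far before the next flip rendering should start.
    fn lead_time_ms(&self) -> f64 {
        self.estimate_ms + self.margin_ms
    }
}
```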

@black7375

@raphlinus
Contributor Author

There is indeed. I'd be curious how that performs compared with the existing platform renderers. It's also possible there will be a port of piet-gpu to wgpu before long. For the longish term, I am pushing forward on piet-gpu as I believe it will be the fastest and highest quality solution, but it will be a while before that is mature enough for production use, which leaves a gap in the meantime.

@sysint64
Copy link
Contributor

I've successfully managed to use wgpu with the built-in druid renderer. I render everything into a texture and draw the final texture into the druid frame. Unfortunately, it seems like druid can't draw big images fast enough; because of that, it starts lagging at full screen. Maybe I'm doing something wrong.

Here you can see an example: https://github.com/sysint64/druid-wgpu-poc

Video:

Screen.Recording.2022-10-17.at.19.49.16.mov

@raphlinus
Contributor Author

Intriguing! It's not surprising this path is slow, as it involves shoving lots of pixel bytes back to the CPU. We're going in a new direction, about which I'll be writing more soon, but these slides might be interesting in the meantime. That will be GPU-first, and one way or another we will have performant integration with wgpu.
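A back-of-envelope calculation suggests why the readback path struggles at full screen: a 4K RGBA window at 60 Hz is roughly 2 GB/s of pixel traffic back to the CPU, before the data is uploaded again for compositing. A hypothetical sketch of the arithmetic:

```rust
/// Bytes per second needed to stream full-window RGBA (4 bytes per
/// pixel) frames back to the CPU, which is roughly what the
/// texture-readback path costs before re-upload.
fn readback_bytes_per_sec(width: u64, height: u64, fps: u64) -> u64 {
    width * height * 4 * fps
}
```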

@heavyrain266

Could there be a way to embed a viewport from raw DX12 and/or Metal? I'm slowly working on a rendering engine for creating Pixar/DreamWorks-style animated movies and would like to know whether it is possible to implement such a feature. Druid as the editor frontend looks like an obvious choice over immediate-mode UIs such as egui or iced. The runtime is very specific because of mesh shaders, raytracing, and other modern features which are not supported by wgpu.

Of course I could just write a custom backend, but I wanted to know if there is any other way.


10 participants