Proposal for WebGPU as a new Standard Cross-Platform Graphics API (OpenGL replacement) #295

Open
missmah opened this issue May 12, 2019 · 36 comments

@missmah

commented May 12, 2019

From the perspective of writing cross-platform engines, it would be nice to have a new open-source "write once, run anywhere" type graphics API. WebGPU seems like it could hopefully replace OpenGL for that purpose, and be a much lower-level, more performant baseline API.

A few things would be vital to make this truly worthwhile:

  • Support SPIR-V Natively in WebGPU (for performance reasons)
    • This doesn't preclude also supporting WHLSL, but the added complexity probably isn't worth making it part of the API itself
  • Support WebGPU extensions as a first-class citizen to enable the API to grow to support new use-cases not exposed by the initial spec (mesh shaders, ray tracing, etc.)
    • Extensions could be community prototyped/developed open-source on Dawn/wgpu
  • Support zero-cost bi-directional inter-op with backing API (On Platforms other than the Web)
    • Let users choose which behind-the-scenes API a WebGPU device gets created with
    • If a WebGPU API device is a Vulkan device behind the scenes, enable access to that VkDevice directly
      • In this scenario, enable some kind of queryable mapping between WebGPU objects and Vulkan objects

What would this enable that's interesting?

  • Write common renderer codepaths in WebGPU instead of engine abstraction layer
    • Support a high level of default performance and feature set everywhere
    • Faster renderer development time - fewer GfxAPI abstraction layers
      • More development time can be spent optimizing where it matters
    • Improved reliability of abstraction layer, because everyone is using the same one (WebGPU)
    • Community optimizations/improvements to the abstraction layer benefit everyone
  • Surgically write optional optimized codepaths in lower-level APIs
    • Easily support specific optimized use-cases by inter-op to platform/vendor APIs
      • (Game Console specific paths, bleeding edge HW/driver extensions, Gfx optimizations not allowed by WebGPU, etc.)
    • Only pay cost of specific "bleeding-edge" codepaths where the ROI makes sense
  • As a community, working in open-source, promote very common/important optimized codepaths towards becoming WebGPU extensions
    • Goal: Last year's most successful disparate optimized/specialized codepaths become next year's common WebGPU extensions
    • As underlying Platform APIs like Vulkan/D3D12/Metal progress to adopt common/consensus support for features, WebGPU can quickly provide extensions to support those features in the common codepath

Why isn't this just a job for bgfx or other higher-level rendering APIs?

  • For the same reason that WebGPU isn't implemented on bgfx or other such APIs!
    • Developers creating rendering engines are not looking to build on top of someone else's high-level rendering abstractions - they're interested in creating new rendering abstractions/algorithms/data structures/paradigms which match their goals. The "common" hardware abstraction layer should be as minimalist as possible given the other constraints (WebGPU's goal is exactly this).
  • Additionally, those APIs often don't support the aforementioned proposed use-case of providing surgical and seamless inter-op with multiple other APIs as-needed (without heavy modification).

Unity, Unreal, and other large engines already have this abstraction layer written, and already support all the other APIs; won't simply adding a WebGPU back-end be ideal for them?

  • Yes, large established engines like Unity, Unreal, and others, will probably start by simply adding a WebGPU back-end to their existing abstractions.
  • But, for new green-field renderers (or renderers with less legacy/scope), it could make development time much faster and less error-prone to directly target WebGPU first, and then expand to support specialized codepaths (as mentioned above), where they can provide the most additional benefit/ROI.
  • Furthermore, for certain middleware libraries, it could be ideal to primarily support WebGPU internally, and support interop with other APIs for users who are not using WebGPU directly.

Simple Middleware Use-case Example

  • Let's say I want to write a GPU/compute-optimized ray intersection library, with support for certain BVH heuristics, etc. I want this library to work on Windows, Linux, OSX, Android, iOS, the Web, etc. This library should be usable by game/graphics engines which want a quick path to optimized ray tracing.
    • If WebCL were actually supported, I might be able to use OpenCL + WebCL.
      • Except on Apple!
      • And some number of other devices?
      • Oh yeah, WebCL isn't supported, either...
    • OK, I guess I could use OpenGL Compute Shaders + WebGL Compute Shaders
      • Except OpenGL is deprecated by Apple!
      • WebGL Compute Shaders - Coming Soon
        • Can I run them on OSX natively?
    • OK, I guess I could re-implement my BVH code for every platform...
      • OpenGL/WebGL
      • Metal
      • ...
      • Wait, now I have to re-write my BVH shaders, or write a shader generator system!
      • Oh, and also if the user is using Vulkan, how do I write an interop system to take their data and convert it to OpenGL and back?
    • ...

Again and again, at various scales within industry, this pattern plays out. So, not only does the web suffer; the majority of cross-platform graphics renderers continue to suffer. We have too many incompatible graphics APIs, none of which covers all of the important platforms or offers suitable and broad enough inter-op with other APIs. As a result, everyone has to either re-invent a whole lot of very similar wheels or make large API-complexity compromises. The fact that some very large and successful companies have suffered this pain and built workarounds for it doesn't mean the situation is healthy. The sheer daunting complexity of working around a lot of these issues probably stops most would-be cross-platform projects in their tracks, and severely limits others.

I think the industry could use a common denominator, fairly low-level, open-source/open-standard-extensible API. It seems like WebGPU will have to, by its very nature, be that common-denominator. It seems like with some attention to detail, WebGPU could become something like CommonGPU/StandardGPU/UniversalGPU.

@procedural

commented May 13, 2019

Nah.

@magcius

commented May 13, 2019

I've thought a lot about this problem, and I've come to the conclusion that the modern APIs (Metal, Direct3D 12, Vulkan) leave no room for middleware. Any serious application either wants to or needs to have full control over resource allocation, shader binding models, barriers, render graphs, etc.

If an app wants to use WebGPU as a cross-API modern graphics platform, there's Dawn and wgpu, but it won't solve the middleware case.

@floooh

commented May 13, 2019

My 2 cents: I think it's all about a class of small applications (and the people writing those applications) for which D3D12 and Vulkan are way too verbose (e.g. >1000 lines of code to get a triangle on screen), and where "general middleware" like Unity or Unreal is simply overkill. Such small applications are more typical for the web platform, but also make sense on mobile and desktop because they could start instantly without a long installation phase.

For example take these emulators: https://floooh.github.io/tiny8bit/

These are written in C, compiled to WASM, iOS, Android, Windows, macOS and Linux from the same code, and are somewhere between 45 and 120 KBytes in size.

On the web those render through WebGL (on native platforms via D3D11, Metal, GL), through very minimal platform abstraction APIs (https://github.com/floooh/sokol). WebGPU would make sense here because the 3D backend code would actually be smaller and much cleaner than the GL backend, so less code to maintain for me. With a Vulkan or D3D12 backend, the opposite would be true, a lot of complex code to maintain for absolutely zero benefit (for such simple applications).

Or think about small, specialized AR apps on mobile phones.

In my opinion, 3D APIs like WebGPU and Metal are the true successors to GL and D3D11 as "programmer-friendly" APIs, not Vulkan and D3D12. E.g. Metal allows writing a 3D application from scratch without a "sanity layer" in between and without employing a whole engineering team. This idea has been lost in Vulkan and D3D12, but preserved in WebGPU. IMHO WebGPU is so far the only really serious approach to creating a simple API around the modern 3D APIs, so it's in the best position to become the "standard" successor to GL and D3D11, targeting the same audience as D3D11 and Metal (Vulkan and D3D12 serve a completely different target audience IMHO).

A portable C implementation of WebGPU in a library (ideally statically linked) for (at least) Android, iOS, macOS, Windows and Linux, and via WASM for the web, makes a whole lot of sense. The main problem is indeed shaders. We need SPIRV there, otherwise it's back to an offline cross-compiling solution via SPIRV-Cross. That works, but it's a hack and adds layers of complexity to IDE integrations and build systems.

Finally, the main problem with "3rd-party" wrapper APIs (like I and many others have written so far) is that nobody will ever agree on one, otherwise such an API would already have emerged. This leads to fragmentation and much duplicated work: everybody is basically writing the exact same D3D12 and Vulkan wrapper APIs in in-house code bases, and if not the exact same, then at least large parts of them (all the pointless initialization code, for instance).

@procedural

commented May 13, 2019

nobody will ever agree on one

A portable C implementation for WebGPU

It'll be up to Mozilla Corporation to decide which language they will choose to implement WebGPU in, not you. Now please state that you will be fine if it'll be implemented in Rust so you won't contribute to disagreement you're trying to avoid. Thank you for your cooperation.

@floooh

commented May 13, 2019

Now please state that you will be fine if it'll be implemented in Rust so you won't contribute to disagreement you're trying to avoid.

As long as there is a C API, I really don't care much about the language the library is implemented in.
There's also Google's Dawn btw, which is implemented in C++, but will (AFAIK) also get a C API.

PS, link: https://dawn.googlesource.com/dawn/

@procedural

commented May 13, 2019

@floooh you want a little 32 kilobyte C library that is easy to use and talks to a driver directly? You'll get one. Not from the people who don't care how many bytes it will take or how fast it will call a backend GPU API, tho.

@procedural

commented May 13, 2019

And generally, you people should beg for lower-level GPU access (lower than Vulkan/D3D/Metal), because we already ran a multi-billion-dollar experiment, with the best-paid driver engineers in the world, to see whether a simplified GPU API could be pulled off. The best we got is D3D11, which pegs the CPU and can easily be beaten by any modern API; the worst we got is the OpenGL clown town. Quit asking for APIs which are historically proven to be insufficient for programming embarrassingly parallel hardware.

@floooh

commented May 13, 2019

And generally, you people should beg for lower level GPU access

I agree, but let's not derail the thread (3D APIs should disappear completely into the compiler, but without GPU vendors opening up their ISAs this is wishful thinking). Your information about D3D11 is objectively wrong though (look at games which have both a D3D11 and Vk backend, it's hard to find one where the Vk backend performs better).

@procedural

commented May 13, 2019

Your information about D3D11 is objectively wrong though

You want me to assume you have 'objectively right' data that shows Vulkan performance is always the same as with D3D11?

You could link that 'objectively right' data if you had one, but since you don't let me link mine:

DX11 avg | 26.6 ms/f (37.6 fps) | 18.87 ms/f (52.9 fps) | 13.24 ms/f (78.4 fps)
Mantle avg | 23.3 ms/f (43 fps) | 15.08 ms/f (66.3 fps) | 8.38 ms/f (121.5 fps)

Source: https://www.battlefield.com/news/mantle-renderer-now-available-in-battlefield-4 (copy)

Unless you think Mantle is different from Vulkan and performs better than Vulkan, this data shows the performance difference that I and the people at DICE saw compared to D3D11.

@rootext

commented May 13, 2019

Support zero-cost bi-directional inter-op with backing API

It's utopia.

@magcius

commented May 13, 2019

@floooh if you want a cross-platform graphics platform layer or renderer, there are quite a few open-source ones, with varying degrees of power up and down the stack. WebGPU is one way to design a platform layer, but there are lots and lots of others, and certainly not all of them will have the same constraints that WebGPU does: WebGPU makes it difficult to write an optimized render graph because of all the barrier tracking it does automatically, and sub-buffer uploads are also very much up in the air. Any professional renderer is going to want those things. The tradeoffs were made on the side of safety.

Mozilla has tried to position wgpu as a nice open-source library if you want a graphics platform layer for your application: https://gfx-rs.github.io/2019/03/06/wgpu.html . I think it's a fine and sensible choice, but it's certainly not the only choice out there.

Ultimately, you are going to be building a renderer on top of your platform layer to begin with, because you don't want to be emitting raw draw calls from your object graph, you want to sort transparent objects, you want to batch buffer uploads to the start of the frame, and so on.

Also, D3D11 tends to perform better on NVIDIA because NVIDIA's driver applies plenty of game-specific heuristics and shader hacks, and, for a multitude of complex reasons, those hacks have not made their way into the Vulkan drivers. For other companies that did not invest millions of dollars in a driver moat, D3D12/Vulkan already outperforms D3D11.

@floooh

commented May 13, 2019

WebGPU is one way to design a platform layer, but there are lots and lots of others...

Yes agreed, but such an API doesn't have to be everything to everybody, it just has to provide a better cross-platform alternative to OpenGL with a drastically lower "lines of code to triangle on screen" count than Vk/D3D12.

The various open-source wrapper APIs (mine included) lack the extensive specification work and the conformance- and compatibility-tests that WebGPU (hopefully) has.

For people who need more explicit access at the cost of having to maintain a lot more code and handling all the GPU-architecture-specific peculiarities themselves, there's still Vk and D3D12. As I said above, Vk and D3D12 are not a replacement for D3D11 or OpenGL though.

Just continuing to use OpenGL and D3D11 isn't an option either, since GPU vendor debugging- and profiling-tools are already starting to drop support for those APIs.

The real elephant in the room is the shader question though. Yet another "high level shader language" isn't helpful.

WebGPU makes it difficult to write an optimized render graph because of all the barrier tracking it does automatically

True, but Metal 1 shows that one can still improve the OpenGL / D3D11 programming model drastically without explicit resource barriers.

...and sub-buffer uploads are also very much up in the air.

Agreed, but more direct buffer / image content access could be implemented in an API extension that's only available in the native API version with relaxed security requirements.

@kvark

Contributor

commented May 13, 2019

@missmah thank you for writing down this proposal!

There is strong support within this W3C community group (CG) for exposing the WebGPU API to native applications (in addition to the Web), with an aspiration for it to become the new cross-platform API of choice for many developers, just like you requested. However, designing a native API is not in the scope of work for this CG. It's possible to form another group, possibly at another standards body (e.g. Khronos), that would research the prospect of WebGPU on native, suggest extensions, etc., but this hasn't happened yet. The main focus of the W3C CG is still the Web.

Support SPIR-V Natively in WebGPU (for performance reasons)

Please expand on the reasons you have in mind. AFAIK, actual shader translation (from SPIRV to anything) is a drop in the ocean compared to the total time spent in pipeline creation.

Support zero-cost bi-directional inter-op with backing API

I find this requirement hard to achieve. WebGPU is designed to be a safe and portable API. In order to achieve this, we need full control of the low-level primitives. Whenever there is a breach (i.e. a VkDevice exposed to the user), we'd have a hard time keeping our internal structures in sync with the low-level state, which would negatively affect correctness, validation, and portability.

Write common renderer codepaths in WebGPU instead of engine abstraction layer

The main advantage of WebGPU over just some libraries out there could be the existence of a spec for this API that the implementations follow and conform to, with a rich test suite. I said "could be", because native applications don't use the Web API directly; they'd need C headers specified, and this, again, is not a goal of this CG...

Goal: Last year's most successful disparate optimized/specialized codepaths become next year's common WebGPU extensions

Swarming the API with extensions would harm portability of applications. I would prefer a cautious approach here by minimizing the number of extensions we'll have exposed.

Yes, large established engines like Unity, Unreal, and others, will probably start by simply adding a WebGPU back-end to their existing abstractions.
I think the industry could use a common denominator, fairly low-level, open-source/open-standard-extensible API. It seems like WebGPU will have to, by its very nature, be that common-denominator.

I'm fairly sure this is an unrealistic goal, and I share @floooh's position. WebGPU is designed for the Web, which puts certain constraints on the API and involves trade-offs that aren't always best for running on native. For example, we can't return errors from Device methods, because we assume the device may live in a separate GPU process.

WebGPU is good for small-to-medium-size applications that aren't too CPU-limited in their rendering code. Larger engines and applications could use WebGPU as a Web target, but it's not reasonable to expect them to use it on native targets. Established engines already support low-level APIs with less overhead (than going through WebGPU), and they can resort to the help of Vulkan Portability for wider reach.

@procedural
Your attacks are counter-productive. wgpu is developed as a C API to begin with (with Rust applications using it over a wrapper wgpu-rs, just like other C libraries are wrapped) and strives to be header-compatible with Dawn.

@kvark kvark added the position label May 13, 2019

@Kangz

Contributor

commented May 13, 2019

Thanks @kvark, that basically hits 100% of what I was going to reply.

Unfortunately this group's focus is on making a Web API, and I don't think it will be possible to expand it to cover native because at least two of the group's participants have their own native API.

However, it would be amazing to have the main C APIs for wgpu and Dawn match so that they are interoperable. As WebGPU implementations, they would benefit from extensive testing, tuning, and working around driver bugs. The shared C API would be a de-facto standard API.

@missmah

Author

commented May 13, 2019

@kvark, @Kangz thanks for the kind and thoughtful replies!

I think I partially failed to appreciate the limitations of this CG, and also failed to word the intent of my proposal correctly. Sorry about that!

I mostly agree with both of you, that ultimately what we are talking about here is another working group/project, which would be more directly based upon https://github.com/gfx-rs/wgpu and dawn.

That group's goal would be to, as you pointed out, ensure that those two implementations' APIs continue to match and remain interoperable going forward, and to ensure that there is a rich test suite for the larger native API, etc.

The proposal I made about being able to directly access underlying devices, get mappings from WebGPU Objects<-->Underlying Device Objects, etc. is obviously not an API that could be exposed on the Web. It appears this proposal belongs more in a new working group than it does in this working group.

However, my hope with this proposal was also to get consensus around the idea of avoiding building a Web API which would unnecessarily complicate the interplay between Native and WebGPU. If the Native API will support this paradigm, there may be changes to the way devices are enumerated and created that should ripple over into the WebGPU API, in order to keep the two from diverging too much.

My point about SPIR-V was in the same vein. Having the two APIs diverge with respect to how they handle shaders is a big compatibility issue. (I believe that requiring WHLSL for the Web API would also be a bad choice for performance, but I'll address that separately in a future comment.)

It sounds like a next step would be proposing that a parallel CG for Native be created. But I still don't think these CGs should be completely independent of each other, or we risk conflicting decisions which unnecessarily complicate the interplay between the "Native" API and the "Web" API.

Thoughts on this?

Thanks again for the constructive feedback and discussion!

@jacobbogers

commented May 13, 2019

What's the difference between WebGPU and WebGL compute shaders from the Khronos spec?

@missmah

Author

commented May 13, 2019

Separately, I'd like to briefly address the viability and importance of having minimal divergence between the Native and Web APIs.

Assertion: End-user code shouldn't need to be littered with #if NATIVE all over the place -- the number of places where something like that is required should be minimized (which is different from the number of places where it's optional).


As an example, our current C/C++ codebase (where I work) has a "GL" renderer plugin. This plugin supports Desktop OpenGL, OpenGL ES, and WebGL (via Emscripten) -- and I don't think there are more than a small handful of places where there's any special codepath for Emscripten.

There are some places where optional, more-optimized codepaths are supported based on extensions and flags (things like multi-draw, draw-instanced, multi-viewport, etc.), and Emscripten generally does not benefit from these; but, generally speaking, the vast majority of code is shared between the Desktop, Mobile, and Web codepaths in this particular GL plugin.


My argument is that this is a real portability strength of the current implementations of OpenGL/GLES/WebGL, and I really believe that there's value in minimizing divergence within the "common subset" between Native and Web.

I'd go so far as to say that we most likely would not have ported our renderer to the Web yet if there had been a much larger divergence between Web/Native in OpenGL-land.


Finally, with respect to my proposal about the Native API being able to access the underlying device, I think nv_commandlist serves as a good example of how that kind of paradigm can work really well (you don't have to re-write your whole renderer in nv_commandlist to be able to benefit from using it for certain parts of your renderer, where the ROI is deemed worth it).

@missmah

Author

commented May 13, 2019

@kvark Support SPIR-V Natively in WebGPU (for performance reasons)

Please expand on the reasons you have in mind. AFAIK, actual shader translation (from SPIRV to anything) is a drop in the water among the total time spent in pipeline creation.

I haven't seen the performance numbers for WHLSL vs. SPIRV yet; but, I'm operating on the assumption that parsing and compiling from HLSL/GLSL/etc. to SPIRV is a fairly expensive step (e.g. more than 1ms for a shader permutation), and one that apps should have the option of doing offline, in order to save power and load time.

Do you have numbers which show that the WHLSL -> SPIRV step is inconsequential?

@grorg

Contributor

commented May 13, 2019

Even if WebGPU supported SPIR-V at the API level, it's not going to be "native". Some platforms will require the shader to be translated (Metal and DX12). For Vulkan, the shader will need to be validated in some form. While it looks like there will be a special subset of SPIR-V for the Web, the implementation won't be able to simply pass it on to the driver. So offline compilation will still require some online compilation.

This is the same with WHLSL. It will need to be compiled on the client. The difference is that WHLSL acts both as a compile target and a human-writable language.

@missmah

Author

commented May 13, 2019

@grorg I'm pretty unconvinced that SPIR-V validation has the same cost as WHLSL compilation (particularly since that still probably needs to undergo SPIR-V validation under the hood). If you have numbers that show WHLSL is practically free, I'd love to see those.

Even if performance were a non-issue, further fragmenting the shading language landscape seems (to myself and many game developers I know) like a thing to be avoided.

Moreover, WHLSL would be yet another source of divergence between Native and Web -- which I also strongly believe is something to be avoided (wherever there's not a very strong reason for it).

@grorg

Contributor

commented May 13, 2019

@missmah WHLSL compilation won't need to go through SPIR-V validation because the language is restricted to only allow Web-safe features. So the comparison is between a WHLSL compiler and a SPIR-V validator. But note also that platforms other than Vulkan won't ingest SPIR-V directly, so it will have to be validated, translated, then recompiled. We'll have to do measurements to know what the costs are.

Anyway, I'm not really arguing either way - just pointing out that there isn't such a thing as "native" SPIR-V in this API.

FWIW, on the general topic, it seems like you're mostly asking for what Vulkan portability is supposed to provide.

@Kangz

Contributor

commented May 13, 2019

@missmah WHLSL compilation won't need to go through SPIR-V validation because the language is restricted to only allow Web-safe features

This is misleading, but let's just link to the previous flamewar: #42 #43 #44

FWIW, on the general topic, it seems like you're mostly asking for what Vulkan portability is supposed to provide.

Vulkan portability is really hard to use, and you have to adapt to different types of hardware in addition to different backing APIs. It also won't target the Web so I think WebGPU in native still has value.

@missmah

Author

commented May 13, 2019

@grorg I agree with @Kangz about Vulkan Portability - as an engine developer, I may use it, and some of the ideas I proposed here might equally apply there; but, I agree with @Kangz that WebGPU on Native still has value and advantages.

A few ways WebGPU in Native has advantages over Vulkan Portability:

  • WebGPU is probably a better API to write a lot of mundane graphics code in than Vulkan. Vulkan is great where you need to be fully explicit (the most performance-sensitive parts of your renderer).
    • But, I'd probably rather write an IMGui plugin for engine tools in WebGPU.
    • And for most app developers, WebGPU is probably a better API for getting things done and prototyping.
  • There would be great irony (and inefficiency) in writing Vulkan engine code that then gets translated by Vulkan Portability to WebGPU only to be translated back to Vulkan under the hood when I'm using Emscripten to target the Web.

@litherum

Contributor

commented May 13, 2019

Perhaps feedback on the shortcomings of the Vulkan Portability effort should be directed at that group.

@grorg

Contributor

commented May 13, 2019

Vulkan portability is really hard to use, and you have to adapt to different types of hardware in addition to different backing APIs.

So Vulkan portability isn't portable? (And hard to use!)

It also won't target the Web so I think WebGPU in native still has value.

Maybe I misunderstand the request then. Making a standard programming API that can be used in native apps and the Web? And the developer would be ok with all the performance hits and programming restrictions (e.g. no bindless) that the Web requires, even if they are developing for native?

If Dawn and wgpu are already going to share a C API, then is that enough?

The thing I'd like to avoid is WebGPU having to be constrained by native requirements.

@grorg

Contributor

commented May 13, 2019

@missmah WHLSL compilation won't need to go through SPIR-V validation because the language is restricted to only allow Web-safe features

This is misleading, but let's just link to the previous flamewar: #42 #43 #44

What I meant is that the WHLSL compiler will produce "safe" SPIR-V, in the same way it will produce "safe" MSL and HLSL.

@Kangz

Contributor

commented May 13, 2019

The thing I'd like to avoid is WebGPU having to be constrained by native requirements.

Indeed, see @kvark's comment or mine saying that this isn't the focus of this group, and @missmah's follow-up:

I mostly agree with both of you, that ultimately what we are talking about here is another working group/project, which would be more directly based upon https://github.com/gfx-rs/wgpu and dawn.

What's interesting for this group is that usage of the native version of WebGPU will help with adoption on the Web. Apps using WebGPU native implementations for the ease-of-use and portability will be able to be WASMed for the Web and take advantage of WebGPU directly.

@missmah

Author

commented May 13, 2019

It also won't target the Web so I think WebGPU in native still has value.

Maybe I misunderstand the request then. Making a standard programming API that can be used in native apps and the Web? And the developer would be ok with all the performance hits and programming restrictions (e.g. no bindless) that the Web requires, even if they are developing for native?

The idea was to add the ability to get the native device on Native (a small slice of the API that would be Native-only), so that you could fall back to native for the subset of codepaths where the performance hit mattered. But yes, you'd write your whole renderer in WebGPU (on the assumption that you were going to support the Web anyway), and you'd only rewrite small parts of it against alternative native "backing" APIs - the same way our current GL renderer has a multi-view path on desktop that doesn't exist on mobile/web, but 99% of the code is shared between all three.

If Dawn and wgpu are already going to share a C API, then is that enough?

Almost. Again, my proposal was basically the three things at the very top of my post, which might differ from where the standard is currently heading.

The thing I'd like to avoid is WebGPU having to be constrained by native requirements.

I'm proposing that we try to minimize the divergence between Native and Web, and sometimes that might mean a slightly different API than Web would have chosen in isolation. I can't really imagine a case where trying not to diverge too much from Native would negatively constrain WebGPU (of course, just because I can't imagine it doesn't mean it doesn't exist, and maybe sometimes the differences are large enough that the APIs really should diverge). I'm really advocating "thinking two or three times before choosing divergence".

@grorg

Contributor

commented May 13, 2019

What's interesting for this group is that usage of the native version of WebGPU will help with adoption on the Web. Apps using WebGPU native implementations for the ease-of-use and portability will be able to be WASMed for the Web and take advantage of WebGPU directly.

This is a good point, but I think if you replace "native version of WebGPU" with "Dawn" or "wgpu" you get the same result.

To consider a truly native API we'd have to re-open the group's charter. At the moment we're Web first: JavaScript, and possibly Web Assembly.

@missmah

Author

commented May 13, 2019

What's interesting for this group is that usage of the native version of WebGPU will help with adoption on the Web. Apps using WebGPU native implementations for the ease-of-use and portability will be able to be WASMed for the Web and take advantage of WebGPU directly.

100% This. We wouldn't have a WebGL renderer today if not for the compatibility with Native GL/GLES.

I really believe in graphics on the web as a major platform of the future, and I think that having an easy pathway for Native renderers to achieve high performance on the Web will be a huge win for the Web.

To consider a truly native API we'd have to re-open the group's charter. At the moment we're Web first: JavaScript, and possibly Web Assembly.

Our long-term goal is to make our C/C++ renderer API (and whole engine) accessible via JavaScript so that apps coded entirely in web technologies can take advantage of what we've built. I think this is a use-case that is very important for this WebGPU CG to consider.

I caveat what I just said above with the fair point that a new CG should be created for the Native use-case. But I continue to believe that these two CGs should not be independent of each other, and should collaborate on how to have a minimally divergent set of APIs that supports both use-cases well.

Again, I know that the largest companies will always throw engineers at the problem, but WebGPU should be about promoting graphics innovation on the Web, and a lot of that will actually come from small companies and individuals, for whom these kinds of barriers increasingly matter.

@jacobbogers


commented May 13, 2019

Is WebGPU the same as the OpenGL ES 3.1 spec? https://www.khronos.org/registry/OpenGL/specs/es/3.1/es_spec_3.1.pdf
Can someone answer whether this is that spec or something TOTALLY new, because ES 3.1 also has compute shaders.

@kainino0x

Contributor

commented May 13, 2019

@jacobbogers your question is off-topic for the current discussion; please post elsewhere if you have further questions. WebGPU is a new API, not at all based on OpenGL. It is loosely based on Vulkan, Metal, and D3D12.

@jacobbogers


commented May 13, 2019

Ok, thanks @kainino0x. I know Intel is working on WebGL 2.0 compute shaders (canvas.getContext('webgl2-compute')) based on the spec above (in Chrome Canary behind a flag). I didn't know whether it was the same thing or not, thanks for clarifying.

@jacobbogers


commented May 13, 2019

PS: where can I get my hands dirty with this WebGPU API?

@kainino0x

Contributor

commented May 14, 2019

See https://webgpu.io.

@noisiak


commented May 15, 2019

I believe this is a good turning point toward a new web, one that is ready for 3D and complex graphics processing.

Babylon.js is already on the move (it would be great if Three.js did it too). Also, all browser vendors are already working on adopting WebGPU (Chrome, Firefox, Safari, Edge).

So, this is indeed a serious move. And it is a good one.

Check this:
https://medium.com/@babylonjs/webgpu-is-coming-to-babylon-js-c44f8065ac05
