Use Texture Arrays directly with WebGL 2 #8
Ping @Oletus, who surely has some thoughts on this topic.
Thank you for the writeup @toji. Here is my feedback.
I think what you have should be restrictive enough.
I can't think of any good reason. As you state, once you call texStorage3D, you can't change the dimensions of the object.
I am OK with this.
For the sake of keeping things simple, I think having spec language that makes texture level N map to view N should be sufficient. By the same token, the `getLevel` method could probably be dropped.
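For reference, a minimal sketch of the immutable-format allocation being discussed, assuming a WebGL 2 context `gl`; the 1024×1024 size and two-layer depth are placeholder values:

```js
// Allocate an immutable-format 2D texture array (one layer per view).
const arrayTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D_ARRAY, arrayTexture);
// After texStorage3D the size, format, and level count are fixed;
// only texSubImage3D-style uploads and parameter changes remain possible.
gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, 1024, 1024, 2);
// Only one mip level was allocated, so the min filter must not require mipmaps.
gl.texParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
```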
I like this, and the fact that we are keeping the opaque MV FB. I already have a PoC of WebGL 1 shaders-to-ESSL3 transpilation (by modifying ANGLE) that will allow the opaque MV FB to be used in WebGL 1.
Sorry about my really slow response! Overall I like the idea of supporting 2D-texture-array-based layers. I think they should be supported alongside opaque multiview framebuffers, which would provide WebGL 1.0 compatibility and an easier path to adding features like antialiasing.

I like trying to keep the `arrayTexture` in `XRWebGLArrayLayer` close to a texture array allocated with `texStorage3D` (the proper term is immutable-format texture). However, we need to specify what happens when the texture is sent for display. We want a zero-copy pipeline, so it should be possible to swap the underlying memory. This means that, from the point of view of the user, the texture contents should become invalidated or cleared when the implementation changes the underlying buffer. The following things can probably work with this type of texture as usual:
I don't think the user agent needs to be in charge of depth/stencil. I agree with Rafael that `getLevel` could be dropped.
Discussed this previously in #317. This feels better scoped to me, since not every layer will need viewports (or may not need different viewports per view.)
In the service of landing a PR to remove multiview from the `XRWebGLLayer`: by way of explanation, the commit has the new layer type providing both a `TEXTURE_2D_ARRAY` for the color attachment and, optionally, the depth/stencil attachments. The reason I've included depth is twofold:
(And stencil gets tossed on the pile simply because it's linked to depth.)
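As a hedged illustration of what user-agent-provided color and depth arrays could look like from the developer's side, assuming `colorArray` and `depthArray` are texture arrays allocated with `texStorage3D` (the names are illustrative, not from the proposal):

```js
// Attach one layer of each array to a framebuffer for per-eye rendering.
// Assumes colorArray uses gl.RGBA8 and depthArray uses gl.DEPTH24_STENCIL8.
const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
const layer = 0; // 0 = left eye, 1 = right eye in this sketch
gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, colorArray, 0, layer);
gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.DEPTH_STENCIL_ATTACHMENT, depthArray, 0, layer);
```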
Just reiterating here what I said on the last WebXR call. My concern is that removing the opaque FB approach may slow down multiview adoption. It is already a pretty "user-unfriendly" feature, meaning devs will be required to change their shaders to make it work. Requiring users to switch to WebGL 2 AND to manage texture arrays / framebuffers themselves is yet another obstacle to multiview adoption. Needless to say, most of the existing WebVR experiences are made using WebGL 1, including frameworks like three.js, aframe, react360, and so on. Maybe the WebGL community wants to speed up WebGL 2 adoption by limiting multiview to WebGL 2 only, right?

There are certain difficulties in adding multiview support to WebGL 1, the main one being that shaders must be ES 3.00 (I mean, native shaders). That shader model exists only in WebGL 2; however, I already have a PoC modification to the ANGLE shader transpiler that takes a WebGL 1 shader and transforms it to ES 3.00, so multiview could work with WebGL 1 shaders (with ViewID support added, of course).

But I also like the flexibility of the WebGL2Layer approach, tbh. Like Brandon said, the opaque FB approach is "all or nothing", while WebGL2Layer would allow adding multiview for certain parts of the rendering, and it is just closer to native GL code, so porting existing GL + MV experiences to WebGL 2 / WebXR would be much easier. I don't have a solution to this, but I feel like I'd love to have both ways, the opaque FB and the WebGLLayer, but that is probably too much to ask.

I also forgot to respond to Brandon's question of whether multiview support should or should not be in V1 of the spec: I feel like it must be there, otherwise multiview will never be used widely. If we don't want to spend too much time on it for V1, then the opaque FB approach (even limited to WebGL 2 only) sounds like a good enough compromise for V1, while the WebGL2Layer way could be added in V2. Just my 2c.
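To make concrete the kind of shader and framebuffer changes being discussed, here is a hedged sketch assuming a WebGL 2 context `gl`, an immutable texture array `colorArray`, and a browser exposing OVR_multiview2; the uniform and attribute names are placeholders:

```js
// Multiview vertex shaders must be ESSL 3.00 and pick per-view data via
// gl_ViewID_OVR; this is the shader change existing WebGL 1 content lacks.
const multiviewVS = `#version 300 es
#extension GL_OVR_multiview2 : require
layout(num_views = 2) in;
in vec4 position;
uniform mat4 viewProj[2]; // one view-projection matrix per eye
void main() {
  gl_Position = viewProj[gl_ViewID_OVR] * position;
}`;

// Attach both layers of the texture array in one call so a single draw
// renders to both eyes (assuming the extension is available).
const mv = gl.getExtension('OVR_multiview2');
const fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fb);
mv.framebufferTextureMultiviewOVR(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                                  colorArray, 0, /*baseViewIndex*/ 0, /*numViews*/ 2);
```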
Another "pro" for the opaque FB approach: by providing the opaque FB we can add various optimizations "under the hood": the proper framebuffer will be created with the proper textures of the proper format. We can even provide texture wrappers for the VR swapchain to render DIRECTLY into it. That alone would save a copy or two, meaning 0.5 - 1.5 ms (on mobile) of GPU time! This is hard to achieve with the WebGL2Layer approach, where the user allocates textures/framebuffers herself.
Thanks for the recap, Artem! I think that captures the state of things pretty clearly. I'm not against having multiview in V1 of the spec, but I would advocate for it being exposed via the proposed `XRWebGLArrayLayer`.

Also, I wanted to point out that in my branch the API I proposed does have the UA explicitly allocating the textures for you, rather than them being user supplied, because I completely agree that we want to cut down on copies where possible, and in some cases that's only feasible using surfaces allocated by the native API.
The biggest issue I see with this change is that texture array multisampling is not supported in core WebGL 2.0 (ES 3.0). To get this functionality, we need to add new extensions to our implementations. At the WebGL F2F, we talked about several potential extensions to fix this problem, but in talking with the ANGLE team, we both agree that OES_texture_storage_multisample_2d_array would be the fastest to implement and the most impactful. The mobile-specific extensions would be expensive to emulate on desktop systems and have somewhat messy dependencies on each other. To date, no one has signed up to implement any of the extensions above. While I do like the convenience of the WebGL 2.0 XR layer, giving up multisampling is a tough pill to swallow, especially since it is a regression from the current opaque framebuffer proposal, where you can have both multisampling and multiview together. Until we have a more concrete plan to implement the extensions above, I'm inclined to keep multiview in the opaque framebuffer.
@Artyom17, I would love to extend XRWebGLArrayLayer to also allow the user agent to provide depth textures as well. This would help improve LSR on some platforms. I am fine with limiting multiview to WebGL 2 only, even if we decide to go with opaque framebuffers. Maintaining special code for converting WebGL 1 shaders to ES 3 seems like the wrong thing to do, especially since we expect most VR/AR-capable hardware to support WebGL 2.
Hey @RafaelCintron, any updates? BTW, if we are talking about changes in WebGL, why wouldn't we also define a way to render into a texture2d with implicit multisampling, the analog of EXT_multisampled_render_to_texture?
At the last WebGL F2F, the group decided to:
With the above, we should theoretically have everything we need to implement a WebGL 2 array layer in WebXR.
Great to hear that, @RafaelCintron, really great news! Any ETA on when these extensions will at least reach draft status? Also, if there is going to be OVR_multiview_multisampled_render_to_texture, then it is super inconsistent that there is no EXT_multisampled_render_to_texture extension in WebGL. Any plans to introduce it as well? PLEASE? That would be super helpful for non-multiview layers, and in general it is much more convenient to use than 'renderbufferStorageMultisample + blit'.
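For contrast, the 'renderbufferStorageMultisample + blit' pattern referenced above looks roughly like this in core WebGL 2; `width`, `height`, and the resolve target `resolveFbo` are placeholders. It works today without any extension, but costs an explicit resolve blit every frame:

```js
// Render into a multisampled renderbuffer, then resolve with a blit.
const msaaFbo = gl.createFramebuffer();
const colorRb = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, colorRb);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, 4, gl.RGBA8, width, height);
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFbo);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, colorRb);

// ...draw the scene into msaaFbo, then resolve into the single-sampled target:
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFbo);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFbo);
gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height,
                   gl.COLOR_BUFFER_BIT, gl.NEAREST);
```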
@Artyom17, I'll see what I can do about adding EXT_multisampled_render_to_texture. Support for the "render_to_texture" variety of extensions is predicated on being able to effectively emulate them on desktop, where implicit resolve is not available. We do not want developers to write two codepaths. By the way, the spec and test portion of OVR_multiview2 has landed in draft form.
You are my hero, @RafaelCintron!!! Thanks a lot! Now we just need OVR_multiview_multisampled_render_to_texture and EXT_multisampled_render_to_texture (the latter one can be written against WebGL 1, btw) to be completely happy! ;)
Update on progress: We (Microsoft) are working on implementing EXT_multisampled_render_to_texture in ANGLE and will use that as a foundation for implementing OVR_multiview_multisampled_render_to_texture in ANGLE. Once that is complete, we will look at exposing the OVR extension through WebGL.
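Assuming the eventual WebGL binding mirrors the GLES entry point (an assumption, since the extension had not yet been exposed in WebGL when this thread was written), the implicit-resolve path might look like the following sketch; `fbo` and `colorTex` are placeholder names for an existing framebuffer and 2D texture:

```js
const ext = gl.getExtension('EXT_multisampled_render_to_texture');
if (ext) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  // Rendering is multisampled on-chip and resolved automatically into colorTex,
  // with no separate renderbuffer or blitFramebuffer call.
  ext.framebufferTexture2DMultisampleEXT(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                                         gl.TEXTURE_2D, colorTex, 0, /*samples*/ 4);
}
```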
Great to hear. Oculus Browser 6.2.x will already support OVR_multiview2 enabled by default (shipping ETA this week). Can't wait to have both EXT_multisampled_render_to_texture and OVR_multiview_multisampled_render_to_texture implemented in WebGL. The only issue is that those extensions (especially the multiview one) are completely intolerant of mid-frame flushes, which may come, for example, from texImage2D (at least the way it is implemented in Chrome: it switches framebuffers).
Closing, since this is resolved with the latest proposal in the WebXR and WebGL groups.
This idea originated with an email that @RafaelCintron sent me a couple of days ago, which was then discussed on the last WebXR call. In short, he proposed having an XR layer type that was WebGL 2.0-only and provided texture arrays rather than the opaque framebuffer of our current `XRWebGLLayer`. (`XRWebGLLayer` would stick around for WebGL 1.0 compatibility.) Then the WebGL multiview extensions could use the texture array directly, which would more directly mirror the underlying native extensions like OVR_multiview.

I'm in favor of this approach, as it solves a couple of problems. Primary among them is that it would give developers the ability to mix geometry that is rendered via multiview techniques with geometry that is rendered one eye at a time. I believe this is necessary because otherwise any single technique that a library uses that has not been converted to be multiview compatible would disqualify an app from using multiview at all (this is likely to be common, as the web ecosystem delights in mashing code from a variety of sources together).
One point of misunderstanding in our initial discussion: I had assumed that this proposal was meant to wholly replace the previously planned method of multiview compatibility which was via the opaque framebuffer and could potentially support WebGL 1.0. I'm personally OK with the tradeoff of saying that multiview is a WebGL 2.0-only feature (this is the state of native OpenGL ES anyway.) @RafaelCintron and others apparently weren't ready to go that far and appear to be in favor of supporting both the Texture Array method and the opaque framebuffer method. That would lead to broader compatibility at the expense of complexity, both in WebXR and WebGL. (And again, the opaque framebuffer route is an all-or-nothing affair as currently specced.) In any case, I would be very interested in hearing from a variety of sources how big of a hurdle a WebGL 2.0 requirement for this feature would represent.
To kick off the discussion, here's a quick stab at the IDL to support this feature, all subject to copious bikeshedding. This variant omits multiview on the `XRWebGLLayer`, but that's ultimately dependent on the group's feelings on the above points.

A few things to point out in relation to the IDL:

- We probably want the `arrayTexture` to be immutable, the same way the `framebuffer` is in `XRWebGLLayer`. I think we can get this by saying that it should be treated as if it were created with `texStorage3D`, which will naturally restrict the dimensions and format but not the texture parameters. Not sure if that's restrictive enough. If it is, though, it's much better than inventing Yet Another Opaque Type.
- This differs from `XRWebGLLayer` because it makes the developer responsible for the framebuffer and all of its attachments. That's generally a good thing for flexibility, but I'm curious if there's a good reason why we may still want user agent control of the depth or stencil buffers.
- I've moved `getViewport` off of `XRView` and onto the individual layer types for a couple of reasons: they're interpreted differently for each layer (all levels of the array share a single viewport, so you don't need to pass the view), and there may be other values that need to be queried only for that layer type (like the array texture level). To me it makes sense to group these queries on the layers themselves for flexibility, and it would help simplify some of the spec language to boot!
- I've added `getLevel` to the `XRWebGLArrayLayer`, but it's not clear to me if that's useful or if we just want to say "texture level N is always associated with view N".

Let's hear everyone's thoughts on this!
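For readers trying to picture the shape of the proposal, here is a purely hypothetical usage sketch. The `XRWebGLArrayLayer` API shown never shipped in this form; every name outside core WebGL 2 (`XRWebGLArrayLayer`, `arrayTexture`, `getViewport`, `xrSession`, `xrRefSpace`, `fbo`) is either taken from the discussion above or assumed for illustration:

```js
// Hypothetical: render each view into its own layer of layer.arrayTexture,
// without multiview, using a plain WebGL 2 context `gl`.
const layer = new XRWebGLArrayLayer(xrSession, gl); // constructor shape is assumed
xrSession.updateRenderState({ baseLayer: layer });  // modern API shape, shown for context

function onXRFrame(time, frame) {
  frame.session.requestAnimationFrame(onXRFrame);
  const pose = frame.getViewerPose(xrRefSpace);
  if (!pose) return;

  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  const vp = layer.getViewport(); // per the proposal, all layers share one viewport
  gl.viewport(vp.x, vp.y, vp.width, vp.height);

  pose.views.forEach((view, i) => {
    // "Texture level N is always associated with view N."
    gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                               layer.arrayTexture, 0, i);
    // ...draw the scene for this view into array layer i...
  });
}
```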