
Use Texture Arrays directly with WebGL 2 #8

Closed
toji opened this issue Jan 10, 2018 · 18 comments

Comments

@toji
Member

toji commented Jan 10, 2018

This idea originated with an email that @RafaelCintron sent me a couple of days ago, which was then discussed on the last WebXR call. In short, he proposed having an XR layer type that was WebGL 2.0-only and provided texture arrays rather than the opaque framebuffer of our current XRWebGLLayer. (XRWebGLLayer would stick around for WebGL 1.0 compatibility.) Then the WebGL multiview extensions could use the texture array directly, which would more closely mirror the underlying native extensions like OVR_multiview.

I'm in favor of this approach, as it solves a couple of problems. Primary among them is that it would give developers the ability to mix geometry that is rendered via multiview techniques with geometry that is rendered one eye at a time. I believe that this is necessary because otherwise any single technique that a library uses that has not been converted to be multiview compatible would disqualify an app from using multiview at all (this is likely to be common, as the web ecosystem delights in mashing code from a variety of sources together.)

Note: Even if we introduce a new layer type the plan would still be to only allow one layer at a time to be presented in the initial version of WebXR.

One point of misunderstanding in our initial discussion: I had assumed that this proposal was meant to wholly replace the previously planned method of multiview compatibility which was via the opaque framebuffer and could potentially support WebGL 1.0. I'm personally OK with the tradeoff of saying that multiview is a WebGL 2.0-only feature (this is the state of native OpenGL ES anyway.) @RafaelCintron and others apparently weren't ready to go that far and appear to be in favor of supporting both the Texture Array method and the opaque framebuffer method. That would lead to broader compatibility at the expense of complexity, both in WebXR and WebGL. (And again, the opaque framebuffer route is an all-or-nothing affair as currently specced.) In any case, I would be very interested in hearing from a variety of sources how big of a hurdle a WebGL 2.0 requirement for this feature would represent.

To kick off the discussion, here's a quick stab at the IDL to support this feature, all subject to copious bikeshedding. This variant omits multiview on the XRWebGLLayer, but that's ultimately dependent on the group's feelings on the above points.

// I think it's important (and easy) to retain WebGL 2.0 compatibility on the "basic" layer type.
typedef (WebGLRenderingContext or
         WebGL2RenderingContext) XRWebGLRenderingContext;

dictionary XRWebGLLayerInit {
  boolean antialias = true;
  boolean depth = true;
  boolean stencil = false;
  boolean alpha = true;
  boolean multiview = false;
  double framebufferScaleFactor;
};

[SecureContext, Exposed=Window,
 Constructor(XRSession session,
             XRWebGLRenderingContext context,
             optional XRWebGLLayerInit layerInit)]
interface XRWebGLLayer : XRLayer {
  readonly attribute XRWebGLRenderingContext context;
  readonly attribute boolean antialias;
  readonly attribute boolean depth;
  readonly attribute boolean stencil;
  readonly attribute boolean alpha;

  readonly attribute unsigned long framebufferWidth;
  readonly attribute unsigned long framebufferHeight;
  readonly attribute WebGLFramebuffer framebuffer;

  void requestViewportScaling(double viewportScaleFactor);
  XRViewport getViewport(XRView view);
};

dictionary XRWebGLArrayLayerInit {
  boolean alpha = true;
  double arrayTextureScaleFactor; // Same as the framebufferScaleFactor
};

[SecureContext, Exposed=Window,
 Constructor(XRSession session,
             WebGL2RenderingContext context,
             optional XRWebGLArrayLayerInit layerInit)]
interface XRWebGLArrayLayer : XRLayer {
  readonly attribute WebGL2RenderingContext context;
  readonly attribute boolean alpha;

  readonly attribute unsigned long arrayTextureWidth;
  readonly attribute unsigned long arrayTextureHeight;
  readonly attribute unsigned long arrayTextureDepth;
  readonly attribute WebGLTexture arrayTexture;

  void requestViewportScaling(double viewportScaleFactor);
  XRViewport getViewport();
  unsigned long getLevel(XRView view);
};

A few things to point out in relation to the IDL:

  • We would want the arrayTexture to be immutable, the same way the framebuffer is in XRWebGLLayer. I think we can get this by saying that it should be treated as if it were created with texStorage3D, which will naturally restrict the dimensions and format but not the texture parameters. Not sure if that's restrictive enough. If it is, though, it's much better than inventing Yet Another Opaque Type.
  • As specced above this would be somewhat harder to use than the XRWebGLLayer because it makes the developer responsible for the framebuffer and all of its attachments (see the sketch after this list). That's generally a good thing for flexibility, but I'm curious if there's a good reason why we may still want user agent control of the depth or stencil buffers?
  • In the IDL above I've moved getViewport off of XRView and onto the individual layer types for a couple of reasons: they're interpreted differently for each layer (all levels of the array share a single viewport, so you don't need to pass the view) and there may be other values that need to be queried only for that layer type (like the array texture level). To me it makes sense to group these queries on the layers themselves for flexibility, and it would help simplify some of the spec language to boot!
  • I did add getLevel to the XRWebGLArrayLayer but it's not clear to me if that's useful or if we just want to say "texture level N is always associated with view N".
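
To make the shape of this more concrete, here's a minimal usage sketch against the proposed IDL. The XRWebGLArrayLayer names come from the IDL above and the session/views plumbing is hypothetical at this point; framebufferTextureLayer and texStorage3D are standard WebGL 2.0 calls.

const gl = canvas.getContext('webgl2');
const layer = new XRWebGLArrayLayer(session, gl);

// Developer-allocated depth, since this variant leaves attachments to the app:
// an immutable-format depth array matching the layer's dimensions.
const depthTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D_ARRAY, depthTex);
gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.DEPTH_COMPONENT24,
                layer.arrayTextureWidth, layer.arrayTextureHeight,
                layer.arrayTextureDepth);

const fb = gl.createFramebuffer();

function drawXRViews(views) {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  const vp = layer.getViewport();
  gl.viewport(vp.x, vp.y, vp.width, vp.height);
  views.forEach((view, i) => {
    // Per the last bullet, texture level N is associated with view N.
    gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                               layer.arrayTexture, 0, i);
    gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT,
                               depthTex, 0, i);
    drawScene(view); // App-provided; renders with this view's matrices.
  });
}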

Let's hear everyone's thoughts on this!

@toji
Member Author

toji commented Jan 10, 2018

Ping @Oletus, who surely has some thoughts on this topic.

@RafaelCintron

Thank you for the writeup @toji. Here is my feedback.

I think we can get this by saying that it should be treated as if it were created with texStorage3D, which will naturally restrict the dimensions and format but not the texture parameters. Not sure if that's restrictive enough.

I think what you have should be restrictive enough.

I'm curious if there's a good reason why we may still want user agent control of the depth or stencil buffers?

I can't think of any good reason. As you state, once you call texStorage3D, you can't change the dimensions of the object.

In the IDL above I've moved getViewport off of XRView and onto the individual layer types for a couple of reasons

I am OK with this. requestViewportScaling is already a function on the layer object so it makes sense that you also get the viewport from the layer object.

I did add getLevel to the XRWebGLArrayLayer but it's not clear to me if that's useful or if we just want to say "texture level N is always associated with view N".

For the sake of keeping things simple, I think having spec language that makes texture level N map to view N should be sufficient. By the same token, the arrayTextureDepth attribute also seems redundant.

@Artyom17
Contributor

I like this, and the fact that we are keeping the opaque MV FB. I already have a PoC of WebGL 1 shaders-to-ESSL3 transpilation (by modifying ANGLE) that will allow the opaque MV FB to be used in WebGL 1.

@Oletus

Oletus commented Jan 31, 2018

Sorry about my really slow response! Overall I like the idea of supporting 2D texture array based layers. I think they should be supported alongside opaque multiview framebuffers, which would provide WebGL 1.0 compatibility and an easier path to add features like antialiasing.

I like trying to keep the arrayTexture in XRWebGLArrayLayer close to a texture array allocated with texStorage3D. The proper term is immutable-format texture. However, we need to specify what happens when the texture is sent for display. We want a zero-copy pipeline, so it should be possible to swap the underlying memory. This means that, from the point of view of the user, the texture contents should become invalidated or cleared when the implementation changes the underlying buffer.

The following things can probably work with this type of texture as usual (see the sketch after this list):

  • texture parameters (though the implementation may need to change these behind the scenes when the texture is sent for display)
  • generateMipmap
  • using the texture as a framebuffer attachment
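
For reference, a minimal sketch of the immutable-format behavior being described, using only standard WebGL 2.0 calls (the sizes and format are placeholders):

const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D_ARRAY, tex);
gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, 1024, 1024, 2);
// The dimensions and format are now fixed: a later texStorage3D or texImage3D
// on this texture generates INVALID_OPERATION. Uploading with texSubImage3D,
// changing texture parameters, and attaching layers to a framebuffer with
// framebufferTextureLayer all still work as usual.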

I don't think the user agent needs to be in charge of depth/stencil.

I agree with Rafael that getLevel could be dropped.

toji referenced this issue in immersive-web/webxr Mar 6, 2018 (and again Mar 7 and Mar 20, 2018)
Discussed this previously in #317. This feels better scoped to me, since not every layer will need viewports (or may not need different viewports per view.)
@toji
Member Author

toji commented May 26, 2018

In the service of landing a PR to remove multiview from the XRWebGLLayer, I put together a branch that speculatively adds this concept to the explainer. I'm not sure if this is something that belongs in WebXR v1 (I'm leaning towards no?) but I think it's good to feel it out and make sure that we'd be happy with this direction.

By way of explanation, the commit has the new layer type providing a TEXTURE_2D_ARRAY for the color attachment and, optionally, the depth/stencil attachments. The reason why I've included depth is twofold:

  1. While it's likely to be rare, the number of views may change from frame to frame. As such it would be error-prone to ask apps to track when that changes and allocate or discard depth textures in response (see the sketch below). Many would probably just allocate enough to cover the first frame's view count and then never check again. Having the UA manage that removes the opportunity for error and the need for constant monitoring of the view count.

  2. Some VR compositors (Oculus' desktop service, specifically) can use the depth information to aid in reprojection or UI rendering. Having the layer manage depth buffer allocation allows it to use those features silently when appropriate.

(And stencil gets tossed on the pile simply because it's linked to depth.)
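
To illustrate point 1, here's a sketch of the bookkeeping a UA-managed depth attachment would spare apps from. It assumes the app learns the view count each frame from the XR frame data; since immutable-format textures can't be resized, the depth array has to be deleted and recreated whenever that count changes.

let depthTex = null;
let depthLayers = 0;

function ensureDepthArray(gl, width, height, viewCount) {
  if (depthTex && depthLayers === viewCount) { return depthTex; }
  if (depthTex) { gl.deleteTexture(depthTex); }
  depthTex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, depthTex);
  gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.DEPTH24_STENCIL8,
                  width, height, viewCount);
  depthLayers = viewCount;
  return depthTex;
}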

@Artyom17
Contributor

Just re-iterating here what I said on the last WebXR call. My concern is that removing the opaque FB approach may slow down multiview adoption. It is already a pretty "user-unfriendly" feature, meaning devs will be required to change their shaders to make it work. Requiring users to switch to WebGL 2 AND to manage texture arrays / framebuffers by themselves is another obstacle in the way of multiview adoption.

Needless to say, most of the existing WebVR experiences are made using WebGL 1, including frameworks like three.js, aframe, react360, and so on. Maybe the WebGL community wants to speed up WebGL 2 adoption by limiting multiview to WebGL 2 only, right?

There are certain difficulties in adding multiview support to WebGL 1, and the main one is that shaders must be ES 3.00 (native shaders, I mean). This shader model exists only in WebGL 2; however, I already have a PoC modification to the ANGLE shader translator that takes a WebGL 1 shader and transforms it to ES 3.00, so multiview could work with WebGL 1 shaders (with ViewID support added, of course).
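
For context, this is roughly what the ES 3.00 side of a multiview shader looks like; the view index and the num_views layout only exist at that shader level, which is why WebGL 1 shaders need translation. (Sketch only; it follows the OVR_multiview2 GLSL syntax, with placeholder uniform names.)

const multiviewVertexShaderSource = `#version 300 es
#extension GL_OVR_multiview2 : require
layout(num_views = 2) in;
uniform mat4 leftViewProjection;
uniform mat4 rightViewProjection;
in vec4 position;
void main() {
  // gl_ViewID_OVR picks the per-eye matrix within a single draw call.
  mat4 viewProj = (gl_ViewID_OVR == 0u) ? leftViewProjection : rightViewProjection;
  gl_Position = viewProj * position;
}`;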

But I also like the flexibility of the WebGL2Layer approach, tbh. Like Brandon said, the opaque FB approach is "all or nothing", while WebGL2Layer will allow adding multiview for certain parts of the rendering, and it is just closer to native GL code, so porting existing GL + MV experiences to WebGL 2 / WebXR would be much easier.

I don't have a solution to this, but I feel like I'd love to have both ways: the opaque FB and the WebGLLayer. That is probably too much to ask, though.

I also forgot to respond to Brandon's question of whether multiview support should or should not be in V1 of the spec: I feel like it must be there, otherwise multiview will never be used widely. If we don't want to spend too much time on it for V1, then the opaque FB approach (even limited to WebGL 2 only) sounds like a good enough compromise for V1, while the WebGL2Layer way could be added in V2.

Just my 2c.

@Artyom17
Contributor

Another "pro" for opaque FB approach: by providing the opaque FB we can add various optimizations "under-the-hood": the proper framebuffer will be created with the proper textures of the proper format. We can even provide texture wrappers for the VR swapchain to render DIRECTLY into it. Only this will save a copy or two, meaning 0.5 - 1.5 ms (on mobile) of GPU time! Which is hard to achieve with the WebGL2Layer and when a user allocates textures/framebuffers herself.

@toji
Member Author

toji commented May 29, 2018

Thanks for the recap, Artem! I think that captures the state of things pretty clearly.

I'm not against having multiview in V1 of the spec, but I would advocate for it being exposed via the proposed XRWebGLArrayLayer or similar. I don't think implementing the new layer type is going to be too difficult, and in fact it may prove useful for illustrating the necessity of the layer system right away rather than having infrastructure in place that we wave at and say "This will be important eventually, promise!"

Also, I wanted to point out that in my branch the API I proposed does have the UA explicitly allocating the textures for you, rather than them being user supplied, because I completely agree that we want to cut down on copies where possible and in some cases that's only feasible using surfaces allocated by the native API.

@RafaelCintron

The biggest issue I see with this change is that texture array multisampling is not supported in core WebGL 2.0 (ES 3.0). To get this functionality, we need to add new extensions to our implementations.

At the WebGL F2F, we talked about several potential extensions to fix this problem. But, in talking with the ANGLE team, we both agree that OES_texture_storage_multisample_2d_array would be the fastest one to implement and the most impactful. The mobile-specific extensions would be expensive to emulate on desktop systems and have somewhat messy dependencies on each other. To date, no one has signed up to implement any of the extensions above.

While I do like the convenience of the WebGL 2.0 XR layer, giving up multisampling is a tough pill to swallow, especially since it is a regression from the current opaque framebuffer proposal where you can have both multisampling and multiview together.

Until we have a more concrete plan to implement the extensions above, I'm inclined to keep multiview in the opaque framebuffer.

@RafaelCintron

@Artyom17 , in the XRWebGLArrayLayer approach, the developer only allocates depth and stencil textures themselves. Color texture arrays are allocated by the user agent and given to the developer. So, unless I am missing something, you should be able to have developers render to the color textures directly.

I would love to extend XRWebGLArrayLayer to also allow the user agent to provide depth textures as well. This would help improve LSR on some platforms.

I am fine with limiting multi-view to WebGL 2-only, even if we decide to go with opaque framebuffers. Maintaining special code for converting WebGL 1 shaders to ES 3 seems like the wrong thing to do, especially since we expect most VR/AR capable hardware to support WebGL 2.

@Artyom17
Contributor

Hey @RafaelCintron, any updates? BTW, if we are talking about changes in WebGL, why wouldn't we also define a way to render into a texture2d with implicit multisampling, the analog of the GL_EXT_multisampled_render_to_texture GL extension? Currently rendering into a texture (the non-multiview way) with AA is also tricky: the only way is to use renderbufferStorageMultisample + blit (correct me if I am wrong), which is less efficient than implicit multisampling with a WEBGL_texture_multisample extension. I'd also love to use implicit multisampled rendering into a texture2d for layers (like rendering content for a Quad layer without extra texture copies).
I can see the proposed WEBGL_texture_multisample extension, but it only proposes texImage2DMultisample, while I need a framebufferTexture2DMultisample method.
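
For reference, this is the explicit resolve path being referred to, in standard WebGL 2.0 (width, height, and the resolve framebuffer with its texture attachment are assumed to exist):

const msaaFB = gl.createFramebuffer();
const msaaColor = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, msaaColor);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, 4, gl.RGBA8, width, height);
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFB);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                           gl.RENDERBUFFER, msaaColor);

// ... draw the scene into msaaFB ...

// Resolve the samples into resolveFB, whose color attachment is a texture.
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFB);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFB);
gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height,
                   gl.COLOR_BUFFER_BIT, gl.NEAREST);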

@RafaelCintron

At the last WebGL F2F, the group decided to:

  • Rename WebGL_multiview to OVR_multiview2 and remove all references to opaque framebuffers. Since OVR_multiview2 tracks ES 3.0, this means the corresponding WebGL extension will only be available to WebGL 2 developers.
  • Add OVR_multiview_multisampled_render_to_texture as a WebGL extension. This extension would need to be emulated on platforms without implicit resolve, such as Direct3D 11.

With the above, we should theoretically have everything we need to implement a WebGL 2 array layer in WebXR.
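
As a sketch of what that looks like from a WebGL 2 app's perspective (extension and function names follow the draft OVR_multiview2 WebGL extension; sizes are placeholders):

const ext = gl.getExtension('OVR_multiview2');
if (ext) {
  const colorTex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, colorTex);
  gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.RGBA8, 1024, 1024, 2);

  const fb = gl.createFramebuffer();
  gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, fb);
  // Bind two consecutive layers of the array as the multiview render targets.
  ext.framebufferTextureMultiviewOVR(gl.DRAW_FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                                     colorTex, 0, 0, 2);
  // A single draw call now renders both views, selected via gl_ViewID_OVR.
}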

@Artyom17
Contributor

Great to hear that, @RafaelCintron, really great news! Any ETAs on when these extensions will at least reach draft status?

Also, if there is going to be OVR_multiview_multisampled_render_to_texture, then it is super inconsistent that there is no EXT_multisampled_render_to_texture extension in WebGL. Any plans to introduce it as well? PLEASE? That would be super helpful for non-multiview layers, and in general it is much more convenient to use than renderbufferStorageMultisample + blit.

@RafaelCintron

@Artyom17 , I'll see what I can do about adding EXT_multisampled_render_to_texture. Support for the "render_to_texture" variety of extensions is predicated on being able to effectively emulate them on desktop, where implicit resolve is not available. We do not want developers to write two codepaths.

By the way, the spec and test portion of OVR_multiview2 has landed in draft form.

@Artyom17
Contributor

Artyom17 commented Mar 22, 2019

You are my hero @RafaelCintron !!! Thanks a lot! Now we just need OVR_multiview_multisampled_render_to_texture and EXT_multisampled_render_to_texture (the latter one can be written against WebGL 1, btw) to be completely happy! ;)

@RafaelCintron

Update on progress:
The OVR_multiview2 WebGL extension has been community approved. The Chromium implementation on Windows has been turned on by default. Firefox implements it on Windows and Android, also on by default.

We (Microsoft) are working on implementing EXT_multisampled_render_to_texture in ANGLE and will use that as a foundation for implementing OVR_multiview_multisampled_render_to_texture in ANGLE. Once that is complete, we will look at exposing the OVR extension through WebGL.

@Artyom17
Contributor

Great to hear. Oculus Browser 6.2.x will already support OVR_multiview2 enabled by default (shipping ETA this week). Can't wait to have both EXT_multisampled_render_to_texture and OVR_multiview_multisampled_render_to_texture implemented in WebGL. The only issue is that those (especially the multiview one) are completely intolerant of mid-frame flushes, which may come, for example, from texImage2D (at least the way it is implemented in Chrome: it switches framebuffers).

@cabanier
Member

Closing since this is resolved with the latest proposals in the WebXR and WebGL groups.

cabanier added a commit that referenced this issue Apr 14, 2020