
Add support for texture arrays #42

Merged: 3 commits into immersive-web:main from texture-array, May 28, 2024
Conversation

@cabanier (Member) commented Oct 6, 2023

@cabanier requested review from toji and bialpio on October 6, 2023 04:56
@cabanier (Member Author) commented Oct 6, 2023:

@bialpio what do you think?

index.bs Outdated (resolved)
@cabanier requested a review from bialpio on October 6, 2023 21:13
index.bs Outdated
};
</script>

The {{XRWebGLDepthInformation/texture}} attribute contains depth buffer information as an [=opaque texture=]. Each texel corresponds to the distance from the [=XRDepthInformation/view=]'s near plane to the user's environment, in unspecified units. The size and type of each data entry are determined by {{XRSession/depthDataFormat}}. The values can be converted from unspecified units to meters by multiplying them by {{XRDepthInformation/rawValueToMeters}}. The {{XRDepthInformation/normDepthBufferFromNormView}} can be used to transform from [=normalized view coordinates=] into the depth buffer's coordinate system. When accessed, the algorithm to <a lt="attempt to access the depth buffer">access the depth buffer</a> of {{XRDepthInformation}} MUST be run.

The {{XRWebGLDepthInformation/textureType}} attribute describes whether the texture is of type {{TEXTURE_2D}} or {{TEXTURE_2D_ARRAY}}.

The {{XRWebGLDepthInformation/imageIndex}} attribute returns the offset into the texture array. It MUST be defined when textureType is {{TEXTURE_2D_ARRAY}} and MUST be undefined when it is {{TEXTURE_2D}}.
@bialpio (Contributor):

Nit: replace "textureType" with "{{XRWebGLDepthInformation/textureType}}"? Here we refer directly to another attribute of the XRWebGLDepthInformation.

@@ -479,6 +490,23 @@ In order to <dfn>create a WebGL depth information instance</dfn> given {{XRFrame
1. Initialize |result|'s {{XRDepthInformation/normDepthBufferFromNormView}} to a new {{XRRigidTransform}}, based on |nativeDepthInformation|'s [=depth coordinates transformation matrix=].
1. Initialize |result|'s {{XRWebGLDepthInformation/texture}} to an [=opaque texture=] containing the depth buffer returned in |nativeDepthInformation|.
1. Initialize |result|'s [=XRDepthInformation/view=] to |view|.
1. Initialize |result|'s {{XRWebGLDepthInformation/textureType}} as follows:
<dl class="switch">
<dt> If |result|'s {{XRWebGLDepthInformation/texture}} was created with a textureType of [=XRTextureType/texture-array=]:
@bialpio (Contributor):

Nit: replace "textureType" with "texture type"? Here, we refer to some abstract type of a texture that we assume is known by the implementation.

@bialpio (Contributor) left a comment:

LGTM with nitpicks. I'll give @toji a chance to also take a look before merging.

<dd> Initialize |result|'s {{XRWebGLDepthInformation/textureType}} to "[=XRTextureType/texture-array=]".
<dt> Otherwise
<dd> Initialize |result|'s {{XRWebGLDepthInformation/textureType}} to "[=XRTextureType/texture=]".
</dl>
1. Initialize |result|'s {{XRWebGLDepthInformation/imageIndex}} as follows:
@toji (Member):

This feels like it's overly specific? I certainly expect that it'll work for the majority of head-worn use cases, but I do wonder if there needs to be room left for more unique optics like the Varjo or Pimax headsets. I have no idea if they would choose to use separate depth maps for each view or not, assuming they had the ability to generate them in the first place, but it doesn't seem too far-fetched.

@cabanier (Member Author):

This is also how it's done in WebXR Layers :-)

@toji (Member):

Eww. 😅

That strikes me as a bug? But using common language is preferable IMO, so I'm OK with this if you could also file an issue against layers to re-evaluate that algorithm and make sure to indicate that this section should be updated to match any eventual changes in that spec's language.

@cabanier (Member Author):

Will do. Maybe we can add an algorithm to the layers spec that assigns the indices and refer to it from here.

index.bs
The {{XRWebGLDepthInformation/textureType}} attribute describes whether the texture is of type {{TEXTURE_2D}} or {{TEXTURE_2D_ARRAY}}.
@toji (Member):

I'd like there to be some more concrete information about how the texture type is selected. The text in this change makes it sound like the OS chooses for you and you have no say in the matter, but that's not how we handle texture type selection anywhere else. For example, we allow developers to specify the texture type for XRLayers.

Feels like this would ideally be linked with the developer's use of projection layers, or specified with the XRDepthStateInit (which is at session creation time, so possibly too restrictive?). I'd like developers to never be surprised by the texture type they receive.

I recognize that in some cases that might mean that an internal copy would have to happen, and that would be unfortunate, but it might also make the difference between a mobile-tested experience "just working" on a headset and the entire app failing to render because of a texture type mismatch.

@cabanier (Member Author):

It would be unfortunate if we force implementers to do a slow copy behind the scenes to get a side-by-side texture. Maybe it's better to drop the regular texture and always return a texture-array if there's a chance of developer confusion.

@toji (Member):

I'm concerned that would break any existing mobile AR apps that use it? But I guess that depends on WebGL's behavior around texture binding. I need to refresh my memory as to how that works, but don't you have to bind to the GL_TEXTURE_2D_ARRAY target to interact with the texture, and use the sampler2DArray type in the shader to sample from it? I don't think you could treat a texture array with a single layer as a GL_TEXTURE_2D and have it work seamlessly.

@cabanier (Member Author):

> I'm concerned that would break any existing mobile AR apps that use it?

AFAIK we're the first ones to offer GPU acceleration. Android uses the software path.

> But I guess that depends on WebGL's behavior around texture binding. [...] I don't think you could treat a texture array with a single layer as a GL_TEXTURE_2D and have it work seamlessly.

Correct. However, a texture array with 1 element is still fine.
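To illustrate that single-layer point (a sketch, not from the PR): even a one-layer array is declared as a sampler2DArray and sampled with an explicit layer index. A single-channel depth format such as "float32" is assumed here; the uniform and varying names are illustrative.

```js
// Illustrative GLSL ES 3.00 fragment shader, carried as a JS string. A
// texture array with one layer is still sampled through sampler2DArray,
// with the layer index coming from imageIndex (0 for a single-layer array).
// "luminance-alpha" formats would need their own two-channel decoding.
const depthFragmentSource = `#version 300 es
precision highp float;
uniform highp sampler2DArray uDepthTexture;
uniform float uLayer;             // imageIndex (0.0 when there is one layer)
uniform float uRawValueToMeters;  // XRDepthInformation.rawValueToMeters
in vec2 vDepthUV;
out vec4 fragColor;

void main() {
  float raw = texture(uDepthTexture, vec3(vDepthUV, uLayer)).r;
  float depthMeters = raw * uRawValueToMeters;
  fragColor = vec4(vec3(depthMeters), 1.0);
}`;
```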

@toji (Member):

> AFAIK we're the first ones to offer GPU acceleration. Android uses the software path.

If that's the case it changes my opinion, and I'd be OK with it always being a texture array. @bialpio, is this accurate?

@RaananW (Contributor) commented Dec 18, 2023:

Any update on this PR?

@cabanier (Member Author):

> Any update on this PR?

@toji?

@toji (Member) commented Jan 2, 2024:

Pinging @bialpio on the question I asked previously: does the Android depth sensing use the GPU-accelerated path at all?

If not, I'm inclined to say that the GL_TEXTURE_2D path should be dropped prior to merging.

@bialpio (Contributor) commented Jan 2, 2024:

> Does the Android depth sensing use the GPU-accelerated path at all?

Ah, sorry, missed the in-review comment. No, we do not support the GPU-accelerated path for our ARCore-backed implementation on Android.

@cabanier (Member Author) commented Jan 2, 2024:

> No, we do not support the GPU-accelerated path for our ARCore-backed implementation on Android.

Could we update the sample so it works on Android as well? Maybe by uploading the depth to a texture?

@bialpio (Contributor) commented Jan 2, 2024:

> Could we update the sample so it works on Android as well? Maybe by uploading the depth to a texture?

Let me look into that; I think the public sample may not have the changes that happened in our internal, for-testing sample.
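For reference, a minimal sketch of the kind of fallback suggested above, assuming the CPU depth path (XRCPUDepthInformation) with the "luminance-alpha" data format; the function name is illustrative:

```js
// Illustrative only: upload CPU depth data (the Android software path) into
// a WebGL texture so a shader-based sample can still run. Assumes depthInfo
// is an XRCPUDepthInformation and depthDataFormat is "luminance-alpha"
// (two bytes per texel); "float32" would use R32F/RED/FLOAT instead.
function uploadCpuDepth(gl, depthInfo, texture) {
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(
      gl.TEXTURE_2D, 0, gl.LUMINANCE_ALPHA,
      depthInfo.width, depthInfo.height, 0,
      gl.LUMINANCE_ALPHA, gl.UNSIGNED_BYTE,
      new Uint8Array(depthInfo.data));
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
}
```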

@cabanier merged commit d703f27 into immersive-web:main on May 28, 2024. 1 check passed.
@cabanier deleted the texture-array branch on May 28, 2024 23:17.
github-actions bot added a commit referencing this pull request on May 28, 2024 (SHA: d703f27; reason: push, by cabanier).