
Spec: GL_RGB10_A2 format missing in texImageSource variants of tex(Sub)Image2D #2274

Closed
astojilj opened this issue Jan 25, 2017 · 26 comments

@astojilj (Contributor)

An entry:

  • Internal format: GL_RGB10_A2
  • format: GL_RGBA
  • type: GL_UNSIGNED_INT_2_10_10_10_REV

is missing from the table defining the supported internal format / format / type combinations to use with a TexImageSource source [1].

As it is a normalized fixed-point format [2] with higher precision, the following note in [1] is not applicable to it:

When the data source is a DOM element (HTMLImageElement, HTMLCanvasElement, or HTMLVideoElement), or is an ImageBitmap or ImageData object, commonly each channel's representation is an unsigned integer type of at least 8 bits. Converting such representation to signed integers or unsigned integers with more bits is not clearly defined. For example, when converting RGBA8 to RGBA16UI, it is unclear whether or not the intention is to scale up values to the full range of a 16-bit unsigned integer. Therefore, only converting to unsigned integer of at most 8 bits, half float, or float is allowed.

[1]
https://www.khronos.org/registry/webgl/specs/latest/2.0/#3.7.6
table under [throws] void texImage2D(GLenum target, GLint level, GLint internalformat, GLsizei width, GLsizei height, GLint border, GLenum format, GLenum type, TexImageSource source)

[2]
https://www.khronos.org/opengl/wiki/Image_Format#Special_color_formats

@zhenyao (Contributor) commented Jan 25, 2017

Why do you think the note [1] does not apply with GL_RGB10_A2?

The mapping from 8 bit to 10 bit is ambiguous, as stated in this note.

@astojilj (Contributor, Author)

It would be ambiguous for:
GL_RGB10_A2UI / GL_RGBA_INTEGER / GL_UNSIGNED_INT_2_10_10_10_REV / red bits: ui10
as it is an integer format. For the normalized fixed-point format:
GL_RGB10_A2 / GL_RGBA / GL_UNSIGNED_INT_2_10_10_10_REV / red bits: 10

it is not ambiguous.

For normalized fixed-point formats like R8, GL_RGB10_A2 or R16 (which is unfortunately not available in ES3, but I'm using it as an example here), the value 1.f corresponds to 255, 1023 and 65535, respectively. In all of them, color component values are in the [0, 1.f] range.
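As a sketch of what "normalized fixed-point" means here (the `unormToFloat` helper is made up for illustration, not part of any API):

```javascript
// Decode an unsigned normalized (unorm) integer of the given bit width
// to a float in [0, 1]: the stored maximum always maps to exactly 1.0.
function unormToFloat(value, bits) {
  return value / (Math.pow(2, bits) - 1);
}

unormToFloat(255, 8);    // R8:       255   -> 1.0
unormToFloat(1023, 10);  // RGB10_A2: 1023  -> 1.0
unormToFloat(65535, 16); // R16:      65535 -> 1.0
unormToFloat(512, 10);   // mid-range value, ~0.5005
```

So the representable range is identical across these formats; only the step size between adjacent values differs.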

This is important for HDR and standard 8-bit color video display on HDR displays (10-bit color depth).

@zhenyao (Contributor) commented Jan 25, 2017

I see. We can discuss this in the next working group meeting.

@kenrussell @jdashg @kainino0x

@kenrussell (Member)

@astojilj thanks for pointing this out.

Presumably in order to properly scale up 8-bit-per-channel content to 10-bit, it's necessary to do an intermediate conversion through floating-point? On the CPU, do you know of a cheaper way?

@MarkCallow (Collaborator)

For normalized fixed-point formats like R8, GL_RGB10_A2 or R16 (which is unfortunately not available in ES3, but I'm using it as an example here), the value 1.f corresponds to 255, 1023 and 65535, respectively. In all of them, color component values are in the [0, 1.f] range.

I do not think scaling up is the correct thing to do, at least not in every situation. Scenes authored for regular displays may look uncomfortably bright if mapped to the full dynamic range.

@astojilj (Contributor, Author)

@kenrussell
Sorry for the late response on this.
Good point, we cannot focus only on HDR content here, as we (Chromium) already scale up for half_float and float. And for HDR output, half_float and float in a way also cover RGB10_A2 deep color, as they can support HDR with deeper color (12-bit, 16-bit) than HDR10.

On the CPU, do you know of a cheaper way?

Only R10 = (R8<<2) | (R8>>6) comes to my mind, but glCopyTexImage2D should work.
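The bit-replication trick mentioned above can be compared against exact rescaling; a sketch with made-up helper names, assuming 8-bit to 10-bit expansion:

```javascript
// 8-bit -> 10-bit by bit replication: shift left by two and fill the
// two new low bits with the two high bits of the source value.
function replicate8to10(x) {
  return (x << 2) | (x >> 6);
}

// Exact rescaling for comparison.
function scale8to10(x) {
  return Math.round(x * 1023 / 255);
}

replicate8to10(0);   // 0    (matches scale8to10(0))
replicate8to10(255); // 1023 (matches scale8to10(255))
// For intermediate inputs the two can differ by one 10-bit step,
// e.g. x = 63: replication gives 252, exact rounding gives 253.
```

Replication hits both endpoints exactly and stays within one least-significant bit of the exact result everywhere else, which is why it is attractive as a cheap CPU-side conversion.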

@msc-
Agreed, it is a complex topic. I'm a noob on this and guessed there is a need for dithering, as for 565<->888 conversions.
It seems more complicated than that (e.g. companding of HDR images). However, either it is outside the scope of this spec, or we need to reconsider half_float and float as well, and also rendering HDR content.

@kenrussell (Member)

@astojilj Chromium does no scaling when uploading videos to FP16 and FP32 textures. These texture types can represent the value 1.0 exactly, as well as values larger than 1.0. When uploading a video element to a floating-point texture, the maximum value per channel will be 1.0. The user is responsible for any scaling up.

It's not clear to me what the rules should be when uploading videos to RGB10_A2 textures. If the source video is already in a texture, then sampling it will produce values between 0.0 and 1.0, and when writing those values to the target RGB10_A2 texture, they've implicitly been scaled up. A scaling by (255.0 / 1023.0) would be needed to scale them back down.

For what it's worth, CopyTexImage2D will only work if the video decoding pipeline has already produced an RGB10_A2 texture. If the source video's texture is RGBA8, then there will be a mismatch between the sizes of the components, so the copy would fail (OpenGL ES 3.0.5 section 3.8.5 "Alternate Texture Image Specification Commands", p. 140).

Please help define the rules you want to see so that we can review them and see if they're implementable. Thanks.

@astojilj (Contributor, Author) commented Feb 2, 2017

@kenrussell With scaling up, I was referring to converting lower-precision 8-bit data to a higher-precision format (float, half_float or RGB10_A2) while keeping the same value in the [0, 1] range.

Please help define the rules you want to see so that we can review them and see if they're implementable.

From that point of view, RGB10_A2, being a normalized fixed-point format, defines values in [0, 1] the same way as RGBA8 or GL_RGB565.

To transfer the precision from HDR content to RGB10_A2 texture, we use intermediate floating-point texture, as you mentioned, and render to RGB10_A2 or use copyTexImage2D to RGB10_A2 (EXT_color_buffer_float).

For non-HDR content, no scaling is needed - or, let's say, it would happen implicitly when uploading to the intermediate floating-point texture.

Note: edited this after @zhenyao 's comment. Thanks.

@zhenyao (Contributor) commented Feb 3, 2017

Note that the extension EXT_color_buffer_float lifts this limitation and you can use copyTexImage2D on float formats. https://www.khronos.org/registry/webgl/extensions/EXT_color_buffer_float/

@kenrussell (Member)

@astojilj what about @msc-'s comment above, that non-HDR content (with values per channel between 0.0 and 1.0) should probably not map to the full 0.0...1.0 range in RGB10_A2 textures?

Please, help clearly define all of the rules here. We are relying on you to study this problem, test the behavior, and produce both the specification change as well as conformance tests.

@astojilj (Contributor, Author)

@kenrussell
OK. Let me do so.

should probably not map to

Let me put it like this: the mapping we already have now for non-HDR fixed 8-bit content ([0...255], i.e. [0...1.0]) to float or half_float (still the [0...1.0] range) should also keep the values correct for the non-HDR fixed 8-bit [0...255] -> 10-bit fixed range.

@kenrussell (Member)

@astojilj thanks - it wasn't clear to me from your reply whether your quote was intended to be your proposal. If so, it isn't complete. Waiting for a clear and self-contained proposal in the form of a pull request which includes the spec update and a new conformance test. Thanks for your help.

@kenrussell (Member)

Per offline discussion with Fredrik Hubinette on Chrome's media team, his opinion is that the value returned to shaders should remain unchanged. So (paraphrasing and quoting him):

  • If the shader gets values between 0.0 and 1.0, they should still be 0.0..1.0 values even if the texture gets copied to a 10-bit or 1000-bit texture

  • But if the shader gets values between 0..255, it should still get 0..255 values after it gets copied to a 10-bit texture

So, for RGB10_A2, which when sampled returns values between 0.0 and 1.0:

  • If the texture data remains on the GPU, the GPU-GPU copy to the 10-bit texture should not do any scaling (since the source values will be between 0.0 and 1.0).

  • If the source texture data was on the CPU, then it would need to be scaled up from the 0..255 range to the 0..1023 range.

Comments? (@astojilj @msc- in particular) Thanks.

@astojilj (Contributor, Author)

This is in line with the report. Thanks for clarifying it.
I have some other work to complete during this and next week and could start working on this after that.

@astojilj (Contributor, Author)

Some related work: HDR video decoding is in progress at https://bugs.chromium.org/p/chromium/issues/detail?id=682416
@kenrussell, I'm not able to assign this to myself - anyway, I'm working on it.
I opened a Khronos account yesterday.

@kenrussell (Member)

Thanks @astojilj , assigned this issue to you. Looking forward to your contributions.

@MarkCallow (Collaborator)

So, for RGB10_A2, which when sampled returns values between 0.0 and 1.0:

If the texture data remains on the GPU, the GPU-GPU copy to the 10-bit texture should not do any scaling (since the source values will be between 0.0 and 1.0).

If the source texture data was on the CPU, then it would need to be scaled up from the 0..255 range to the 0..1023 range.

There appears to be a hole in the OpenGL ES 3.x specs regarding the above. The second is clearly not supported, as the combination of GL_RGBA, GL_UNSIGNED_BYTE and GL_RGB10_A2 is not shown as a valid combination in table 8.2.

However according to table 16.4 RGBA8 and RGB10_A2 are compatible internal formats. After several indirections, we find on pp 220,221 (ES 3.2 spec) that the conversion should happen based on "compatible formats" in table 8.27 but RGB10_A2 is not listed anywhere in that table. I think table 16.4 is in error showing RGBA8 and RGB10_A2 as compatible.

I don't think you can rely on any consistent behavior across implementations with regards to a GPU->GPU copy between these types.

But if the shader gets values between 0..255, it should still get 0..255 values after it gets copied to a 10-bit texture

The only way a shader would see such values is with an integer or floating point texture. Tying the "source data on the CPU" to the above comment doesn't make sense to me.

I think that TexImageSources should be considered to be GL_RGB8, GL_UNSIGNED_BYTE, and the compatibility rules in OpenGL ES should apply. In other words, it should be an error to attempt to upload one to an RGB10_A2 texture.

@astojilj (Contributor, Author)

If the source texture data was on the CPU, then it would need to be scaled up from the 0..255 range to the 0..1023 range.

There appears to be a hole in the OpenGL ES 3.x specs regarding the above. The second is clearly not supported as the combination of GL_RGBA, GL_UNSIGNED_BYTE and GL_RGB10_A2 are not shown as a valid combination in table 8.2.

There might be a misunderstanding here - Table 8.2 of ES 3.2 (November '16) that you are referring to includes an entry for:
RGBA / UNSIGNED_INT_2_10_10_10_REV / 4 / RGB10_A2, RGB5_A1.

However according to table 16.4 RGBA8 and RGB10_A2 are compatible internal formats. After several indirections, we find on pp 220,221 (ES 3.2 spec) that the conversion should happen based on "compatible formats" in table 8.27 but RGB10_A2 is not listed anywhere in that table.

This is ES 3.2 / Core 4.2 related. We need to follow ES 3.0 for WebGL.

I think table 16.4 is in error showing RGBA8 and RGB10_A2 as compatible.

AFAIK, 16.4 in ES 3.2 should be fine - but this table is related to CopyImageSubData (not available in ES 3.0, nor in the Core 4.1 profile supported on OSX). Anyway, there are no plans to use this method.

The only way a shader would see such values is with an integer or floating point texture.

UNSIGNED_INT_2_10_10_10_REV support is already in WebGL 2 spec, fine to use with Uint32Array data. We are here extending the usage to HDR video.
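For illustration, a sketch of packing UNSIGNED_INT_2_10_10_10_REV pixels into a Uint32Array (the `pack1010102` helper is hypothetical; the REV layout places A in bits 30..31, B in 20..29, G in 10..19 and R in 0..9):

```javascript
// Pack r, g, b in [0, 1023] and a in [0, 3] into one 32-bit value
// with the UNSIGNED_INT_2_10_10_10_REV layout. The >>> 0 forces the
// result to an unsigned 32-bit integer (JS bitwise ops are signed).
function pack1010102(r, g, b, a) {
  return ((a << 30) | (b << 20) | (g << 10) | r) >>> 0;
}

const pixels = new Uint32Array([pack1010102(1023, 512, 0, 3)]);

// With a WebGL2 context `gl`, this data could then be uploaded as:
//   gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB10_A2, 1, 1, 0,
//                 gl.RGBA, gl.UNSIGNED_INT_2_10_10_10_REV, pixels);
```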

To me this all looks clear. A possible cause for misunderstanding is that @kenrussell and I already started discussing here how to implement this in Chromium. This is related to its multi-process architecture: for video upload we have two paths: "source texture data on the CPU" for some platforms (e.g. Windows), and "GPU-GPU" for platforms supporting interprocess sharing of GPU buffers (e.g. IOSurfaces on OSX).

@kenrussell (Member)

There appears to be a hole in the OpenGL ES 3.x specs regarding the above. The second is clearly not supported, as the combination of GL_RGBA, GL_UNSIGNED_BYTE and GL_RGB10_A2 is not shown as a valid combination in table 8.2.

However according to table 16.4 RGBA8 and RGB10_A2 are compatible internal formats. After several indirections, we find on pp 220,221 (ES 3.2 spec) that the conversion should happen based on "compatible formats" in table 8.27 but RGB10_A2 is not listed anywhere in that table. I think table 16.4 is in error showing RGBA8 and RGB10_A2 as compatible.

I don't think you can rely on any consistent behavior across implementations with regards to a GPU->GPU copy between these types.

The browser wouldn't necessarily rely on CopyImageSubData, or even CopyTexSubImage2D. In http://crbug.com/612542 @qkmiao has made tremendous contributions to add more strategies for doing GPU-to-GPU texture copies. Since RGB10_A2 is guaranteed to be color-renderable, in this case, the RGBA8 video texture would be texture mapped onto a quad, and drawn into the destination RGB10_A2 texture.

But if the shader gets values between 0..255, it should still get 0..255 values after it gets copied to a 10-bit texture

The only way a shader would see such values is with an integer or floating point texture.

Agree.

Tying the "source data on the CPU" to the above comment doesn't make sense to me.

The reason for mentioning these at the same time is to motivate the decision to expose the same range of values to the user's shader when sampling the resulting texture, regardless of the per-channel bit depth of the texture.

In ES 3.0.5 section 2.1.6.1 "Conversion from Normalized Fixed-Point to Floating-Point" it's clear that when uploading UNSIGNED_INT_2_10_10_10_REV data to an RGB10_A2 texture, the value 0x3FF for the R, G, and B channels maps to a sampled value of 1.0 in the shader. Scaling up from 0x0..0xFF to 0x0..0x3FF would have to be done when uploading a software-decoded 8-bit-per-channel video to an RGB10_A2 texture. As pointed out above, no such scaling up would need to be done if the video was already hardware-decoded into an RGBA8 texture and it were copied on the GPU to the RGB10_A2 texture.
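A small sketch of why the scale-up matters, using the conversion rule from section 2.1.6.1 (a stored b-bit value c samples as c / (2^b - 1)); the `sample` helper name is made up:

```javascript
// Sampled float value of a b-bit unorm component,
// per OpenGL ES 3.0.5 section 2.1.6.1.
const sample = (c, bits) => c / (Math.pow(2, bits) - 1);

// 8-bit white (255) stored unscaled in a 10-bit field no longer
// samples as white:
sample(255, 10); // ~0.2493

// Scaling up to the 10-bit range restores the intended value:
sample(Math.round(255 * 1023 / 255), 10); // 1.0
```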

I think that TexImageSources should be considered to be GL_RGB8, GL_UNSIGNED_BYTE, and the compatibility rules in OpenGL ES should apply. In other words, it should be an error to attempt to upload one to an RGB10_A2 texture.

This restriction would be artificially constraining. There's no conceptual problem with uploading videos to RGB10_A2 textures, since this is an unsigned normalized format like RGBA8. The rules simply need to be defined and solid conformance tests written.

@MarkCallow (Collaborator)

Thanks @kenrussell. I was in the middle of composing a response to @astojilj but yours has saved me the trouble of finishing it.

I think that TexImageSources should be considered to be GL_RGB8, GL_UNSIGNED_BYTE, and the compatibility rules in OpenGL ES should apply. In other words, it should be an error to attempt to upload one to an RGB10_A2 texture.

This restriction would be artificially constraining. There's no conceptual problem with uploading videos to RGB10_A2 textures, since this is an unsigned normalized format like RGBA8. The rules simply need to be defined and solid conformance tests written.

I agree there is no conceptual problem but it is an extension over OpenGL ES 3. Do you intend for WebGL to also support uploading RGBA UNSIGNED_BYTE data to an RGB10_A2 texture or is this only for TexImageSources?

But if the shader gets values between 0..255, it should still get 0..255 values after it gets copied to a 10-bit texture

Does this mean that if you are uploading UNSIGNED_BYTE-equivalent data from the CPU to an RGB10_A2 texture, you will copy the component values as is (except for A, of course)? So a value of 255, which in an RGBA8 texture would become 1.0, will become about 0.25 (255/0x3FF)? If you actually have an HDR display then that may be the right thing. If you don't, then it seems like it would be the wrong thing.

@kenrussell (Member)

I agree there is no conceptual problem but it is an extension over OpenGL ES 3. Do you intend for WebGL to also support uploading RGBA UNSIGNED_BYTE data to an RGB10_A2 texture or is this only for TexImageSources?

This is only for TexImageSources. The intent is to support HDR videos well in WebGL.

But if the shader gets values between 0..255, it should still get 0..255 values after it gets copied to a 10-bit texture

Does this mean that if you are uploading UNSIGNED_BYTE-equivalent data from the CPU to an RGB10_A2 texture, you will copy the component values as is (except for A, of course)? So a value of 255, which in an RGBA8 texture would become 1.0, will become about 0.25 (255/0x3FF)? If you actually have an HDR display then that may be the right thing. If you don't, then it seems like it would be the wrong thing.

No. The sentence above was talking about what the user's shader sees when it samples the texture, and was specifically talking about integer format textures (as well as floating-point). RGB10_A2 is unsigned normalized, so the range the shader sees when it samples the texture is always 0.0..1.0. This is the same as for RGBA8 internalformat textures.

For this reason, if the video element is performing decoding on the CPU, when uploading to RGB10_A2, the decoded RGBA8 values will have to be scaled up from (0..255) to (0..1023) in order for the shader to see the full (0.0..1.0) range. This is necessary regardless of whether the user has an HDR display. (Actually outputting HDR from WebGL is a different question.) The scaling is a more complex operation than just shifting left by two bits. That's a little concerning, but the fast (and presumably common) case will be when hardware-accelerated video decoding is in use, and will be a simple GPU-GPU copy.

@MarkCallow (Collaborator)

For this reason, if the video element is performing decoding on the CPU, when uploading to RGB10_A2, the decoded RGBA8 values

I think this has been a disconnect for me. If video is decoding into RGBA8 or RGB8 values, then it isn't an HDR video is it?

@astojilj (Contributor, Author)

For this reason, if the video element is performing decoding on the CPU, when uploading to RGB10_A2, the decoded RGBA8 values

I think this has been a disconnect for me. If video is decoding into RGBA8 or RGB8 values, then it isn't an HDR video is it?

It is an implementation detail - a constraint of the current implementation of HDR10 (High 10 profile) - and the work on implementing it is ongoing.

astojilj added a commit to astojilj/WebGL that referenced this issue Apr 19, 2017
astojilj added a commit to astojilj/WebGL that referenced this issue Apr 19, 2017
astojilj added a commit to astojilj/WebGL that referenced this issue Apr 21, 2017
kenrussell added a commit that referenced this issue Apr 21, 2017
kenrussell added a commit that referenced this issue Apr 21, 2017
@kenrussell (Member)

Reopening this as the above pull request was reverted for the moment.

@kenrussell kenrussell reopened this Apr 21, 2017
@kenrussell (Member)

Aleks, once https://codereview.chromium.org/2821363003/ lands, could you please put up another pull request like #2375 re-landing this? I could also revert the revert if you like, but I think the commit history would be cleaner with a fresh pull request.

Thanks.

@astojilj (Contributor, Author)

Thanks. Yes, I'll make another one.

astojilj added a commit to astojilj/WebGL that referenced this issue Apr 27, 2017
Splitting part of the KhronosGroup#2375 to
allow landing https://crrev.com/2821363003/ before landing the
KhronosGroup#2375.

This is related to KhronosGroup#2274: GL_RGB10_A2 and texImageSource variants of tex*Image*
kenrussell pushed a commit that referenced this issue Apr 27, 2017
Splitting part of the #2375 to
allow landing https://crrev.com/2821363003/ before landing the
#2375.

This is related to #2274: GL_RGB10_A2 and texImageSource variants of tex*Image*
MXEBot pushed a commit to mirror/chromium that referenced this issue May 1, 2017
…res.

See: KhronosGroup/WebGL#2274

BUG=714240

Review-Url: https://codereview.chromium.org/2821363003
Cr-Commit-Position: refs/heads/master@{#468257}
astojilj added a commit to astojilj/WebGL that referenced this issue May 14, 2017
…tex(Sub)Image*

Chromium implementation landed: https://crrev.com/2821363003/.
Verified by running tests locally on OSX.