Specify what happens with texImage2D(..., gl.SRGB8, video) #3472
Comments
Other way of asking: what does the spec say happens in the example above?
This I understand. The question is: what is the input data for the non-linear function? I don't see this in the specification. Putting it the other way:
To me, none of these is a clear-cut "native format of the video". I could see arguments for both: either you take some sort of "truncation" viewpoint, or you take the encoding viewpoint. From my layman's perspective, the encoding viewpoint is more consistent: it produces the sensible texture (same texture sampling, with some variation based on the precision of the storage versus the precision of the input).
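To make the two viewpoints concrete, here is a sketch in plain JS (my illustration, not from the thread; it assumes the video's native transfer function is BT.709, which is an assumption, not something the thread states). The "truncation" viewpoint copies the stored code value across unchanged, while the "encoding" viewpoint decodes with the source transfer function and re-encodes for sRGB storage:

```javascript
// BT.709 inverse OETF: encoded video value -> linear light.
// (Assumption: the video's native encoding is BT.709.)
function rec709ToLinear(v) {
  return v < 0.081 ? v / 4.5 : Math.pow((v + 0.099) / 1.099, 1 / 0.45);
}

// sRGB OETF: linear light -> sRGB-encoded value.
function linearToSrgb(l) {
  return l <= 0.0031308 ? l * 12.92 : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

const stored = 0.5; // a mid-grey code value as stored in the video

// "Truncation" viewpoint: reuse the stored code value as the sRGB byte.
const truncated = stored;

// "Encoding" viewpoint: decode with the source transfer function,
// then re-encode for the sRGB destination.
const reencoded = linearToSrgb(rec709ToLinear(stored));

console.log(truncated, reencoded.toFixed(3)); // differ by ~12/255 codes
```

The two viewpoints disagree by roughly a dozen 8-bit code values for this mid-grey sample, which is easily visible.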
I'd define HTMLVideoElement uploads similarly to HTMLImageElement uploads:
I agree with @lexaknyazev here.
The behavior of texImage2D(SRGB, RGB, UBYTE, 0x80) is that it will decode-on-fetch to ~0.2, because 0.5 "perceptual" sRGB is just ~20% of the photons of 1.0! (https://hackmd.io/0wkiLmP7RWOFjcD13M870A?both#Physical-vs-Perceptual-So-what%E2%80%99s-rgb05-05-05-mean)

I think this gets into "do you want color management, or raw pixel data/bytes". I don't think we want to tie this into the colorspace-decode-enable/disable stuff, though. An upload into SRGB8 via RGB+UBYTE operates on raw data/bytes. By passing something to upload here, you are saying "here are values, but when you fetch from them, apply the transform".

I think the most reasonable thing is: if you want to merely store e.g. a video as SRGB, but sample the same pixel values (modulo quantization errors, because this is lossy!), then you'd need to upload to RGBA8, and draw that to an SRGB8_ALPHA8 texture/framebuffer.
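The "~0.2" figure above can be checked directly from the standard sRGB transfer function; a quick sketch in plain JS (outside WebGL):

```javascript
// sRGB decode (EOTF): encoded value in [0,1] -> linear-light value.
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// A byte of 0x80 uploaded to SRGB8 is fetched as roughly 21% linear light.
const fetched = srgbToLinear(0x80 / 255);
console.log(fetched.toFixed(3)); // ≈ 0.216
```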
I feel like I want #2 to be correct, but I want to think about it more. SRGB8_ALPHA8 not matching SRGB8 is definitely wrong, though.
Maybe the discussion contrasting "raw bytes" helps. You use an analogy example. However, I do see that the following should work consistently:
I just don't immediately see the location in the specification, nor the use case, for one of these being darker than the source while the other two match the source. If I understand correctly, none of these operate on "raw values" of anything; they operate on the idealised contents.

The other way to think about it: the video file says "in this video, the colours are in colourspace X". However, when interpreted in this colourspace, the colours are actually off, i.e. your file lies. Only when you uncorrupt the colours via an SRGB texture fetch do you get the correct values. If the video format had a header field saying "the colours are corrupted for WebGL SRGB use cases", then the decoder could interpret that, use it to adjust the colours, and we'd be able to display it correctly. At which point we'd be in the same situation, where now RGB uploads were correct and SRGB uploads would again be darker.

What's the use case for getting wrong colours from a texImage2D upload? (I'm probably missing a lot of knowledge from the video authoring -> WebGL app rendering colour management chain.)

Contrast with: I'd imagine the use case for "upload to SRGB matches upload to RGB visually" is the added precision where humans typically need it. E.g. uploading a grayscale gradient, you would get a roughly similar gradient in both cases. The difference is that with SRGB you get more distinct colour variation in the darker part of the gradient, as SRGB has more bits to spare there.
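The precision argument can be quantified with a small sketch in plain JS, counting how many of the 256 8-bit codes land in the darkest 5% of the linear-light range under each storage interpretation:

```javascript
// sRGB decode (EOTF): encoded value in [0,1] -> linear-light value.
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Count 8-bit codes whose value falls at or below 5% linear light.
let srgbCodes = 0, linearCodes = 0;
for (let c = 0; c <= 255; c++) {
  if (srgbToLinear(c / 255) <= 0.05) srgbCodes++; // SRGB8 storage
  if (c / 255 <= 0.05) linearCodes++;             // RGB8 "linear" storage
}
console.log(srgbCodes, linearCodes); // 64 vs 13: ~5x the dark-end steps
```

Roughly five times as many distinct steps are available in the dark end with sRGB-encoded storage, which is exactly the "more bits to spare there" point above.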
Even for the canvas/image/video cases, the missing part in the WebGL spec is how to convert arbitrary canvas/image/video data into the requested unpack format and type.
Our texImage(..., unpackFormat, unpackType, image) entrypoints have always been kinda strange, exactly because of what @lexaknyazev says: We need to define how to convert e.g. the image to the unpackFormat+unpackType, before we can let GL's specification take over. Currently there are effectively two phases to uploading from e.g. images:
For phase 1, we don't pay attention to internalFormat at all.
Functionally, this means that we should be uploading the same bytes to GL for both:
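A toy model of phase 1 in plain JS (my sketch, not the actual implementation): the conversion to unpackFormat+unpackType bytes never consults internalFormat, so RGB8 and SRGB8 destinations receive identical bytes.

```javascript
// Toy phase-1: convert RGBA source pixels to RGB/UNSIGNED_BYTE bytes.
// The internalFormat argument is accepted but deliberately unused,
// mirroring the "phase 1 ignores internalFormat" behaviour described above.
function phase1(rgbaPixels, internalFormat /* unused */) {
  const out = [];
  for (let i = 0; i < rgbaPixels.length; i += 4) {
    out.push(rgbaPixels[i], rgbaPixels[i + 1], rgbaPixels[i + 2]); // drop A
  }
  return out;
}

const src = [128, 64, 32, 255, 200, 10, 5, 255]; // two RGBA pixels
const forRgb8  = phase1(src, "RGB8");
const forSrgb8 = phase1(src, "SRGB8");
console.log(JSON.stringify(forRgb8) === JSON.stringify(forSrgb8)); // true
```

Only the fetch-time decode differs between the two destinations; the uploaded bytes are the same.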
One ground truth is that uploading to SRGB/RGB/UNSIGNED_BYTE uploads the raw bytes unchanged.

We did recently add support for colorspace conversion to texImage.
However, this happens before conversion+truncation to unpackFormat+unpackType, at least today.

IMO the major reason we don't have a getVideoData is that it can already be done by composing existing functionality, e.g. drawing the video to a 2D canvas and reading it back with getImageData.
All of these calls presently, for legacy reasons, imply that all color spaces are (non-linear) sRGB. It sounds like one possible answer here is adding a distinct colorspace option for this. IMO the only reasons not to add this would be:
I kinda want to take one last stab at external samplers for this reason.
Specifically, you'd also want your image/video/canvas to say "my colorspace is srgb-linear" (or IDK maybe rec709-linear?).
One thing that has to be kept in mind for any “ground truth” experiments is that, absent an implementation of the canvas color space proposal, WebGL provides an sRGB drawing buffer that masquerades as linear. The drawing buffer is sRGB because it is typically composited as-is and presented on an sRGB display, but WebGL says it is linear, and any fragment shader outputs are written to it as-is. Your fragment shader needs to do sRGB encoding to have any chance of correct colors; otherwise that sRGB texture will be decoded to linear on sampling and then written as linear to the drawing buffer.
Regards
-Mark
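The sRGB encoding a fragment shader has to apply is the inverse transfer function; sketched here in plain JS for illustration (in a real shader it would be GLSL):

```javascript
// sRGB encode (inverse EOTF): linear-light value -> encoded value.
// This is the transform a fragment shader must apply before writing to
// a drawing buffer that will be composited as sRGB.
function linearToSrgb(l) {
  return l <= 0.0031308 ? l * 12.92 : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// Writing 18% linear grey without encoding would display far too dark;
// encoded, it lands near mid-grey as intended.
const encoded = linearToSrgb(0.18);
console.log(Math.round(encoded * 255)); // ≈ 118
```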
This is why I don't like "linear" as a term. I don't really agree with "WebGL provides an sRGB drawing buffer that masquerades as linear", but it's probably a difference in word choice rather than a disagreement? I think it's clearer to say that WebGL operates on perceptually-linear (rather than physically-linear) values, within the sRGB colorspace, and that all arithmetic is done naively-mathematically-linearly between values.

A framing I like is to say that WebGL is agnostic to colors; it's pure math. Naturally the math is done "linearly", but it is not per se "linear" as a color person would use the term. The way things are sent to display is usually as a perceptually-linear encoding, so any physically-linear values get lossily quantized, but that's OK because the main value of physically-linear textures and framebuffers is better dark precision for texture-fetched shader inputs and blending, respectively. (Quantizing after all blending is complete is fine.)
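The blending point can be made concrete; a sketch in plain JS averaging black and white in the encoded domain versus in linear light:

```javascript
// Standard sRGB transfer functions.
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(l) {
  return l <= 0.0031308 ? l * 12.92 : 1.055 * Math.pow(l, 1 / 2.4) - 0.055;
}

// Blend black (0.0) and white (1.0) half-and-half.
const naive = (0.0 + 1.0) / 2;                     // math on encoded values
const physical = linearToSrgb((srgbToLinear(0.0) + srgbToLinear(1.0)) / 2);

// The "naive" mid-point holds only ~21% of white's photons; a physically
// linear blend re-encodes to a noticeably lighter value.
console.log(naive.toFixed(3), physical.toFixed(3)); // 0.500 vs ≈ 0.735
```

Doing the blend arithmetic on physically-linear values, then quantizing once at the end, avoids exactly this discrepancy.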
Thanks for the clarification!
Specify what happens with texImage2D(..., gl.SRGB8, video)
Consider the code from the test case linked below.
Which one is correct:
1. canvas, canvas2 and video match visually (e.g. follow a model similar to other internal formats such as float formats)
2. canvas is the darkest; canvas2 and video match visually (e.g. follow a model similar to texImage2D(.., data))

Similar issue: #3350
WebKit bug:
https://bugs.webkit.org/show_bug.cgi?id=222822
Test case:
https://bugs.webkit.org/attachment.cgi?id=461426