Clarify the use of color volume metadata #98
Conversation
What if both ST2086 max luminance and (HDR10) MaxCLL are available? Which one to use?

We're also using mastering display metadata in the wayland color management protocol to let clients define the content color volume. One issue that we're not sure how to handle is differences in the white point of the color space and the mastering display white point. It doesn't seem to be specified anywhere if the mastering display volume is supposed to be chromatically adjusted to the color space or not. cc @ppaalanen

Yes indeed, thanks for the CC. I've always been assuming that no color gamut (or tone) mapping is happening between the content color encoding and the mastering display, but now that I think of it, that too is just an assumption I have made. What justifies this assumption, or am I wrong? The assumption seems to be required for critical viewing and mastering to be meaningful.

But then, if color gamut mapping is not done (apart from clipping, perhaps), why should white point chromatic adaptation be done? And if it is done, then how is it done? After all, doing chromatic adaptation seems to be the rule whenever white points differ, but is this an exception?

These questions apply when one wants to fill in a color volume description based on a mastering display description.
hdr_html_canvas_element.md (Outdated)

```
If omitted, `minimumLuminance` is equal to 0.

The color volume is nominal because it MAY be smaller or larger than the actual
color volume of image content, but SHOULD not be larger.
```
Do you instead mean

```diff
-color volume of image content, but SHOULD not be larger.
+color volume of image content, but SHOULD not be smaller.
```

?
If the nominal color volume is smaller than the actual image content color volume, and color gamut mapping is driven by the nominal volume, then that would result in unexpected color clipping or worse.
I agree. If it is smaller, that means the content contains values that were out of gamut for the mastering display. That seems difficult to explain. But certainly, the color volume of a given image may easily be smaller than the gamut of the mastering display (a grayscale or sepia image; an image that uses primarily cool or warm hues, for example).
Thanks for catching the typo.
It seems clear that if the mastering color volume uses a different white point from the color space, then it should be chromatically adapted so that the corresponding color volume in the color space is known. Within reason, white will always appear white, but the appearance of other colors will change, and that is precisely what chromatic adaptation accomplishes (prediction of corresponding colors).
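As an editorial illustration of the adaptation step described above (not from the thread): a Bradford chromatic adaptation of an xy chromaticity from the mastering display white point to the color space white point. The Bradford matrix and its inverse are the standard published values; the white points below (DCI white and D65) are example values chosen for illustration.

```javascript
// Standard Bradford cone-response matrix and its inverse.
const BRADFORD = [
  [ 0.8951,  0.2664, -0.1614],
  [-0.7502,  1.7135,  0.0367],
  [ 0.0389, -0.0685,  1.0296],
];
const BRADFORD_INV = [
  [ 0.9869929, -0.1470543, 0.1599627],
  [ 0.4323053,  0.5183603, 0.0492912],
  [-0.0085287,  0.0400428, 0.9684867],
];

const mul = (m, v) => m.map((row) => row[0] * v[0] + row[1] * v[1] + row[2] * v[2]);
const xyToXYZ = ([x, y]) => [x / y, 1, (1 - x - y) / y]; // Y normalized to 1

// Adapt an xy chromaticity from srcWhite to dstWhite: von Kries-style
// scaling of cone responses in Bradford space.
function adaptChromaticity(xy, srcWhite, dstWhite) {
  const s = mul(BRADFORD, xyToXYZ(srcWhite));
  const d = mul(BRADFORD, xyToXYZ(dstWhite));
  const cone = mul(BRADFORD, xyToXYZ(xy));
  const adapted = mul(BRADFORD_INV, cone.map((c, i) => (c * d[i]) / s[i]));
  const sum = adapted[0] + adapted[1] + adapted[2];
  return [adapted[0] / sum, adapted[1] / sum];
}

// Example: the mastering display's own white point maps onto the color
// space white point, as the "white appears white" intuition predicts.
const dci = [0.314, 0.351], d65 = [0.3127, 0.3290];
console.log(adaptChromaticity(dci, dci, d65)); // ≈ [0.3127, 0.3290]
```

Applying this to each primary of the mastering color volume would yield the corresponding volume expressed relative to the color space white point.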
hdr_html_canvas_element.md (Outdated)

```
For example, `colorVolumeMetadata` can be set according to the Mastering Display
Color Volume chunk found in a PNG image: the color volume of the image content
typically coincides with that of the mastering display.
```
Does it? I would say instead that the color volume of the image content will be a subset of that of the mastering display.
I think you are assuming that the viewer is adapted to the mastering display's white point. Can we assume that? What does the viewer in a mastering environment adapt to? Is it not mostly the monitor contents rather than the monitor's physical build (monitor white point) or the surround? What would give the monitor white away to the viewer?

Counter-example: night light; nothing gives the monitor white away, so the viewer adapts to what the content depicts as white, so white content looks white even if it is yellow/reddish compared to monitor white. OTOH, would one not intend to show content colorimetry as-is on a mastering display? Meaning no chromatic adaptation, no gamut mapping, no tone mapping? Or maybe no-one is foolish enough to have a mastering display with a different white point than the content encoding, so that this question never comes up in the first place? Would be nice to have some inside information from the industry here, how do they really do things.

My feeling is that mastering is not equivalent to end user viewing. End user viewing wants to get the best impression out of content with whatever equipment they happen to have, while mastering is about accurately inspecting the content as it is and tuning the content rather than its presentation to look as intended.
MaxCLL is supposed to match the content, so it is probably a safer value.
1,000 nits for minimum?

On 5/30/23, 11:08 AM, Pierre-Anthony Lemieux (@palemieux) wrote:

In hdr_html_canvas_element.md:

```diff
-obtained from metadata contained in a source image, and omitted otherwise.
+`colorVolumeMetadata` specifies the nominal color volume occupied by
+the image content in the CIE 1931 XYZ color space. The boundaries of the color
+volume are defined by:
+
+* the xy coordinates, as defined in [ISO
+  11664-3](https://www.iso.org/standard/74165.html), of three color primaries:
+  `redPrimaryX`, `redPrimaryY`, `greenPrimaryX`, `greenPrimaryY`,
+  `bluePrimaryX`, and `bluePrimaryY`;
+* the xy coordinates of a white point: `whitePointX` and `whitePointY`; and
+* a minimum and maximum luminance in cd/m²: `minimumLuminance` and `maximumLuminance`.
+
+If omitted, `chromaticity` is equal to the chromaticity of the color space of
+the Canvas.
+
+If omitted, `minimumLuminance` is equal to 0.
```

I think 1,000 is probably a safer value.
Unlikely to be exceeded, yes, but also extreme and useless.
1,000 is more useful, and already the default for HDR10, i.e. PQ, and HLG.

Lars

On 5/30/23, 6:55 PM, Pierre-Anthony Lemieux (@palemieux) wrote:

In the balance, I think @svgeesus' proposal might be the least terrible: set the default maximumLuminance to 10,000, which is unlikely to be exceeded.
When this spec uses the unit cd/m², are you clear which viewing environment those values are relative to? cd/m² is theoretically an absolute luminance, but any given absolute luminance value is appropriate as-is only in a specific viewing environment, if the goal is a universally consistent perception of that luminance.

When I want to make the same statement, which standard or report can I refer to? ITU, SMPTE, ...

This is in the context of the viewing environment specified in BT.2100.

The mastering display white point (as used in SMPTE ST 2086 et al.) and in this strawman are merely intended to characterize the volume (in CIE xy space) spanned by the pixels within the image -- for the purpose of generating stable and optimal tone mapping. These white points are not related to any scene or reference viewing environment illuminant. This strawman specifically does not use the term "mastering display" to avoid confusion. Makes sense?
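An editorial sketch (not from the thread) of the kind of per-image characterization described above: deriving the minimum and maximum luminance spanned by the pixels of an image from linear-light BT.2020 RGB samples, using the luminance (Y) row of the RGB-to-XYZ matrix. The function name and the pixel representation are illustrative assumptions.

```javascript
// Computes the luminance range spanned by an image's pixels, given
// linear-light BT.2020 RGB samples already scaled to cd/m².
function imageLuminanceRange(pixels /* array of [r, g, b] in cd/m² */) {
  let min = Infinity, max = -Infinity;
  for (const [r, g, b] of pixels) {
    // Y row of the BT.2020 RGB -> XYZ matrix.
    const Y = 0.2627 * r + 0.6780 * g + 0.0593 * b;
    if (Y < min) min = Y;
    if (Y > max) max = Y;
  }
  return { minimumLuminance: min, maximumLuminance: max };
}
```

The chromaticity bounds of the volume could be characterized analogously by scanning the xy coordinates of the pixels.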
Does HDR10 constrain the mastering display and/or the image pixel luminance? |
Yes, that's the fundamental description of what those parameters are. We got up to that point in the Wayland protocol design too. It is kind of enough for an interface specification.

The open question is that we do not know what to do with those numbers. How do you compute a volume in, say, the signal encoding space from the mastering parameters? How are they used to drive color gamut mapping? We haven't found good, or any, references for that yet. I don't know how to handle those parameters in a compositor, which makes the interface design... kind of blind.

I was hoping you would have contacts to find out, because I believe you will have the same questions. Maybe it's not a topic for this document under review but for another. Our Wayland discussions so far are stuck with the question at https://gitlab.freedesktop.org/pq/color-and-hdr/-/issues/18 .
All I know about HDR10 is in https://gitlab.freedesktop.org/pq/color-and-hdr/-/blob/main/doc/hdr10.md#hdr10-media-profile . It does seem to suggest delivering all that metadata. |
Sorry for not making it to the telco. Here are some further comments for your consideration, I hope I'm not wasting your time.
```
  `redPrimaryX`, `redPrimaryY`, `greenPrimaryX`, `greenPrimaryY`,
  `bluePrimaryX`, and `bluePrimaryY`;
* the xy coordinates of a white point: `whitePointX` and `whitePointY`; and
* a minimum and maximum luminance in cd/m²: `minimumLuminance` and `maximumLuminance`.
```
I think this should explicitly refer to the BT.2100 viewing environment. When this content is being displayed in some other viewing environment, the cd/m² may not be emitted literally even on capable equipment.
```diff
@@ -200,12 +200,12 @@ Add a new CanvasColorMetadata dictionary:
 dictionary CanvasColorMetadata {
-    CanvasMasteringDisplayMetadata masteringDisplayMetadata;
+    CanvasColorVolumeMetadata colorVolumeMetadata;
```
You are renaming mastering display to color volume. If the data actually held within is still mastering display information, it might imply something unintended, like not doing chromatic adaptation if one should be done.
In the Wayland protocol, we decided to keep calling this mastering display information, because we still do not know how that defines a color volume wrt. any other space.
> If the data actually held within is still mastering display information, it might imply something unintended
The data held is not mastering display information.
The data is intended to describe the contents of the image.
```js
function rec2100PQtoSRGB(r, g, b) {
  let rt = 10000 * pqEOTF(r) / 203;
  let gt = 10000 * pqEOTF(g) / 203;
  let bt = 10000 * pqEOTF(b) / 203;
  [rt, gt, bt] = matrixXYZtoRec709(matrixBT2020toXYZ(rt, gt, bt));
  const rp = Math.pow(rt, 1/2.4);
  const gp = Math.pow(gt, 1/2.4);
  const bp = Math.pow(bt, 1/2.4);
  return [rp, gp, bp];
}
```
Btw. this seems to be a direct colorimetric conversion, like a change of basis in linear algebra, rather than a color gamut mapping. You will get out-of-range sRGB values, which I presume will then be hard-clipped independently on each color channel. This will cause loss of detail and saturation.

What is the value of this example? I doubt it would ever be used in practice as-is.

There is also a mathematical problem, because a negative value raised to a fractional power results in a complex number, unless `matrixXYZtoRec709()` does clipping internally?

Curiously the demo link to sandflow below is using an almost achromatic example image which would not exhibit color gamut problems.
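The mathematical problem above is easy to demonstrate: in JavaScript, `Math.pow` with a negative base and a non-integer exponent returns `NaN`, so any out-of-gamut (negative) channel value is silently destroyed unless it is clipped before the transfer function.

```javascript
// An out-of-gamut channel value, e.g. a saturated BT.2020 red expressed
// in Rec. 709, can be negative after the matrix step.
const outOfGamut = -0.1;
const encoded = Math.pow(outOfGamut, 1 / 2.4);
console.log(Number.isNaN(encoded)); // true: the channel value is lost
```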
Good point, this is indeed a colorimetric conversion.
In CSS Color 4, all the predefined RGB spaces are defined over the extended range. We don't support 709 but do support sRGB over the extended range, as an example.
If hard clipping is expected, then the stage at which it occurs should be specified and the choice of hard clip (with associated lightness and hue changes) justified.
> What is the value of this example? I doubt it would ever be used in practice as-is.
The value of this example is to demonstrate tone-mapping from a high dynamic range image to a narrow dynamic range image, where high and narrow dynamic range refer to luminance range. It does not demonstrate mapping from a wide color range to a narrow color range, which is not a new problem (last I checked many monitors could not display the full sRGB/Rec. 709 gamut).
1. This function is misnamed: 1/2.4 is for 709 displays, not sRGB.
2. The matrix doesn't clip, so `Math.pow(x, 1/2.4)` will fail for negative values. Seems this needs a clip after the matrix.

Lars
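For reference, a hedged sketch of the function with both of the fixes above applied: channel values are hard-clipped after the matrix step (so `Math.pow` never sees a negative base), and the piecewise sRGB encoding replaces the bare 1/2.4 exponent. `pqEOTF` and the matrix helpers are written out here with standard ST 2084, BT.2020, and BT.709 constants, since the thread does not show them; the hard clip is used purely for illustration, with the hue and saturation caveats discussed above.

```javascript
// SMPTE ST 2084 (PQ) EOTF: non-linear signal in [0, 1] -> luminance as
// a fraction of 10000 cd/m².
function pqEOTF(e) {
  const m1 = 2610 / 16384, m2 = 2523 / 4096 * 128;
  const c1 = 3424 / 4096, c2 = 2413 / 4096 * 32, c3 = 2392 / 4096 * 32;
  const p = Math.pow(Math.max(e, 0), 1 / m2);
  return Math.pow(Math.max(p - c1, 0) / (c2 - c3 * p), 1 / m1);
}

// Linear BT.2020 RGB -> CIE XYZ (D65 white).
function matrixBT2020toXYZ(r, g, b) {
  return [
    0.6370 * r + 0.1446 * g + 0.1689 * b,
    0.2627 * r + 0.6780 * g + 0.0593 * b,
    0.0000 * r + 0.0281 * g + 0.9811 * b,
  ];
}

// CIE XYZ -> linear Rec. 709 RGB; may produce negative values for
// colors outside the 709 gamut.
function matrixXYZtoRec709(x, y, z) {
  return [
    3.2410 * x - 1.5374 * y - 0.4986 * z,
   -0.9692 * x + 1.8760 * y + 0.0416 * z,
    0.0556 * x - 0.2040 * y + 1.0572 * z,
  ];
}

// Piecewise sRGB encoding (IEC 61966-2-1), not a bare 1/2.4 power.
function srgbEncode(v) {
  return v <= 0.0031308 ? 12.92 * v : 1.055 * Math.pow(v, 1 / 2.4) - 0.055;
}

function rec2100PQtoSRGB(r, g, b) {
  // Scale so that HDR reference white (203 cd/m²) maps to 1.0.
  const rt = 10000 * pqEOTF(r) / 203;
  const gt = 10000 * pqEOTF(g) / 203;
  const bt = 10000 * pqEOTF(b) / 203;
  const [x, y, z] = matrixBT2020toXYZ(rt, gt, bt);
  // Hard clip to [0, 1] BEFORE the transfer function, so negative
  // (out-of-gamut) values cannot reach Math.pow.
  const clip = (v) => Math.min(Math.max(v, 0), 1);
  return matrixXYZtoRec709(x, y, z).map((v) => srgbEncode(clip(v)));
}
```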
I suggest calling it ColorPrimaries
SMPTE ST 2086, which this is based on, calls it Display Primaries.
On 6/1/23, 4:22 AM, Chris Lilley (@svgeesus) wrote:

In hdr_html_canvas_element.md:

```diff
-    dictionary ColorVolume {
+    dictionary Chromaticity {
```

ColorPrimaries would indeed be a better term here. Or perhaps Chromaticities, the plural.
@ppaalanen The numbers can be used to drive the rendering of the image to the ultimate display and to ensure that the rendering algorithm is stable over a sequence of images, since the rendering algorithm might depend on the contents of the image: mapping to a 400-nit monitor an image with pixels that range from 0 to 300 nits should ideally differ from mapping an image with pixels that range from 0 to 10,000 nits. Same for the color gamut. The demo at https://www.sandflow.com/public/tone-mapping illustrates the use of this metadata.
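To make the stability point concrete, here is an editorial sketch (not from the demo or the spec) of a luminance compression curve whose shape depends only on the metadata maximum, not on per-frame pixel statistics, so it stays fixed across a sequence of images. An extended-Reinhard curve is used purely as an example technique.

```javascript
// Maps a pixel luminance L (cd/m²) into the display range using only
// the metadata maximum. The extended Reinhard curve maps 0 -> 0 and
// contentMax -> displayMax exactly, compressing highlights smoothly
// in between.
function toneMapLuminance(L, contentMax, displayMax) {
  if (contentMax <= displayMax) return L; // content already fits: render as-is
  const l = L / displayMax;
  const lw = contentMax / displayMax;
  return displayMax * (l * (1 + l / (lw * lw))) / (1 + l);
}

// With metadata saying the image spans 0..300 cd/m², a 400-nit monitor
// renders it untouched; with 0..10,000 cd/m² metadata, highlights are
// compressed into the same 400-nit range.
console.log(toneMapLuminance(300, 300, 400));     // 300 (no compression)
console.log(toneMapLuminance(10000, 10000, 400)); // ≈ 400 (peak maps to peak)
```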
Closes #97