
Clarify the use of color volume metadata #98

Merged · 6 commits · Jun 2, 2023

Conversation

@palemieux (Contributor)

  • specify that color volume metadata defines the nominal color volume of the image content
  • specify how mastering display metadata can be used to set color volume metadata
  • provide an example of how color volume metadata is used when tone mapping
  • specify that color volume metadata should be used when rendering a temporal sequence of images

Closes #97

@LBorgSMPTE commented May 17, 2023

What if both ST2086 max luminance and (HDR10) MaxCLL are available? Which one to use?
(Use of MaxCLL over max luminance might be compatible with HDR10 TV sets. Pls confirm)

@swick commented May 22, 2023

We're also using mastering display metadata in the Wayland color management protocol to let clients define the content color volume. One issue that we're not sure how to handle is a difference between the white point of the color space and the mastering display white point. It doesn't seem to be specified anywhere if the mastering display volume is supposed to be chromatically adjusted to the color space or not.

cc @ppaalanen

@ppaalanen

Yes indeed, thanks for the CC.

I've always been assuming that no color gamut (or tone) mapping happens between the content color encoding and the mastering display, but now that I think of it, that too is just an assumption I have made. What justifies this assumption, or am I wrong?

The assumption seems to be required for critical viewing and mastering to be meaningful. But then, if color gamut mapping is not done (apart from clipping, perhaps), why should white point chromatic adaptation be done? And if it is done, then how is it done?

After all, doing chromatic adaptation seems to be the rule whenever white points differ, but is this an exception?

These questions apply when one wants to fill in a color volume description based on a mastering display description.

If omitted, `minimumLuminance` is equal to 0.

The color volume is nominal because it MAY be smaller or larger than the actual
color volume of image content, but SHOULD not be larger.


Do you instead mean

Suggested change:
- color volume of image content, but SHOULD not be larger.
+ color volume of image content, but SHOULD not be smaller.

?

If the nominal color volume is smaller than the actual image content color volume, and color gamut mapping is driven by the nominal volume, then that would result in unexpected color clipping or worse.

Contributor


I agree. If it is smaller, that means the content contains values that were out of gamut for the mastering display. That seems difficult to explain. But certainly, the color volume of a given image may easily be smaller than the gamut of the mastering display (a grayscale or sepia image; an image that uses primarily cool or warm hues, for example).

Contributor Author


Thanks for catching the typo.

@svgeesus (Contributor)

It doesn't seem to be specified anywhere if the mastering display volume is supposed to be chromatically adjusted to the color space or not.

It seems clear that if the mastering color volume uses a different white point to the color space, then it should be chromatically adapted so that the corresponding color volume in the color space is known. Within reason, white will always appear white but the appearance of other colors will change, and that is precisely what chromatic adaptation accomplishes (prediction of corresponding colors).
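As an illustration of that adaptation, here is a minimal sketch assuming a von Kries-style Bradford transform (the choice of Bradford, and all function names, are assumptions rather than anything in the reviewed text):

```js
// Hedged sketch: Bradford chromatic adaptation of an XYZ color from a source
// white point to a destination white point, both given as CIE xy chromaticities.
const BRADFORD = [
  [ 0.8951,  0.2664, -0.1614],
  [-0.7502,  1.7135,  0.0367],
  [ 0.0389, -0.0685,  1.0296],
];
const BRADFORD_INV = [
  [ 0.9869929, -0.1470543, 0.1599627],
  [ 0.4323053,  0.5183603, 0.0492912],
  [-0.0085287,  0.0400428, 0.9684867],
];

// 3x3 matrix times 3-vector.
function mul(m, v) {
  return m.map((row) => row[0] * v[0] + row[1] * v[1] + row[2] * v[2]);
}

// xy chromaticity to XYZ with Y normalized to 1.
function xyToXYZ([x, y]) {
  return [x / y, 1, (1 - x - y) / y];
}

// Adapt an XYZ triple from srcWhiteXy to dstWhiteXy (both [x, y] pairs).
function adaptXYZ(xyz, srcWhiteXy, dstWhiteXy) {
  const srcLMS = mul(BRADFORD, xyToXYZ(srcWhiteXy));
  const dstLMS = mul(BRADFORD, xyToXYZ(dstWhiteXy));
  const lms = mul(BRADFORD, xyz);
  const scaled = lms.map((c, i) => (c * dstLMS[i]) / srcLMS[i]);
  return mul(BRADFORD_INV, scaled);
}
```

Under that assumption, each mastering primary and the white point would be converted to XYZ, passed through `adaptXYZ()`, and converted back to xy to obtain the corresponding color volume in the target color space.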



For example, `colorVolumeMetadata` can be set according to the Mastering Display
Color Volume chunk found in a PNG image: the color volume of the image content
typically coincides with that of the mastering display.
Contributor


Does it? I would say instead that the color volume of the image content will be a subset of that of the mastering display.
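As a rough illustration of the quoted text, `colorVolumeMetadata` might be populated from mastering-display values along these lines (the `readMasteringDisplayColorVolume` helper and the exact object shape are assumptions; the member names follow the `CanvasColorVolumeMetadata` fields quoted later in this review):

```js
// Hedged sketch only: filling in colorVolumeMetadata from a mastering display
// description, e.g. one parsed from a PNG Mastering Display Color Volume chunk.
// readMasteringDisplayColorVolume is a hypothetical helper, not a real API.
const mdcv = readMasteringDisplayColorVolume(imageBytes); // hypothetical
const colorMetadata = {
  colorVolumeMetadata: {
    redPrimaryX: mdcv.redX,     redPrimaryY: mdcv.redY,
    greenPrimaryX: mdcv.greenX, greenPrimaryY: mdcv.greenY,
    bluePrimaryX: mdcv.blueX,   bluePrimaryY: mdcv.blueY,
    whitePointX: mdcv.whiteX,   whitePointY: mdcv.whiteY,
    minimumLuminance: mdcv.minLuminance, // cd/m²
    maximumLuminance: mdcv.maxLuminance, // cd/m²
  },
};
```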

@ppaalanen

It doesn't seem to be specified anywhere if the mastering display volume is supposed to be chromatically adjusted to the color space or not.

It seems clear that if the mastering color volume uses a different white point to the color space, then it should be chromatically adapted so that the corresponding color volume in the color space is known. Within reason, white will always appear white but the appearance of other colors will change, and that is precisely what chromatic adaptation accomplishes (prediction of corresponding colors).

I think you are assuming that the viewer is adapted to the mastering display's white point. Can we assume that? What does the viewer in a mastering environment adapt to? Is it not mostly the monitor contents rather than monitor physical build (monitor white point) or the surround? What would give the monitor white away to the viewer?

Counter-example: night light; nothing gives the monitor white away, so the viewer adapts to what the content depicts as white, and white content looks white even if it is yellow/reddish compared to the monitor white.

OTOH, would one not intend to show content colorimetry as-is on a mastering display? Meaning no chromatic adaptation, no gamut mapping, no tone mapping?

Or maybe no-one is foolish enough to have a mastering display with a different white point than the content encoding, so that this question never comes up in the first place?

It would be nice to have some inside information from the industry here on how they really do things.

My feeling is that mastering is not equivalent to end user viewing. End user viewing wants to get the best impression out of the content with whatever equipment the viewer happens to have, while mastering is about accurately inspecting the content as it is and tuning the content itself, rather than its presentation, to look as intended.

@palemieux (Contributor, Author)

(Use of MaxCLL over max luminance might be compatible with HDR10 TV sets. Pls confirm)

MaxCLL is supposed to match the content, so it is probably a safer value.
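A minimal sketch of that preference, assuming an absent MaxCLL is signalled as 0 (the function and parameter names are illustrative, not from the reviewed text):

```js
// Hedged sketch: pick a nominal maximum luminance for color volume metadata.
// MaxCLL (HDR10) is computed from the delivered pixels, so when present it
// bounds the content more tightly than the ST 2086 mastering-display maximum.
function nominalMaxLuminance(maxCLL, masteringDisplayMaxLuminance) {
  return maxCLL > 0 ? maxCLL : masteringDisplayMaxLuminance;
}
```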

@LBorgSMPTE commented May 30, 2023 via email

@LBorgSMPTE commented May 31, 2023 via email

@ppaalanen

When this spec uses the unit cd/m², are you clear which viewing environment those values are relative to?

cd/m² is theoretically an absolute luminance, but any given absolute luminance value is appropriate as-is only in a specific viewing environment if the goal is a universally consistent perception of that luminance.

@ppaalanen

1,000 is more useful, and already the default for HDR10, i.e. PQ, and HLG

When I want to make the same statement, which standard or report can I refer to? ITU, SMPTE, ...

@palemieux (Contributor, Author)

When this spec uses the unit cd/m², are you clear which viewing environment those values are relative to?

This is in the context of the viewing environment specified in BT.2100.

@palemieux (Contributor, Author)

One issue that we're not sure how to handle is differences in the white point of the color space and the mastering display white point.

The mastering display white point (as used in SMPTE ST 2086 et al.) and the one in this strawman are merely intended to characterize the volume (in CIE xy space) spanned by the pixels within the image -- for the purpose of generating stable and optimal tone mapping. These white points are not related to any scene or reference viewing environment illuminant.

This strawman specifically does not use the term "mastering display" to avoid confusion.

Makes sense?

@palemieux (Contributor, Author)

1,000 is more useful, and already the default for HDR10, i.e. PQ, and HLG

Does HDR10 constrain the mastering display and/or the image pixel luminance?

@ppaalanen

The mastering display white point (as used in SMPTE ST 2086 et al.) and the one in this strawman are merely intended to characterize the volume (in CIE xy space) spanned by the pixels within the image -- for the purpose of generating stable and optimal tone mapping. These white points are not related to any scene or reference viewing environment illuminant.

This strawman specifically does not use the term "mastering display" to avoid confusion.

Makes sense?

Yes, that's the fundamental description of what those parameters are. We got up to that point in the Wayland protocol design too. It is kind of enough for an interface specification.

The open question is that we do not know what to do with those numbers. How do you compute a volume in, say, the signal encoding space from the mastering parameters? How are they used to drive color gamut mapping? We haven't found good, or any, references for that yet. I don't know how to handle those parameters in a compositor, which makes the interface design... kind of blind. I was hoping you would have contacts to find out, because I believe you will have the same questions. Maybe it's not a topic for this document under review but for another.

Our Wayland discussions so far are stuck with the question at https://gitlab.freedesktop.org/pq/color-and-hdr/-/issues/18 .

@ppaalanen

Does HDR10 constrain the mastering display and/or the image pixel luminance?

All I know about HDR10 is in https://gitlab.freedesktop.org/pq/color-and-hdr/-/blob/main/doc/hdr10.md#hdr10-media-profile . It does seem to suggest delivering all that metadata.

@ppaalanen left a comment


Sorry for not making it to the telco. Here are some further comments for your consideration; I hope I'm not wasting your time.

`redPrimaryX`, `redPrimaryY`, `greenPrimaryX`, `greenPrimaryY`,
`bluePrimaryX`, and `bluePrimaryY`;
* the xy coordinates of a white point: `whitePointX` and `whitePointY`; and
* a minimum and maximum luminance in cd/m²: `minimumLuminance` and `maximumLuminance`.


I think this should explicitly refer to the BT.2100 viewing environment. When this content is being displayed in some other viewing environment, the cd/m² may not be emitted literally even on capable equipment.

@@ -200,12 +200,12 @@ Add a new CanvasColorMetadata dictionary:

```idl
dictionary CanvasColorMetadata {
-  CanvasMasteringDisplayMetadata masteringDisplayMetadata;
+  CanvasColorVolumeMetadata colorVolumeMetadata;


You are renaming mastering display to color volume. If the data actually held within is still mastering display information, it might imply something unintended, like not doing chromatic adaptation if one should be done.

In the Wayland protocol, we decided to keep calling this mastering display information, because we still do not know how that defines a color volume wrt. any other space.

Contributor Author


If the data actually held within is still mastering display information, it might imply something unintended

The data held is not mastering display information.

The data is intended to describe the contents of the image.

Comment on lines +335 to 344
function rec2100PQtoSRGB(r, g, b) {
let rt = 10000 * pqEOTF(r) / 203;
let gt = 10000 * pqEOTF(g) / 203;
let bt = 10000 * pqEOTF(b) / 203;
[rt, gt, bt] = matrixXYZtoRec709(matrixBT2020toXYZ(rt, gt, bt));
const rp = Math.pow(rt, 1/2.4);
const gp = Math.pow(gt, 1/2.4);
const bp = Math.pow(bt, 1/2.4);
return [rp, gp, bp];
}


Btw. this seems to be a direct colorimetric conversion, like a change of basis in linear algebra, rather than a color gamut mapping. You will get out-of-range sRGB values, which I presume will then be hard-clipped independently on each color channel. This will cause loss of detail and saturation.

What is the value of this example? I doubt it would ever be used in practice as-is.

There is also a mathematical problem, because a negative value raised to a fractional power results in a complex number, unless matrixXYZtoRec709() does clipping internally?

Curiously, the demo link to sandflow below uses an almost achromatic example image, which would not exhibit color gamut problems.
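For what the clipping might look like, here is a minimal sketch that clamps each channel to [0, 1] before the 1/2.4 power. It is plain per-channel hard clipping, with the lightness and hue caveats noted above; an assumption for illustration, not behavior specified by the reviewed text:

```js
// Clamp linear Rec.709 values into [0, 1] before applying the 1/2.4 power so
// that Math.pow never sees a negative base (which would return NaN).
function encodeWithClip(rt, gt, bt) {
  const clamp01 = (v) => Math.min(1, Math.max(0, v));
  return [rt, gt, bt].map((v) => Math.pow(clamp01(v), 1 / 2.4));
}
```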

Contributor


Good point, this is indeed a colorimetric conversion.

In CSS Color 4, all the predefined RGB spaces are defined over the extended range. We don't support 709 but do support sRGB over the extended range, as an example.

If hard clipping is expected, then the stage at which it occurs should be specified and the choice of hard clip (with its associated lightness and hue changes) justified.

Contributor Author


What is the value of this example? I doubt it would ever be used in practice as-is.

The value of this example is to demonstrate tone-mapping from a high dynamic range image to a narrow dynamic range image, where high and narrow dynamic range refer to luminance range. It does not demonstrate mapping from a wide color range to a narrow color range, which is not a new problem (last I checked many monitors could not display the full sRGB/Rec. 709 gamut).

@LBorgSMPTE commented Jun 1, 2023 via email

@LBorgSMPTE commented Jun 1, 2023 via email

@palemieux (Contributor, Author)

The open question is that we do not know what to do with those numbers.

@ppaalanen The numbers can be used to drive the rendering of the image to the ultimate display and to ensure that the rendering algorithm is stable over a sequence of images, since the rendering algorithm might depend on the contents of the image: mapping an image whose pixels range from 0 to 300 nits onto a 400-nit monitor would ideally be different from mapping an image whose pixels range from 0 to 10,000 nits. The same goes for the color gamut.

The demo at https://www.sandflow.com/public/tone-mapping illustrates the use of minLuminance and maxLuminance to drive the tone-mapping algorithm.
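To illustrate the idea (not the algorithm used by the demo), here is a toy luminance mapping whose shape depends on the content's nominal maximum and the display's maximum. It is a generic extended-Reinhard curve, and all names are assumptions:

```js
// Map a pixel luminance L (cd/m²) to the display, using the content's nominal
// maximum luminance to decide how much highlight compression is needed.
function toneMapLuminance(L, contentMaxNits, displayMaxNits) {
  if (contentMaxNits <= displayMaxNits) {
    // Content already fits (e.g. 0–300 nits onto a 400-nit monitor): pass through.
    return L;
  }
  // Otherwise compress highlights so contentMaxNits lands on displayMaxNits;
  // a 0–10,000 nit image therefore gets a very different curve.
  const x = L / displayMaxNits;
  const w = contentMaxNits / displayMaxNits;
  return (displayMaxNits * x * (1 + x / (w * w))) / (1 + x);
}
```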

Successfully merging this pull request may close these issues: Interpreting masteringDisplayMetadata.

5 participants