Finish ITU-R YCbCr encodings #198
Conversation
These share the same transfer function, which is therefore a separate struct instead of each space carrying its own implementation. Strictly speaking, the `RgbSpace` defined by Rec. 709 could be shared with the sRGB color space, since it uses the exact same primaries and white point, but given the difference in transfer function the advantage seems negligible.
This uses the precise floating point coefficients and ensures output values are clamped into the allowed signal range as specified. It already adds the quantization conversion for 10-bit output without adding the actual quantization itself, because the type representation is not yet clear.
This is for scene linear light, which does not depend on viewing parameters in the transfer function. For PQ there is no universal reversible transfer function given; the recommendation only describes a set of OETFs and EOTFs with which a reference viewing environment can produce the proper luminance, but this is not easily reversible.
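As a rough illustration of the coefficient and clamping behavior described above (a hedged sketch, not the PR's actual code; the function name and layout are my own), a BT.709 Y'CbCr encode might look like this:

```rust
/// BT.709 luma coefficients (precise values from the recommendation).
const KR: f64 = 0.2126;
const KB: f64 = 0.0722;

/// Encodes nonlinear R'G'B' in [0, 1] to Y'CbCr, clamping into the
/// allowed signal range: Y' in [0, 1], Cb/Cr in [-0.5, 0.5].
/// Illustrative sketch only, not code from the PR.
fn encode_bt709(rgb: [f64; 3]) -> [f64; 3] {
    let [r, g, b] = rgb;
    let kg = 1.0 - KR - KB;
    let y = KR * r + kg * g + KB * b;
    let cb = (b - y) / (2.0 * (1.0 - KB));
    let cr = (r - y) / (2.0 * (1.0 - KR));
    [
        y.clamp(0.0, 1.0),
        cb.clamp(-0.5, 0.5),
        cr.clamp(-0.5, 0.5),
    ]
}
```

Reference white encodes to `[1.0, 0.0, 0.0]`, which makes for an easy sanity check.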
Welcome back! I think the implementations look nice and clean, so I have mostly just been commenting on nit picks. The only major improvements I would like to request, outside what's in the comments, are:

- Symmetric conversion, if possible, to and from `Yuv` (now it's just to). You can also add it to the derive macro, to make it available everywhere. Also keep in mind that there is implicit clamping in `FromColor` and `IntoColor`, so you may want to change how the conversion is implemented. Let me know if the new model is unclear.
- I think I mentioned it earlier too, but unit tests (and examples in the documentation) help a lot with verifying the code both now and later. Having them as examples helps explain things too! It doesn't have to be super complicated. Just a few cases to make sure it's not off the rails, if there isn't any good test data source out there.

The more "first class citizens" these new types can be, the better! 😁
For the transfer functions, would it help if the `TransferFn` trait could be split into a `From`/`Into` pair? Not in this PR, but in the future. It's something I have considered as a possible improvement, both there and elsewhere. Are there any other changes that could help with the other parts?
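To make the idea concrete, a split pair could look roughly like this. The trait and type names here are hypothetical, not palette's actual API; the sRGB formulas are the standard ones:

```rust
/// Hypothetical "encode" half: linear light -> nonlinear signal.
trait FromLinear {
    fn from_linear(x: f64) -> f64;
}

/// Hypothetical "decode" half: nonlinear signal -> linear light.
trait IntoLinear {
    fn into_linear(x: f64) -> f64;
}

struct SrgbTransfer;

impl FromLinear for SrgbTransfer {
    fn from_linear(x: f64) -> f64 {
        // Standard sRGB OETF.
        if x <= 0.0031308 {
            12.92 * x
        } else {
            1.055 * x.powf(1.0 / 2.4) - 0.055
        }
    }
}

impl IntoLinear for SrgbTransfer {
    fn into_linear(x: f64) -> f64 {
        // Standard sRGB EOTF (inverse of the above).
        if x <= 0.04045 {
            x / 12.92
        } else {
            ((x + 0.055) / 1.055).powf(2.4)
        }
    }
}
```

A one-directional transfer like the PQ case discussed above could then implement only one half, instead of being forced to provide both.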
I think it's a good call to draw the line where it doesn't get too device or environment specific. Especially since it already covers a significant part of the process. I imagine some of it could be done dynamically instead, in a future iteration.
No more duplicate `from_f64` method, remove Into/From impls and fix the Luma conversion to use Rgb for now.
Applied fixes for the style portion of the first round of review. One question on quantization: the implementation is currently not publicly exposed. When you say:

Do you mean moving it to a new issue that just deals with this part and has a reference to a commit containing the files? Then they could be removed from the source tree for the moment.
Nice, I will have another look later. But what I meant with "moving to the issue" was to add it to #121. I see now that you intend to close it with this PR, and I'm not sure how you intend to proceed with the parts that are not done. I would like to have the code free from loose ends, so I'm trying to find a way to clean out anything that isn't ready to be used straight away. I would like to have any future plans and TODOs (including their reference material) in issues and/or on a branch until they're implemented. Anything that's released should be useful in its current state. If everything is documented, sufficiently tested, and gets the job done, it can be released at any time. That's the philosophy I'm following.
But yeah, a new issue would be fine if you want to close the current one. If the temporary code is just a few snippets, it can just be added to the issue text. Having it in a commit sounds like a good idea if it's too much.
The main portion is adding RGB color spaces, primaries, transfer functions, and YUV scaling as done here. The quantization is separate from that. In particular, full range quantization is a simple
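The sentence above is cut off, but full-range quantization to n bits is commonly just a scale-and-round. A hedged sketch (my own function name; the result is held in a `u16` since Rust has no native 10-bit integer type, as noted elsewhere in this PR):

```rust
/// Full-range 10-bit quantization: scale [0.0, 1.0] onto [0, 1023]
/// and round to the nearest code. The input is clamped first, so
/// out-of-range signals saturate rather than wrap.
/// Illustrative sketch, not code from the PR.
fn quantize_full_range_10bit(signal: f64) -> u16 {
    (signal.clamp(0.0, 1.0) * 1023.0).round() as u16
}
```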
Ok, that sounds like a good plan. You can still create issues if you would like to have a discussion space for it. I'm interested to know if there are ways to make it easier to implement the quantization, so don't hesitate to reach out when you get back to those parts.
I hope I didn't come across as accusing earlier. I wanted to make sure the expectations are as clear as possible, so you know the background to my review comments.
This is a separate commit so it can be reintroduced later.
I should make some kind of checklist for new color spaces. They are far from as simple as they used to be. I forgot to mention a couple more things earlier that are good to know:
I would strongly suggest taking inspiration from both `Rgb` and `Lab` for the traits and methods.
I have another potential problem: linear RGB does not refer to any particular absolute luminance. Thus the conversion between HDR (BT.2020/2100) and non-HDR (everything else) color spaces won't work as written. The reference would be
Right, yeah, it's not something I have thought of modelling as part of the type system. It could potentially be a wrapper (like
Worse than the slight difference introduced by multiplying/dividing by 100 a couple of times would be accidental clamping to the standard range, resulting in complete loss of high-dynamic range data. One possibility that I arrived at would be to represent this using the white point:

```rust
/// Specify a high-dynamic range whitepoint.
struct Hdr<W>(PhantomData<W>);

impl<W> Xyz<W> {
    /// Multiplies Y by 100, unclamped.
    pub fn to_hdr(self) -> Xyz<Hdr<W>> { … }
}

impl<W> Xyz<Hdr<W>> {
    /// Divides Y by 100, unclamped, but the type of course implements clamping.
    pub fn to_sdr(self) -> Xyz<W> { … }
}
```
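For what it's worth, a minimal self-contained version of that idea could look like the following. The stand-in `Xyz` and `D65` types are greatly simplified placeholders for palette's real types, and scaling all three components (not just Y) is my assumption, made so that chromaticity is preserved:

```rust
use std::marker::PhantomData;

/// Stand-in white point marker (palette's real white points carry data).
struct D65;

/// Marks a white point as high dynamic range, as in the comment above.
struct Hdr<W>(PhantomData<W>);

/// Greatly simplified stand-in for palette's Xyz, tagged by white point.
struct Xyz<W> {
    x: f64,
    y: f64,
    z: f64,
    white_point: PhantomData<W>,
}

impl<W> Xyz<W> {
    fn new(x: f64, y: f64, z: f64) -> Self {
        Xyz { x, y, z, white_point: PhantomData }
    }

    /// Scales into the HDR range (Y = 100 for SDR reference white),
    /// unclamped. All components are scaled to preserve chromaticity.
    fn to_hdr(self) -> Xyz<Hdr<W>> {
        Xyz::new(self.x * 100.0, self.y * 100.0, self.z * 100.0)
    }
}

impl<W> Xyz<Hdr<W>> {
    /// Inverse scaling back to the SDR range; any clamping is left to
    /// the SDR type itself, as suggested above.
    fn to_sdr(self) -> Xyz<W> {
        Xyz::new(self.x / 100.0, self.y / 100.0, self.z / 100.0)
    }
}
```

Because `Hdr<W>` is just a marker in the white point position, it would ride along through generic conversion code without any runtime cost.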
That would be an interesting way of representing it. Then it would indeed be carried over between all spaces in a mostly transparent way. I suppose an alternative would be to embed it in the component type instead, so for example

But go ahead and implement it with the white point, as an experiment if nothing else.
Hmm, I realized this suggestion is going to cause trouble with some implementations that rely on specific white point types or properties. #194 is what made me think of it. It would have to special-case
Hey, it's been a while and I hope all is well! I'm closing this due to inactivity and because it's not something I'm expecting to be able to continue on my own in the near future. Feel free to come back to it some day (one way or another) if you get the motivation back! |
Adds the missing RGB spaces and transfer functions of issue #121 (BT.2020, BT.2100).

There are some quantization functions here as well, but since they target 10-bit and 12-bit depths, which have no native type, there is no straightforward way of including them. They can be added separately based on the floating point signals; it's not a huge priority for the digital signals.
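Since the 10/12-bit depths come up here: the recommendations also define a narrow ("legal") range variant, where luma and chroma occupy sub-ranges of the code space. A hedged sketch of that pattern (my own function names; wider native integers stand in for the missing 10/12-bit types, and the input is assumed to be pre-clamped to its signal range):

```rust
/// Narrow-range quantization following the BT.709/BT.2020 pattern:
/// luma maps [0.0, 1.0] onto 16..=235 scaled by 2^(bits - 8), and
/// chroma maps [-0.5, 0.5] onto 16..=240 scaled likewise, centered
/// on 128 * 2^(bits - 8). Illustrative sketch; `bits` must be >= 8
/// and the input must already be within its legal signal range.
fn quantize_luma(y: f64, bits: u32) -> u16 {
    let scale = (1u32 << (bits - 8)) as f64;
    ((219.0 * y + 16.0) * scale).round() as u16
}

fn quantize_chroma(c: f64, bits: u32) -> u16 {
    let scale = (1u32 << (bits - 8)) as f64;
    ((224.0 * c + 128.0) * scale).round() as u16
}
```

For 10 bits this gives the familiar 64..=940 luma and 64..=960 chroma code ranges.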
For BT.2100, only the HLG (Hybrid Log-Gamma) transfer method has been added, and only its scene-signal EOTF. The PQ scene-signal (recording and editing) transfer, as well as the display-referred side of HLG, use an extended model of the signal pipeline that modifies the signal to be appropriate for the viewing environment. In the former case no easily reversible encoding is given, while the latter depends on many parameters. This is not supported by the `TransferFn` trait, which does not permit such state. There are replacement values for when the viewing environment is unknown, but I wanted to avoid the temptation to guess and would rather get some feedback/directions/think about this some more. These would be required for producing display luminance only, a specific use case that's not part of the color editing pipeline anyway.

Closes: #121