Filmic v6 #10976
Conversation
Force-pushed c5e159e to 8f741da
|
Got a build error on macOS:
|
@MStraeten fixed in 4039412. @flannelhead 1947b6d adds support for the output profile. I bet it's simpler than what you thought ;-)
|
Now: … might be declared before line 2251. Addendum: did a successful build on Windows mingw64 - it seems macOS is a bit more sensitive to this ;)
|
Can you please explain the logic of …? Broadly speaking, I see it is an optimization problem: when a point is outside a given domain, find a point within the domain that is closest under some distance metric. My reading of the code shows an iterative algorithm that looks like some kind of fixed-point iteration, which stops when the point ends up in the domain. But I don't understand why it should have desirable properties, e.g. is it even monotonic? Does it map to an optimization problem, and if yes, what's the distance metric?
|
@tpapp - it's gamut mapping with variant L. Gamut mapping with invariant L is in #10975. Both are simplified versions of the generic gamut mapping described in https://bottosson.github.io/posts/gamutclipping/
|
@aurelienpierre - I'll try to do a deeper review and test during this week. However, at first glance this looks like a very interesting and clever approach :) I also like the fact that you preserved the saturation control.

One initial question, however: is there anything in place to make sure that out-of-gamut pixels land exactly at the boundary? As far as I can see, the iteration could in principle undershoot. Like @tpapp, I would be interested to see some example trajectories of pixels in Yc during the mapping iteration.

I'm thinking that maybe this method could also be used as a last step after applying the mapping in #10975 - that method yields an approximate result because the gamut boundary data is sampled at finite intervals and smoothed for antialiasing, so there needs to be something to ensure the final result is strictly in gamut.

In addition, this PR does its work in a linear color space while #10975 does it in a perceptually uniform color space, so the color quantities involved are really quite different. It will surely be interesting to compare these approaches in practice.
|
@johnny-bit: The transformations in that blog post seem to be closed form, not iterative. Conceptually, let …, where … The algorithm in the code looks something like … I do not immediately see that this is monotonic, let alone continuous, or even guaranteed to be …
|
@tpapp There is no guarantee of monotonicity, but I don't think we need one. Take the usual RGB clipping approach: it works great as far as gradient preservation goes; the problem is that hue is not preserved. Basically, we use the clipped RGB as the closest in-gamut color estimate, update its hue to the original and keep its chroma. But since the hue has changed, the maximum chroma available in gamut has changed too, so there is no guarantee that this estimate is actually in gamut. Iterating over this empirically converges in about 4 iterations. If it doesn't, we clip it, and anyway the final clipped version will be very close in hue to the original, so it's safe-ish regarding color difference and hue consistency.

But that's for the upper cone of the gamut volume. Since the first commit, I have forced it to constant luminance, because the non-constant-luminance approaches, while giving better saturation, can lead to colored light sources getting darkened more than their surroundings, which is a cognitive dissonance.

For the lower gamut cone, we do things a bit differently. The story is that high-chroma, low-luminance colors are typically bounce light from a colored light source (most of the time, blue light). I would invite you to be careful about the examples given by Ottosson, which are (mostly) reflective colors with chroma boosted artificially in software. Reflective surfaces are rarely a problem IRL, since sRGB encompasses more than 95% of reflective spectra. Shit comes from lasers and neon lights, aka emissive and colored, which have high radiometric energy and high perceptual chroma, but can still be recorded as low luminance because of the flaws of the human tristimulus. So, for the bottom cone, we allow a brightening in order to allow a higher chroma, and get a sense of that colored light spilling on dark surfaces. Same as before: even if it doesn't converge after the 8 allowed iterations, the result of the final per-channel clipping should be very close to the original hue.

The method is heuristic more than algebraic. But the results don't exhibit any artifacts, which would suggest that having a per-pixel logic, perhaps even non-monotonic, creates some random variations that actually help dither the mapping and hide it.
|
One point: I've noticed that v6 as of 5ced2e5 will undershoot the white point on a purely gray area.
Expected: sky at 255, 255, 255; got substantially less. Target white in display is at 100%. Happens with all 'preserve chrominance' options. Note: I noticed that the area was reported as not quite gray in the picker after raising the exposure, but I didn't research it further, as it seems unrelated to this.
I gave this a read and this seems a very nice approach in that it preserves the luminance Y. That might be beneficial in many cases.
I left a few comments (on the CPU path, but most of them apply also to the OpenCL path, of course). There are a few typos in comments I spotted, a couple of optimization ideas that can be left for later, and then some questions about the iterative scheme for finding the in-gamut color. I could look into finding a more exact closed-form or numerical solution soon, as the RGB <-> Ych conversions don't look particularly nasty math-wise.
From brief testing, this method does seem to yield somewhat different results than mine at #10975 - and that's a good thing since users will have a choice between the two depending on the need.
// Check if the color fits in Yrg and LMS cone space
// clip chroma at constant hue and luminance otherwise

// Do a test conversion to Yrg
(optimization, can be done later)
Now that this is going to be code shared by many modules, I suppose optimizing it a bit wouldn't hurt. You can skip the Ych part entirely and do the whole thing in Yrg. See an optimization I proposed earlier: 98f57a1
The sinf and cosf calls might be pretty expensive on the CPU, so it makes sense to get rid of those. However, I suggest doing this after this PR is otherwise handled and merged and there's some test coverage, including an integration test. That makes it easier to optimize with confidence.
Yrg_to_LMS(Yrg, LMS);

// go from CIE LMS 2006 to pipeline RGB
dot_product(LMS, matrix, out);
(optimization)
It might be good to start using dt_apply_transposed_color_matrix() here instead (see #10963), but this can also be handled quite easily at a later time.
Yeah, I would prefer to lay out the logic here, and deal with optimizations later.
src/iop/filmicrgb.c
Outdated
// Perform exposure correction to fit back in gamut
// This gives us the closest in-gamut pixel at same saturation.
dt_aligned_pixel_t RGB_darkened = { 0.f };
for_each_channel(c, aligned(RGB_darkened)) RGB_darkened[c] = RGB_in[c] / max_pix;
As far as I understand, this normalizes the greatest RGB component to 1. Wouldn't it make sense to normalize to display_white instead? I.e. this would become:
RGB_darkened[c] = display_white * RGB_in[c] / max_pix;
Not necessarily. We are in display RGB, so the gamut is bounded in [0; 1] in the 3 dimensions.
What about cases where display_white > 1, i.e. some future HDR displays? After all, the while loop
while(max_pix > display_white && iter < max_iter)
{
  ...
}
indeed loops until each RGB component is below display_white.
Right. The part I'm not sure of is how HDR RGB is encoded. I think it's still encoded between 0 and 1, and then 1 is mapped to 1023 in 10 bits or 4095 in 12 bits. It's just that the meaning of this 1 changes. But then, we have to decide where to normalize to that 1 and it's probably better to do it at the far end, so indeed it might be good to keep the display-referred scaling here (where 1 means reflective white == SDR white, and RGB can go above).
Yeah, that makes sense to me.
src/iop/filmicrgb.c
Outdated
// from the brightened projection
if(Ych_brightened[0] > CIE_Y_1931_to_CIE_Y_2006(display_black))
{
  Ych_in[0] = fmax((Ych_in[0] + Ych_clamped[0]) / 2.f, CIE_Y_1931_to_CIE_Y_2006(display_black));
Use fmaxf. However for better vectorizability, the MIN() and MAX() macros should be favoured. I think it's the -fno-finite-math-only flag in the build that causes these fmaxf and fminf to not auto-vectorize properly. Not a big deal though.
right
src/iop/filmicrgb.c
Outdated
const int user_desat = (saturation < 0.f);

chroma_final = (filmic_brightens && filmic_resat)
                   ? chroma_original // force original lower sat if brightening
This may be a personal opinion, but this seems to produce a bit of a washed-out look in the highlights. Personally I would rather preserve the saturation even in the highlights. After all, there's color balance rgb which can be used to achieve that saturation roll-off in the highlights, but if filmic does this by default, it's harder to "undo" in a color balance rgb instance which usually comes before filmic in the pixel pipe.
Ah, I see what you meant in the comment below. v5 has an explicit desaturation curve to white; v6 doesn't. This was intended to provide a better intermediate between no desaturation at all and the full-blown desaturation of the split-RGB method. Maybe we could set that to the average of chroma_original and chroma_final?
Maybe we could set that to the average of chroma_original and chroma_final ?
That could probably be a good enough middle ground. However, I would still consider the option of leaving the desaturation out entirely for the chrominance preserving modes - one could always desaturate the highlights earlier in the pipe in e.g. color balance rgb. The "add basic colorfulness" preset already does this.
I personally find that leaving chroma as-is yields oversaturated colors. Keep in mind that people come from the split-RGB world where highlights are indeed heavily desaturated (which is also consistent with film sensitometry). I would make the reverse case where you can always resaturate highlights through color balance, but that average seems a better default starting point in general.
Right, let's stick with the average for now :)
Turns out there is a simple-ish closed-form solution (if I didn't make a mistake); I'm going to try replacing the iterative scheme with it. Results later this week, time permitting.
|
This was easier than I expected. Here you go, have a test and feel free to cherry-pick if it seems good to you. I wrote down the Ych -> RGB conversion in SymPy, set each component of RGB to a given constant, and solved for the chroma. The commit includes the Python script used for deriving the equations. The gamut mapping is also changed slightly; previously the lower boundary mapping (handling negative RGB) would alter both luminance and chroma. Now it also stays at constant luminance and hue, altering only chroma.
|
OOOh that's brilliant, thanks @flannelhead! On a cursory reading, you didn't keep the brightening strategy for the negative RGB cases, did you?
Indeed. I think it can be recovered if desired (although the heuristic in your implementation probably can't be exactly replicated by the analytic equation), but keeping luminance constant also for the lower boundary mapping looked quite good to me initially. |
The idea behind brightening high-sat shadows is that they have an 80% chance of coming from blue light spills. Brightening them helps keep the balance with the surrounding surfaces.
|
Do you have a suitable test image for this and an image of a roughly desired processing result? I can try to come up with something. |
|
There is this one : https://discuss.pixls.us/t/austrian-fountain/21355 |
|
Please see: flannelhead@6a6b9de I introduced a brightening strategy similar to the original one. First mix in enough white light to the RGB to bring the negative components to display black point, then take the average of the resulting Y and input Y. Proceed with clipping chroma from there. Result using your XMP (sans the new highlight recovery algo): |
|
As another note, I would like to make a case for removing the highlights chroma rolloff I noted when reviewing. Here's an example image (mine, license CC-BY-NC-SA 4.0): P7140100.ORF.txt
With this PR, including the highlight rolloff behaviour: [screenshot]
With this PR but highlight rolloff disabled (don't force original chroma when Filmic brightened the pixel): [screenshot]
Notice that in the first one the green field in the middle is pretty washed out compared to the two subsequent ones. I would argue that users might not be happy if Filmic v6 introduced this kind of behaviour, when at least I would assume it would allow keeping more of the original colorfulness.
Got it from the code review. |
src/iop/filmicrgb.c
Outdated
// It's not saturation-invariant.
dt_aligned_pixel_t RGB_brightened = { 0.f };
for_each_channel(c, aligned(RGB_brightened))
  RGB_brightened[c] = RGB_in[c] + fabsf(min_pix);
Yes, I have been wondering about that also, and agree that it's better not to clamp individual channels in order to allow for the most saturated colors. But then, the while loop condition should also change:
while(min_pix < 0.f && iter < max_iter)
{
  ...
}
Or, in my version of the chroma clipping using closed-form equations, I should change
// Calculate the maximum chroma where RGB components still stay above black point
const float chroma_clipped_to_black = clip_chroma(matrix_out, display_black, Y, cos_h, sin_h);
to
const float chroma_clipped_to_black = clip_chroma(matrix_out, 0.f, Y, cos_h, sin_h);
src/iop/filmicrgb.c
Outdated
iter++;
}

for_each_channel(c, aligned(RGB_in)) RGB_in[c] = CLAMP(RGB_in[c], display_black, display_white);
Based on previous discussions, this clamping should change to allow for a minimum RGB component value of 0.f:
for_each_channel(c, aligned(RGB_in)) RGB_in[c] = CLAMP(RGB_in[c], 0.f, display_white);
Starting to look very good. Another minor fix related to black point is here: flannelhead@21e5bdd Don't force RGB components to |
|
I can reproduce similar "greyness" of assumedly white parts as reported by @sh1llo above. The explanation seems to be that the |
We only need to compensate up to 0. Also clamp the resulting luminance to the display luminance boundaries.
Force-pushed 6ce9508 to 9ad14ac
|
I have merged @flannelhead's proposals, rebased and fixed conflicts. This is good to go too, @TurboGit!
|
First test, and there's immediately a large improvement in the sky and in the sand in the foreground. 👍
Code looks good. Makes a huge difference when v6 is applied to concert lights with blue dominant colors. All very saturated colors seem preserved instead of becoming a white/grey spot. Really nice, thanks!
|
@piratenpanda is that area clipped in raw? If it is, I've found the following steps helpful:
|
|
@flannelhead yes, unfortunately. Thanks for the pointers, this worked quite fine at first glance. I'll try playing a bit more. |
|
Handling of those clipped areas (which are colorful if only one channel is clipped) is hard. Maybe there's some room for improvement, but it's good that v6 is merged now and gets more field testing. Other than clipped areas, I've found it to perform really well so far.
|
Changing the highlight reconstruction threshold to something like -0.5 EV produces comparable if not better results than v5. v6 indeed looks much better everywhere else.
|
|
In the past I found that if there were clipped channels in the extreme highlights, it was harder to work with them when the filmic preservation norms were used. These often seemed to shift one of the remaining channels in such a way that it could create extremes in the highlights. I found switching them off produced highlights with much less difference in the remaining channels, and they were easier to correct, but this might not be as true now... looking forward to evaluating this.









That crossed #10975 by a thread.
What's new?
OpenCL available, but there is a discrepancy between C and OpenCL that I need to find. The reference implementation is OpenCL. I'm publishing now to show @flannelhead.
The gamut mapping is much simpler than in #10975 and is done in Kirk Yrg space (linear). Hue is preserved at all costs, then we do a test projection to RGB. Out-of-gamut pixels are spotted if any coordinate exceeds [0; 1], in which case we record the Ych value of:
From the clipped color, we update the Y value (usually a bit brighter, which is desirable to retain luminance consistency of colored lights), and from the darkened color, we update the chroma estimation (chroma at same saturation and lower luminance is actually lower than original). So we march like this toward the boundary of the gamut, and iterate up to 8 times until we find it.