Filmic v6 #10976

Merged
merged 14 commits
Apr 2, 2022

Conversation

aurelienpierre
Member

@aurelienpierre aurelienpierre commented Jan 24, 2022

That crossed #10975 by a thread.

What's new?

  1. We ditch the mandatory desaturation to white, since it was too harsh in many cases and we no longer need it because…
  2. we introduce 2 steps of gamut mapping, one against the LMS cone space, one against pipeline RGB, and I'm considering mapping to output RGB directly,
  3. for the non-chrominance-preserving method (independent RGB channels), we still enforce that hue is passed along from input to output. We keep the desaturating behaviour in highlights but restore the original chroma in shadows to avoid resaturating them.
  4. Bottom line: all methods now preserve hue. Chrominance-preserving norms preserve saturation as well.

OpenCL is available, but there is a discrepancy between the C and OpenCL paths that I still need to track down. The reference implementation is the OpenCL one. I'm publishing now to show @flannelhead.

The gamut mapping is much simpler than in #10975 and is done in Kirk's Yrg space (linear). Hue is preserved at all costs, then we do a test projection into RGB. Out-of-gamut colors are detected when any coordinate falls outside [0; 1]. In that case, we record the Ych values of:

  1. the original color,
  2. the clipped color (the closest in-gamut color, but at a different hue),
  3. the darkened color (the closest color at the same saturation).
    From the clipped color, we update the Y value (usually a bit brighter, which is desirable to retain the luminance consistency of colored lights), and from the darkened color, we update the chroma estimate (chroma at the same saturation and lower luminance is actually lower than the original). We march like this toward the gamut boundary, iterating up to 8 times until we find it.

@aurelienpierre aurelienpierre force-pushed the filmic-v6 branch 2 times, most recently from c5e159e to 8f741da Compare January 24, 2022 14:57
@MStraeten
Collaborator

got a build error on osx:

[ 82%] Building C object lib/darktable/plugins/CMakeFiles/filmicrgb.dir/introspection_filmicrgb.c.o
In file included from /Users/me/src/darktable/build/lib/darktable/plugins/introspection_filmicrgb.c:191:
/Users/me/src/darktable/src/iop/filmicrgb.c:2323:14: fatal error: variable 'input_matrix_cl' is used uninitialized whenever 'if' condition is true [-Wsometimes-uninitialized]
          if(err != CL_SUCCESS) goto error;
             ^~~~~~~~~~~~~~~~~
/Users/me/src/darktable/src/iop/filmicrgb.c:2443:32: note: uninitialized use occurs here
  dt_opencl_release_mem_object(input_matrix_cl);
                               ^~~~~~~~~~~~~~~
/Users/me/src/darktable/src/iop/filmicrgb.c:2323:11: note: remove the 'if' if its condition is always false
          if(err != CL_SUCCESS) goto error;
          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1 error generated.

@aurelienpierre
Member Author

@MStraeten fixed in 4039412.

@flannelhead 1947b6d adds support for the output profile. I bet it's simpler than what you thought ;-)

@MStraeten
Collaborator

MStraeten commented Jan 24, 2022

now:

In file included from /Users/me/src/darktable/build/lib/darktable/plugins/introspection_filmicrgb.c:191:
/Users/me/src/darktable/src/iop/filmicrgb.c:2251:6: fatal error: variable 'input_matrix_cl' is used uninitialized whenever 'if' condition is  true [-Wsometimes-uninitialized]
  if(err != CL_SUCCESS) goto error;
     ^~~~~~~~~~~~~~~~~
/Users/me/src/darktable/src/iop/filmicrgb.c:2482:32: note: uninitialized use occurs here
  dt_opencl_release_mem_object(input_matrix_cl);
                               ^~~~~~~~~~~~~~~
/Users/me/src/darktable/src/iop/filmicrgb.c:2251:3: note: remove the 'if' if its condition is always false
  if(err != CL_SUCCESS) goto error;
  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/Users/me/src/darktable/src/iop/filmicrgb.c:2262:3: note: variable 'input_matrix_cl' is declared here
  cl_mem input_matrix_cl = dt_opencl_copy_host_to_device_constant(devid, 12 * sizeof(float), input_matrix);
  ^
1 error generated.
  cl_mem input_matrix_cl = NULL;
  cl_mem output_matrix_cl = NULL;
  cl_mem export_input_matrix_cl = NULL;
  cl_mem export_output_matrix_cl = NULL;

These might need to be declared before line 2251.

Addendum: I did a successful build on Windows mingw64; it seems osx is a bit more sensitive to this ;)
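For reference, the pattern the compiler is asking for can be shown with a dummy stand-in (this is not the real OpenCL API; `cl_mem` and the release helper are mocked so the snippet is self-contained): NULL-initialize every handle before the first `goto error`, so the error path can release them unconditionally.

```c
#include <stddef.h>

typedef void *cl_mem;        /* stand-in for the real OpenCL handle type */
static int released_nulls = 0;

static void release_mem_object(cl_mem m)
{
  if(m == NULL) { released_nulls++; return; } /* releasing NULL is a no-op */
  /* real code would free the device buffer here */
}

static int process(int fail_early)
{
  int err = fail_early ? -1 : 0;

  /* declare AND initialize before any possible `goto error` */
  cl_mem input_matrix_cl = NULL;
  cl_mem output_matrix_cl = NULL;

  if(err != 0) goto error;   /* early exit: the handles are still NULL */

  /* ... allocate input_matrix_cl / output_matrix_cl, run kernels ... */

error:
  release_mem_object(input_matrix_cl);  /* safe even if never allocated */
  release_mem_object(output_matrix_cl);
  return err;
}
```

With the declarations moved up, the `-Wsometimes-uninitialized` warning goes away because the cleanup path can never see an indeterminate handle.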

@tpapp

tpapp commented Jan 25, 2022

Can you please explain the logic of gamut_check_RGB in a bit more detail?

Broadly speaking, I see it is an optimization problem: when a point is outside a given domain, find a point within the domain that is closest under some distance metric.

My reading of the code shows an iterative algorithm that looks like some kind of fixed-point iteration, which stops when the point ends up in the domain. But I don't understand why it should have desirable properties, e.g. is it even monotonic? Does it map to an optimization problem, and if yes, what's the distance metric?

@johnny-bit
Member

@tpapp - it's gamut mapping with varying L. Gamut mapping with invariant L is in #10975

Both are simplified versions of generic gamut mapping described in https://bottosson.github.io/posts/gamutclipping/

@flannelhead
Contributor

flannelhead commented Jan 25, 2022

@aurelienpierre - I'll try to do a deeper review and test during this week. However at a first glance, this looks like a very interesting and clever approach :) I also like the fact that you preserved the saturation control.

One initial question however - is there anything in place to make sure that out-of-gamut pixels land exactly at the boundary? As far as I see, the iteration could in principle undershoot. Like @tpapp, I would be interested to see some example trajectories of pixels in Yc during the mapping iteration.

I'm thinking that maybe this method could be also used as a last step after applying the mapping in #10975 - that method yields an approximate result because the gamut boundary data is sampled at finite intervals and smoothed for antialiasing. There needs to be something to ensure the final result is strictly in gamut.

@flannelhead
Contributor

@tpapp - it's gamut mapping with varying L. Gamut mapping with invariant L is in #10975

Both are simplified versions of generic gamut mapping described in https://bottosson.github.io/posts/gamutclipping/

In addition, this PR does its work in a linear color space while #10975 does it in a perceptually uniform color space, so the color quantities involved are really quite different. It will surely be interesting to compare these approaches in practice.

@tpapp

tpapp commented Jan 25, 2022

@johnny-bit: The transformations in that blog post seem to be closed form, not iterative.

Conceptually, let G denote the gamut of the target color space. For a color x ∉ G, I would think of gamut mapping as

$$f(x) = \argmin_{y ∈ G} d(x, y)$$

where d is some distance metric between colors.

The algorithm in the code looks something like

i = 0
while x ∉ G && i < maxiter
    x = g(x)
    i += 1
end

I do not immediately see that this is monotonic, let alone continuous, or even guaranteed to be ∈ G (if this is obvious, sorry for the noise).

@aurelienpierre
Member Author

@tpapp There is no guarantee of monotonicity, but I don't think we need one.

Take the usual RGB clipping approach: it works great as far as gradient preservation goes. The problem is that hue is not preserved.

Basically, we use the clipped RGB as the closest in-gamut estimate of the color, update its hue to the original and keep its chroma. But since the hue has changed, the maximum chroma available in gamut has changed too, so there is no guarantee that this estimate is actually in gamut. Iterating over this empirically converges in about 4 iterations. If it doesn't, we clip, and in any case the final clipped version will be very close in hue to the original, so it's safe-ish regarding color difference and hue consistency.

But that's for the upper cone of the gamut volume. Since the first commit, I have forced it to constant luminance, because the non-constant-luminance approaches, while giving better saturation, can lead to colored light sources being darkened more than their surroundings, which is a cognitive dissonance.

For the lower gamut cone, we do things a bit differently.

The story is that high-chroma, low-luminance colors are typically bounce light from a colored light source (most of the time, blue light). I would invite you to be careful about the examples given by Ottosson, which are (mostly) reflective colors with chroma boosted artificially in software. Reflective surfaces are rarely a problem in real life, since sRGB encompasses more than 95% of reflective spectra. The trouble comes from lasers and neon lights, i.e. emissive and colored sources, which have high radiometric energy and high perceptual chroma, but can still be recorded at low luminance because of the flaws of the human tristimulus.

So, for the bottom cone, we allow a brightening in order to allow a higher chroma, and to get a sense of that colored light spilling onto dark surfaces. As before, even if it doesn't converge within the 8 allowed iterations, the result of the final per-channel clipping should be very close to the original hue.

The method is heuristic rather than algebraic. But the results don't exhibit any artifacts, which suggests that having per-pixel logic, perhaps even non-monotonic, creates some random variation that actually helps dither the mapping and hide it.

@sh1llo

sh1llo commented Jan 31, 2022

One point: I've noticed that v6 as of 5ced2e5 undershoots the white point on a purely gray area.
To reproduce:

  • I picked the white balance off the sky, so it is gray.
  • exposure +3 (the image was previously correctly exposed)
  • highlight reconstruction threshold +6 in filmic, so it doesn't interfere with the result
  • white point 1EV

Expected: sky at 255, 255, 255; got substantially less. Target white in display is at 100%. This happens with all 'preserve chrominance' options.
I expect this will make bright specular highlights look perceptibly darker if the surrounding area is saturated.

Note: I noticed that the area was reported as not quite gray in the picker after raising the exposure, but I didn't research it further, as it seems unrelated to this.

image

@TurboGit TurboGit added this to the 4.0 milestone Feb 2, 2022
@TurboGit TurboGit added documentation-pending a documentation work is required feature: redesign current features to rewrite release notes: pending labels Feb 2, 2022
Contributor

@flannelhead flannelhead left a comment

I gave this a read and this seems a very nice approach in that it preserves the luminance Y. That might be beneficial in many cases.

I left a few comments (on the CPU path, but most of them apply to the OpenCL path as well, of course). There are a few typos in comments that I spotted, a couple of optimization ideas that can be left for later, and then some questions about the iterative scheme for finding the in-gamut color. I could look into finding a more exact closed-form or numerical solution soon, as the RGB <-> Ych conversions don't look particularly nasty math-wise.

From brief testing, this method does seem to yield somewhat different results than mine at #10975 - and that's a good thing since users will have a choice between the two depending on the need.

// Check if the color fits in Yrg and LMS cone space
// clip chroma at constant hue and luminance otherwise

// Do a test conversion to Yrg
Contributor

(optimization, can be done later)

Now that this is going to be code shared by many modules, I suppose optimizing it a bit wouldn't hurt. You can skip the Ych part entirely and do the whole thing in Yrg. See an optimization I proposed earlier: 98f57a1

The sinf and cosf calls might be pretty expensive on the CPU, so it makes sense to get rid of those. However, I suggest doing this after this PR is otherwise handled and merged and there's some test coverage, including an integration test. That makes it easier to optimize with confidence.

Yrg_to_LMS(Yrg, LMS);

// go from CIE LMS 2006 to pipeline RGB
dot_product(LMS, matrix, out);
Contributor

(optimization)

might be good to start using dt_apply_transposed_color_matrix() here instead (see #10963), but this can also be quite easily handled at a later time.

Member Author

Yeah, I would prefer to layout the logic here, and deal with optimizations later.

// Perform exposure correction to fit back in gamut
// This gives us the closest in-gamut pixel at same saturation.
dt_aligned_pixel_t RGB_darkened = { 0.f };
for_each_channel(c, aligned(RGB_darkened)) RGB_darkened[c] = RGB_in[c] / max_pix;
Contributor

As far as I understand, this normalizes the greatest RGB component to 1. Wouldn't it make sense to normalize to display_white instead? I.e. this would become:

RGB_darkened[c] = display_white * RGB_in[c] / max_pix;

Member Author

Not necessarily. We are in display RGB, so the gamut is bounded to [0; 1] in all 3 dimensions.

Contributor

What about cases where display_white > 1, i.e. some future HDR displays? After all, the while loop

while(max_pix > display_white && iter < max_iter)
{
  ...
}

indeed loops until each RGB component is below display_white.

Member Author

Right. The part I'm not sure of is how HDR RGB is encoded. I think it's still encoded between 0 and 1, and then 1 is mapped to 1023 in 10 bits or 4095 in 12 bits; it's just that the meaning of this 1 changes. But then we have to decide where to normalize to that 1, and it's probably better to do it at the far end, so indeed it might be good to keep the display-referred scaling here (where 1 means reflective white == SDR white, and RGB can go above it).

Contributor

Yeah, that makes sense to me.

// from the brightnened projection
if(Ych_brightened[0] > CIE_Y_1931_to_CIE_Y_2006(display_black))
{
Ych_in[0] = fmax((Ych_in[0] + Ych_clamped[0]) / 2.f, CIE_Y_1931_to_CIE_Y_2006(display_black));
Contributor

Use fmaxf. However, for better vectorizability, the MIN() and MAX() macros should be favoured. I think it's the -fno-finite-math-only flag in the build that causes these fmaxf and fminf calls not to auto-vectorize properly. Not a big deal though.

Member Author

right

const int user_desat = (saturation < 0.f);

chroma_final = (filmic_brightens && filmic_resat)
? chroma_original // force original lower sat if brightening
Contributor

This may be a personal opinion, but this seems to produce a bit of a washed-out look in the highlights. Personally I would rather preserve the saturation even in the highlights. After all, there's color balance rgb which can be used to achieve that saturation roll-off in the highlights, but if filmic does this by default, it's harder to "undo" in a color balance rgb instance which usually comes before filmic in the pixel pipe.

Member Author

Ah, I see what you meant in the comment below. v5 has an explicit desaturation curve toward white; v6 doesn't. This was intended to provide a better intermediate between no desaturation at all and the full-blown desaturation of the split-RGB method. Maybe we could set that to the average of chroma_original and chroma_final?

Contributor

Maybe we could set that to the average of chroma_original and chroma_final ?

That could probably be a good enough middle ground. However, I would still consider the option of leaving the desaturation out entirely for the chrominance preserving modes - one could always desaturate the highlights earlier in the pipe in e.g. color balance rgb. The "add basic colorfulness" preset already does this.

Member Author

I personally find that leaving chroma as-is yields oversaturated colors. Keep in mind that people come from the split-RGB world where highlights are indeed heavily desaturated (which is also consistent with film sensitometry). I would make the reverse case: you can always resaturate highlights through color balance, but that average seems a better default starting point in general.

Contributor

Right, let's stick with the average for now :)

@flannelhead
Contributor

I could look into finding a more exact closed-form or numerical solution soon as the RGB <-> Ych conversions don't look particularly nasty math-wise.

Turns out there is a simple-ish closed-form solution (if I didn't make a mistake); I'm going to try replacing the iterative scheme with that. Results later this week, if time permits.

@flannelhead
Contributor

This was easier than I expected. Here you go, give it a test and feel free to cherry-pick if it seems good to you.
flannelhead@9a66488

I wrote down the Ych -> RGB conversion in SymPy, set each component of RGB to a given constant k and solved for chroma. There's an unambiguous solution. The final chroma is picked as the greatest value for which all RGB components are within bounds. I tried to do the same in Oklab for my gamut mapping implementation as well, but the closed-form solutions looked horrible (and there are three of them for each equation, due to the cubic function in the transformation). Here, the Yrg color space is much simpler, so the solution can actually be used in code.

The commit includes the Python script used for deriving the equations.

The gamut mapping is also changed slightly: previously, the lower boundary mapping (handling negative RGB) would alter both luminance and chroma. Now it also stays at constant luminance and hue, altering only chroma.

@aurelienpierre
Member Author

OOOh that's brilliant, thanks @flannelhead! On a cursory reading, you didn't keep the brightening strategy for the negative RGB cases, did you?

@flannelhead
Contributor

flannelhead commented Feb 8, 2022

On cursory reading, you didn't keep the brightening strategy for the negative RGB cases, did you ?

Indeed. I think it can be recovered if desired (although the heuristic in your implementation probably can't be exactly replicated by the analytic equation), but keeping luminance constant also for the lower boundary mapping looked quite good to me initially.

@aurelienpierre
Member Author

Indeed. I think it can be recovered if desired (although the heuristic in your implementation probably can't be exactly replicated by the analytic equation), but keeping luminance constant also for the lower boundary mapping looked quite good to me initially.

The idea behind brightening high-saturation shadows is that they have an 80% chance of coming from blue light spills. Brightening them helps keep the balance with the surrounding surfaces.

@flannelhead
Contributor

Do you have a suitable test image for this and an image of a roughly desired processing result? I can try to come up with something.

@aurelienpierre
Member Author

@flannelhead
Contributor

flannelhead commented Feb 9, 2022

Please see: flannelhead@6a6b9de

I introduced a brightening strategy similar to the original one. First, mix enough white light into the RGB to bring the negative components up to the display black point, then take the average of the resulting Y and the input Y. Proceed with clipping chroma from there. Result using your XMP (sans the new highlight recovery algo):

20200906_Wien_1636

@flannelhead
Contributor

flannelhead commented Feb 9, 2022

As another note, I would like to make a case for removing the highlights chroma rolloff that I noted when reviewing.

Here's an example image (mine, license CC-BY-NC-SA 4.0): P7140100.ORF.txt
XMP: P7140100.ORF.xmp.txt

With this PR, including the highlight rolloff behaviour:
P7140100_02

With this PR but highlight rolloff disabled (don't force original chroma when Filmic brightened the pixel):
P7140100_03

For comparison, Filmic v5:
P7140100_04

Notice that in the first one the green field in the middle is pretty washed out compared to the two subsequent ones. I would argue that users might not be happy if Filmic v6 introduced this kind of behaviour, when at least I would assume it should allow keeping more of the original colorfulness.

@aurelienpierre
Member Author

aurelienpierre commented Feb 9, 2022

With this PR but highlight rolloff disabled (don't force original chroma when Filmic brightened the pixel):

What method (norm) is that? We force the original chroma only on the split RGB channels method, which desaturates highlights.

Got it from the code review.

// It's not saturation-invariant.
dt_aligned_pixel_t RGB_brightened = { 0.f };
for_each_channel(c, aligned(RGB_brightened))
RGB_brightened[c] = RGB_in[c] + fabsf(min_pix);
Contributor

Yes, I have been wondering about that also, and I agree that it's better not to clamp individual channels, in order to allow for the most saturated colors. But then the while loop condition should also change:

while(min_pix < 0.f && iter < max_iter)
{
  ...
}

Or, in my version of the chroma clipping using closed-form equations, I should change

// Calculate the maximum chroma where RGB components still stay above black point
const float chroma_clipped_to_black = clip_chroma(matrix_out, display_black, Y, cos_h, sin_h);

to

const float chroma_clipped_to_black = clip_chroma(matrix_out, 0.f, Y, cos_h, sin_h);


iter++;
}

for_each_channel(c, aligned(RGB_in)) RGB_in[c] = CLAMP(RGB_in[c], display_black, display_white);
Contributor

Based on previous discussions, this clamping should change to allow for a minimum RGB component value of 0.f:

for_each_channel(c, aligned(RGB_in)) RGB_in[c] = CLAMP(RGB_in[c], 0.f, display_white);

@flannelhead
Contributor

Starting to look very good. Another minor fix related to black point is here: flannelhead@21e5bdd

Don't force RGB components to display_black but to 0 instead, now that we allow individual components to have a value of 0 or greater. Also clamp the brightened luminance to display_black, just to be sure.

@flannelhead
Contributor

flannelhead commented Feb 12, 2022

I can reproduce a similar "greyness" of supposedly white parts, as reported by @sh1llo above.

The explanation seems to be that the Yrg space used in gamut mapping uses CIE 2006 Y (and, in addition, is D65-balanced), whereas the pipeline uses CIE 1931 Y (D50-balanced, although chromatic adaptation plays only a minor role). The conversion between these is not a simple linear scaling, as currently implemented, but actually depends on the current chromaticity. I'll fix this, but will probably need to reconsider where to clamp. Perhaps it is best done after all the chroma reduction.

@aurelienpierre
Member Author

I have merged @flannelhead's propositions, rebased and fixed conflicts. This is good to go too, @TurboGit!

@trougnouf
Contributor

First test and there's immediately a large improvement in the sky:
v5: https://nc.trougnouf.com/index.php/s/FfqHZ792KReK5Zi
v6:
scrn-2022-04-01-14-32-50

@janfri
Contributor

janfri commented Apr 1, 2022

First test and there's immediately a large improvement in the sky

and in the sand in the foreground. 👍

Member

@TurboGit TurboGit left a comment

Code looks good. It makes a huge difference when v6 is applied to concert lights with dominant blue colors. All the very saturated colors seem preserved instead of becoming a white/grey spot. Really nice, thanks!

@TurboGit TurboGit merged commit da77339 into darktable-org:master Apr 2, 2022
@piratenpanda
Contributor

piratenpanda commented Apr 2, 2022

Is there another workflow involved when working with backlit subjects? No matter what I try, I can't fix this with v6. I guess I am missing something here. Edit: this is using OpenCL, but the result is the same without it.

v5:
v5

v6:
v6

@flannelhead
Contributor

@piratenpanda is that area clipped in raw? If it is, I've found the following steps helpful:

  1. disable the highlight reconstruction module
  2. lower the highlight reconstruction threshold in filmic, and possibly lower the white point if necessary
  3. you can also play with the filmic highlight reconstruction parameters, or enable one of the other modes (LCh or reconstruct color, rather than clip highlights) in the highlight reconstruction module

@piratenpanda
Contributor

@flannelhead yes, unfortunately. Thanks for the pointers, this worked quite fine at first glance. I'll try playing a bit more.

@flannelhead
Contributor

Handling those clipped areas (which are colorful if only one channel is clipped) is hard. Maybe there's some room for improvement, but it's good that v6 is merged now and gets more field testing. Other than clipped areas, I've found it to perform really well so far.

@piratenpanda
Contributor

Changing the highlight reconstruction threshold to something like -0.5 EV produces comparable if not better results than v5. v6 indeed looks much better everywhere else.

@ralfbrown
Collaborator

@piratenpanda is that area clipped in raw? If it is, I've found the following steps helpful:

1. disable the highlight reconstruction module

Alternatively, increase the clipping threshold in the highlight reconstruction module to account for the additional white-balance shift from the color calibration module's white adaptation. Highlight reconstruction assumes that pure white never changes in the rest of the pipe, but with the modern white-balance workflow it gets blue-shifted data as input (6500K white point), which color calibration then usually moves back toward yellow; that's why clipped areas appear cyan in the modern workflow instead of magenta. I find a threshold of 1.15 to be a good starting point for my Nikon D7x00 images.

@todd-prior

In the past I found that if there were clipped channels in the extreme highlights, it was harder to work with them when the filmic preservation norms were used. These often seemed to shift one of the remaining channels in such a way that it could create extremes in the highlights. I found that switching them off produced highlights with much less difference between the remaining channels, and they were easier to correct, but this might not be as true now... looking forward to evaluating this.

@elstoc elstoc added documentation-complete needed documentation is merged in dtdocs and removed documentation-pending a documentation work is required labels May 24, 2022