
filmic RGB v4 update : new color science #4800

Merged (8 commits) on Jun 8, 2020

Conversation

@aurelienpierre (Member) commented Apr 23, 2020

  • add a new desaturation method based on the spectral purity of the light emission, independent of XYZ space, plus a basic gamut mapping in pipeline RGB space following the same strategy,
  • change the default saturation strategy: the 0% and 100% luminance bounds are forced to 0% saturation, but the midtones are resaturated instead,
  • add a highlights reconstruction method in wavelet space, bounded by the scene white value, to bloom highlights or recover details with 3 different strategies,
  • hide the middle-grey settings by default and set them to 18.45% instead; a box needs to be checked in the "options" tab to get them back (these have confused too many users and overlap with the exposure setting anyway),
  • make the display gamma auto-adjustment optional (checkbox in the "options" tab),
  • allow 2 different curvatures for the filmic curve toe and shoulder: hard (4th-order polynomial), with more tonal compression, and soft (3rd-order polynomial), with less tonal compression.

NOTES:

  • v3 algos (dt 3.0-3.0.2) are still available as an option.

WARNINGS:

  • older presets are NOT imported at this time,
  • OpenCL kernels are NOT available at this time for the v4 color science and highlights recovery.

DEMO (no additional highlight reconstruction or color adjustments):
filmic v3 (2019):
Shoot Minh-Ly Toronto-0034-_DSC0242_12

filmic v4 (2020):
Shoot Minh-Ly Toronto-0034-_DSC0242_13

filmic v4 without highlights reconstruction:
6Q1B0045_02

filmic v4 with highlights reconstruction:
6Q1B0045_01

filmic v4 without highlights reconstruction:
DSC_0552_06

filmic v4 with highlights reconstruction:
DSC_0552_07

@aurelienpierre aurelienpierre marked this pull request as draft April 23, 2020 16:41
@GLLM commented Apr 23, 2020

Looks promising on the samples @aurelienpierre!
Can't wait to test it :)
Thanks for the work!

@TurboGit (Member)

Haven't you inverted the first two pictures?

@aurelienpierre (Member Author)

@TurboGit no, why?

@TurboGit (Member)

Because I find the highlight better recovered on the first than on the second. No?

@TurboGit TurboGit self-requested a review April 23, 2020 21:00
@TurboGit TurboGit added the feature: enhancement current features to improve label Apr 23, 2020
@TurboGit TurboGit added this to the 3.2 milestone Apr 23, 2020
@aurelienpierre (Member Author)

@TurboGit the highlight skin patch on the first has a magenta-to-yellow gradient, which tells you that some individual channel clipping is going on. The second one has less texture, but a more natural color and less magenta fringing.

@TurboGit TurboGit added wip pull request in making, tests and feedback needed difficulty: hard big changes across different parts of the code base priority: medium core features are degraded in a way that is still mostly usable, software stutters scope: image processing correcting pixels labels Apr 24, 2020
@TurboGit (Member)

Ok, I see. Thanks for the explanations.

@TurboGit (Member)

Error reported by CI:

/build/darktable/src/tests/unittests/iop/test_filmicrgb.c:375:11: error: implicit declaration of function ‘filmic_desaturate’; did you mean ‘filmic_desaturate_v2’? [-Werror=implicit-function-declaration]
  375 |           filmic_desaturate(p[0], sigma_toe, sigma_shoulder, saturation);
      |           ^~~~~~~~~~~~~~~~~
      |           filmic_desaturate_v2

@GrahamByrnes (Contributor)

I'll just applaud a couple of the goals: setting the default saturation back to zero, and changing the curvature of the toe... I was almost always using the tonecurve as a post-hoc fix. Are you planning to make the choice of 3rd vs 4th order separately selectable for the toe and shoulder?

@aurelienpierre (Member Author)

Error reported by CI:

@TurboGit I don't understand what the CI did. If you search my code, there is no filmic_desaturate function in the file, only filmic_desaturate_v1 and filmic_desaturate_v2.

Are you planning to make the choice of 3rd vs 4th order separately selectable for the toe and shoulder?

It's already there:
Capture d’écran du 2020-04-23 21-18-55

@parafin (Member) commented Apr 24, 2020

@aurelienpierre, you need to fix or disable the unit test.

@aurelienpierre (Member Author)

ah, I forgot these. Thanks @parafin

@aurelienpierre (Member Author) commented Apr 24, 2020

The commit 0275667 will break previous parameters structure compared to c8b8fa2.

@burrima (Contributor) left a comment

Looks good to me, though I did not analyze the correctness of all the code; that will take more time.

#ifdef _OPENMP
#pragma omp declare simd
#endif
static inline float fmaxabsf(const float a, const float b)
Contributor:

Do you have to write this code locally? I would suggest some library functions lying around somewhere.

Member Author:

From experience, there is little chance to reuse exactly that function in the future, with the exact same optimization. Code sharing is nice, but it also makes people lazy when it comes to low-level optimization. All in all, you still need to rewrite such functions to take into account data locality, vector vs. scalar variants, vector alignment and such.

Contributor:

Well, the function parameters clearly state that it does not take vectors. But yes, C has no function overloading and no templates, and your functions are so small that they don't cost much to compile.

mask[k / ch] = weight;

// at x = 4, the sigmoid produces opacity = 5.882 %.
// any x > 4 will produce neglictible changes over the image,
Contributor:

typo - negligible

Member:

Yes, would be nice to fix before merging.

clipped += (4.f > argument);
}

// If clipped area is < 9 pixels, recovery is not worth the computational cost, so skip it.
Contributor:

<= 9 pixels.
Why exactly 9?

Member Author:

9 pixels defines a square 3×3 patch of immediate neighbours. If you have fewer clipped pixels than that, it's likely that they are barely clipped, and on only one channel, so any downsampling interpolation algo at export should fill that hole as a side effect of its low-pass filtering. Bottom line: we don't need to unroll fancy wavelets for so little.
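The early-exit logic described above can be sketched as follows. This is a hedged illustration with hypothetical helper names (`count_clipped`, `needs_reconstruction`), not darktable's actual code: count the pixels whose norm exceeds the clipping threshold, and skip the wavelet reconstruction when fewer than 9 pixels (a 3×3 patch) are clipped.

```c
#include <stddef.h>

/* Hypothetical sketch of the "skip if < 9 clipped pixels" heuristic
 * discussed above; names and signatures are illustrative only. */
static size_t count_clipped(const float *const norms, const size_t n, const float threshold)
{
  size_t clipped = 0;
  for(size_t k = 0; k < n; k++)
    clipped += (norms[k] > threshold); /* branchless accumulation */
  return clipped;
}

static int needs_reconstruction(const float *const norms, const size_t n, const float threshold)
{
  /* fewer than a 3x3 patch of clipped pixels: not worth the wavelets */
  return count_clipped(norms, n, threshold) >= 9;
}
```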

Contributor:

Makes sense to me.

const float scale = roi_in->scale / piece->iscale;
const size_t size = MAX(piece->buf_in.height * piece->iscale, piece->buf_in.width * piece->iscale);
const int scales = floorf(log2f((2.0f * size * scale / ((fsize - 1) * fsize)) - 1.0f));
return CLAMP(scales, 1, MAX_NUM_SCALES);
Contributor:

Sometimes, you use CLAMP and sometimes clamp_simd. Is there a reason for this?

Member Author:

CLAMP is an all-purpose pre-processor macro that does whatever. clamp_simd is a specialty function that clamps floats between 0 and 1 and is declared with OpenMP pragmas for inlining and vectorization in parallel/SIMD loops. So I use clamp_simd in pixel loops where I need performance, and CLAMP for scalar computations of indices.
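The two clamping idioms contrasted above could look like this. The names mirror the discussion, but the bodies are illustrative sketches, not copied from darktable's headers:

```c
/* Generic pre-processor clamp: works for any ordered type, used for
 * scalar index arithmetic (sketch, not darktable's exact macro). */
#define CLAMP(x, lo, hi) ((x) < (lo) ? (lo) : ((x) > (hi) ? (hi) : (x)))

/* Float-only [0, 1] clamp declared for OpenMP SIMD inlining, for use
 * inside vectorized pixel loops (illustrative body). */
#ifdef _OPENMP
#pragma omp declare simd
#endif
static inline float clamp_simd(const float x)
{
  return (x < 0.f) ? 0.f : (x > 1.f) ? 1.f : x;
}
```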

dt_bauhaus_combobox_add(g->version, _("v3 (2019)"));
dt_bauhaus_combobox_add(g->version, _("v4 (2020)"));
gtk_widget_set_tooltip_text(g->version, _("v3 is darktable 3.0 desaturation method, same as color balance.\n"
"v4 is a newer desaturation method, based on spectral purity of light."));
Contributor:

"spectral purity of light" - marketing buzzword? :-) I'm not an expert in photo processing, so it could be a well-known term in this field?

Member Author:

No marketing. A white spectrum is a spectrum where every frequency has equal weight. A pure spectrum is… well, not a spectrum anymore, but almost a single frequency (kind of a mono-wavelength laser). So the idea of v4 is to sharpen or even out the shape of the light spectrum, making it more pure or more white.
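The sharpen-or-whiten idea can be illustrated in tri-stimulus terms with a linear mix of the RGB channels toward a channel norm. This is a hedged sketch of the principle only, not filmic v4's actual math; `adjust_purity` is a hypothetical helper:

```c
/* Hypothetical illustration: treat the spread of the RGB channels around
 * a norm as a proxy for spectral purity. Mixing channels toward the norm
 * "whitens" the light (purity -> 0: all channels equal), while purity = 1
 * leaves the spectrum shape untouched. Not darktable's implementation. */
static void adjust_purity(float rgb[3], const float norm, const float purity)
{
  for(int c = 0; c < 3; c++)
    rgb[c] = norm + purity * (rgb[c] - norm); /* linear mix toward the norm */
}
```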

Contributor:

Thanks for your explanation. I already thought that you were not using the term for marketing :-) What market? DT is free!

@trougnouf (Contributor)

I was able to easily improve a picture which had severe artifacts on v3. Keeping v4, thank you for your great work!

@burrima (Contributor) commented Apr 26, 2020

Dumb question that popped into my mind this morning: is there a reason why the highlight reconstruction is now integrated into filmic? Would it not make sense to put this code in the highlight reconstruction module, so that it can be used even when filmic is not?

@johnny-bit (Member)

The highlight reconstruction module happens before demosaicing and works differently from the highlight reconstruction in filmic (which is late-stage in the pipe). Non-demosaiced images are weird.

@aurelienpierre (Member Author) commented Apr 26, 2020

The primary idea of filmic's highlights reconstruction is actually not to reconstruct, but to ensure a spatial blending between the pixels that will be clipped at the output of filmic (not necessarily clipped by the sensor) and those that won't.

Indeed, the filmic curve does the 1D blending of pixels based on their intensity (slowly degrading to white instead of a hard transition), but it has been shown that this is not enough to really blend the transition (especially with saturated blue skies).

So the first design goal of filmic's highlights was something like a highlights blooming/blurring, to take care of the 2D blending of pixels based on their locality. But then I couldn't decide how to drive the blur radius of the blooming filter, because some images have both small and large clipped areas, and no fixed radius can work for all of them at the same time. So I said, crap, let's do it multiscale and merge; then there is no need to decide on one radius.

And since I was already there, knee-deep in wavelets, the color inpainting and texture reconstruction were only a couple of additions more in the loop. No big deal. And that, kids, is how I met… over-engineered that stuff.
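The clipping mask that drives this spatial blending appears in one of the review hunks, which notes that "at x = 4, the sigmoid produces opacity = 5.882 %". That figure matches a 1/(1 + x²) fall-off (1/17 ≈ 5.882%), so the mask roll-off can plausibly be sketched as below. This is inferred from that comment, not copied from the code:

```c
/* Hedged sketch of the mask fall-off implied by the comment
 * "at x = 4, the sigmoid produces opacity = 5.882 %":
 * 1 / (1 + 4*4) = 1/17 = 5.882 %. Pixels with x > 4 get negligible
 * weight, which motivates the cheap cutoff in the clipped-pixel count. */
static float mask_opacity(const float x)
{
  return 1.f / (1.f + x * x);
}
```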

@GrahamByrnes (Contributor) commented Apr 27, 2020

A white spectrum is a spectrum where every frequency has an equal weight. A pure spectrum is… well not a spectrum anymore, but almost a single frequency (kind of a laser mono-wavelength).

Is it possible you could have another try at this? I'm familiar with the notions of white vs. pink noise from audio processing, but it seems far more complicated in tri-stimulus land. OK, a pure spectrum is an approximate frequency-domain delta function. This, however, would have little relation to the other interpretations of "white", i.e.

  • a spectrum of light from a black-body radiator with an appropriate colour temperature, or some continuum approximation to that; or

  • a set of discrete or continuous frequencies which result in the same perception of light temperature as the continuous spectrum above, when viewed by the human eye.

So are you saying that non-linearities in either transmission (atmospheric effects?) or perception (well recognised multi-modal receptor filters) lead to different effects if generated in continuous vs discrete spectra?

@parafin (Member) commented Apr 27, 2020

I guess a "pure spectrum" is monochromatic light, and white light is the opposite: all relevant frequencies are present.

@aurelienpierre (Member Author)

So are you saying that non-linearities in either transmission (atmospheric effects?) or perception (well recognised multi-modal receptor filters) lead to different effects if generated in continuous vs discrete spectra?

Perception is more than ever out of the equation. The last remaining bit (saturation as the distance between RGB channels and Y) is gone.

But I don't get the part about transmission non-linearities. What is done here is simply using spectral purity correction as a gamut-mapping strategy, so we sharpen or even out the light spectrum. Of course, it's not exact since we are working in a tri-stimulus space, but the principle holds.

@GrahamByrnes (Contributor) commented Apr 27, 2020

using spectral purity correction as a gamut-mapping strategy

So you are using a measure of the spectrum width (i.e. variance, dispersion?)? So when you wrote "pure spectrum" you were using it to mean "narrower spectrum", without getting to the point where the light becomes monochromatic, and hence not at all white?

@aurelienpierre aurelienpierre marked this pull request as ready for review May 26, 2020 14:44
@aurelienpierre (Member Author) commented May 26, 2020

This is ready for review @TurboGit.

I gave up on OpenCL for this PR; it is entirely disabled for now, and I will submit another PR for the OpenCL update (+1600/-300 is enough for one PR, I think ;-) )

New defaults have been set, and the module will be auto-enabled if the basecurve is not auto-applied (reverted, since it was also applied to old edits). The default filmic curve expects exposure to be raised by 1 to 2 EV, depending on the manufacturer, and is intended to more or less match OOC JPEGs.

The new saturation/gamut-sort-of-mapping is now set to 10% to mildly resaturate the midtones while enforcing 0% saturation at 0% and 100% display-referred luminance.

I have tested it in conjunction with #4940; I'm confident I can provide a reasonable default look/pipeline based on exposure/filmic/channel mixer 2/color balance for the lazy photographer.

@groutr commented May 30, 2020

@aurelienpierre, from a user perspective, I think it would make more sense to have the "auto adjust hardness" an eyedropper button in the "look" tab instead of a checkbox. This would be similar to the eyedropper buttons in the "scene" tab that adjust a parameter based on the image.
Having auto-adjust checked and then changing hardness seems to override the auto-adjust. Then toggling the checkbox (on -> off -> on) basically acts as a reset of the hardness.

Aside from that, my experience so far with this module has been incredible! Great work! :)

@MStraeten (Collaborator)

Pulled again for a Windows build, but the errors from #4800 (comment) are still present.

@aurelienpierre (Member Author)

@aurelienpierre, from a user perspective, I think it would make more sense to have the "auto adjust hardness" an eyedropper button in the "look" tab instead of a checkbox. This would be similar to the eyedropper buttons in the "scene" tab that adjust a parameter based on the image.
Having auto-adjust checked, then changing hardness seems to override the auto-adjust. Then toggling the checkbox (on -> off -> on) basically acts as a reset on the hardness.

"Auto-adjust hardness" can't be a color picker because it has nothing to do with the image. It only adjusts the parameter formerly known as "display target power function" so that grey stays on the identity line. Therefore, hardness (= the power function) is computed from the white and black relative exposures.

pulled again for a windows build, but errors from #4800 (comment) are still present

It's an original error. The first error actually shows that the variables are specified in the enclosing parallel.

@MStraeten (Collaborator)

now Windows is pleased - it builds fine

@aurelienpierre (Member Author)

\o/ Thanks @MStraeten

@TurboGit TurboGit removed the wip pull request in making, tests and feedback needed label Jun 3, 2020
@TurboGit (Member) commented Jun 3, 2020

I should look at this soon now!

@TurboGit (Member) left a comment

Some minor changes.

Also, I have created an integration test for FilmicRGB from v3.0. With this new version I get a small Delta-E:

Test 0013-filmic-rgb
      Image mire1.cr2
      CPU & GPU version differ by 29439 pixels
      Max  dE         0.8932
  OK

No big deal, but it still means that some code has changed in the v3.0 code path, right?

(index_x > bound_right) ? bound_right :
index_x ;

static const float DT_ALIGNED_ARRAY filter[fsize] = { 1.0f / 16.0f, 4.0f / 16.0f, 6.0f / 16.0f, 4.0f / 16.0f, 1.0f / 16.0f };
Member:

I would define this outside the loop and share it with the same definition below. Having it inside a loop could incur a small speed penalty anyway.

Member Author:

I had it out of the loop before, but it turned out to be slower, since it needs to be initialized from the outside for each thread. Since it's static const, I guess OpenMP parallel loops can cache the array in the code and avoid some memory flip-flops at runtime.

Member:

So maybe a macro to share the code for all values?

Member:

When it was outside, was it declared static const too? Because IMHO it shouldn't matter where a static const declaration is put; it may as well be a global variable, since it's allocated in the same space and initialized at compile time, not at runtime.

Member Author (Jun 5, 2020):

Yes, it was static const in both cases, but I had to declare it firstprivate anyway for OpenMP, so I suspect that is the cause. Using a macro seems fine.

Member Author:

It doesn't look like we can define arrays in macros.

Member:

Have you tried making it shared instead of firstprivate then? I see that firstprivate is considered the better choice, though I'm not too sure why, since I don't know OpenMP well enough.

Member Author:

Shared is really bad: it means only one thread can access the variable at a time, so every thread needs to wait for its turn to lock it. It's what you want to avoid at all costs, since it usually defeats the whole point of using multithreading. The goal here is to make the filter 100% private to the threads.
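The pattern under discussion, the 5-tap binomial filter declared static const inside the parallel loop body so each thread sees a compile-time constant, can be sketched like this. The taps and bounds clamping mirror the hunks quoted above; `blur_rows` is a hypothetical helper, not darktable's code:

```c
/* Hedged sketch of the discussed pattern: the filter is declared static
 * const inside the OpenMP loop body (weights from the hunks above, summing
 * to 1), so flat areas are preserved. Illustrative, not filmicrgb.c. */
void blur_rows(const float *const in, float *const out, const int width, const int height)
{
#ifdef _OPENMP
#pragma omp parallel for schedule(static)
#endif
  for(int i = 0; i < height; i++)
  {
    static const float filter[5] = { 1.f / 16.f, 4.f / 16.f, 6.f / 16.f, 4.f / 16.f, 1.f / 16.f };
    for(int j = 0; j < width; j++)
    {
      float acc = 0.f;
      for(int jj = 0; jj < 5; jj++)
      {
        int idx = j + jj - 2;                                 /* tap position around j */
        idx = idx < 0 ? 0 : (idx >= width ? width - 1 : idx); /* clamp to row bounds */
        acc += filter[jj] * in[i * width + idx];
      }
      out[i * width + j] = acc;
    }
  }
}
```

A static const local is load-time initialized and read-only, so it is race-free without any firstprivate copy.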

Member:

So it's impossible with OpenMP to access global memory directly, even constant? OK then.

Member Author:

I wouldn't say I fully understand what OpenMP does here, but it seems indeed that its threads don't care about global memory.

for(size_t c = 0; c < 3; ++c)
{
const size_t index_x = mult * (jj - (fsize - 1) / 2) + j;
static const float DT_ALIGNED_ARRAY filter[fsize] = { 1.0f / 16.0f, 4.0f / 16.0f, 6.0f / 16.0f, 4.0f / 16.0f, 1.0f / 16.0f };
Member:

See above.


(index_y > bound_bot) ? bound_bot :
index_y ;

static const float DT_ALIGNED_ARRAY filter[fsize] = { 1.0f / 16.0f, 4.0f / 16.0f, 6.0f / 16.0f, 4.0f / 16.0f, 1.0f / 16.0f };
Member:

See above.

darktable.gui->reset = 1;
gtk_widget_set_visible(g->grey_point_source, p->custom_grey);
gtk_widget_set_visible(g->grey_point_target, p->custom_grey);
darktable.gui->reset = reset;
Member:

and
--darktable.gui->reset;

@@ -196,62 +247,64 @@ int default_colorspace(dt_iop_module_t *self, dt_dev_pixelpipe_t *pipe, dt_dev_p
return iop_cs_rgb;
}

void init_presets(dt_iop_module_so_t *self)
Member:

You have removed all presets? Is that expected?

Member Author:

Yes. I'm really not sure it's a good idea to have presets in there. People expect them to be one-size-fits-all, so they only bring frustration.

Member:

Your take, but I use them as a base for my work. I think people are used to them now.

Member Author:

There is a large dependency on the camera for preset accuracy/efficiency, so I would rather encourage users to build their own.

@aurelienpierre (Member Author)

@TurboGit the delta E is not normal; the original code has not changed. Could you retry after removing "no-math-errno" from the optimize pragma below the header? It's the only explanation I can think of.

@TurboGit (Member) commented Jun 5, 2020

the delta E is not normal, the original code has not changed. Could you retry with removing "no-math-errno" in the optimize pragma below header ?

That's the first test I did. Still the same issue without this option.


for(int c = 0; c < 3; c++) ratios[c] *= norm;
if(recover_highlights && mask && reconstructed)
Member:

Could the diff I'm seeing in the integration test come from here? Shouldn't this be activated only when using the new v4 algo?

In legacy_param() we have:

n->reconstruct_threshold = 3.0f; // for old edits, this ensures clipping threshold >> white level, so it's a no-op

Is it really a no-op?

Member Author:

There is no option to disable recovery; you can only use a crazy-high threshold for the highlight masking, so it is practically disabled. That could indeed be the cause; in that case this value needs to be increased (although it's actually a good thing anyway that highlights are a bit blurred where they clip).

@TurboGit (Member) commented Jun 5, 2020

@aurelienpierre : I'll push my FilmicRGB integration test.

@aurelienpierre (Member Author)

@TurboGit changes pushed

@TurboGit (Member) commented Jun 5, 2020

@aurelienpierre: I tested it and found it functional. No issues trying most sliders.

I have run the regression tests again and got a similarly small dE. The diff image compared to the expected output is somewhat "worrisome": it seems that some random pixels get changed. Here is a crop area where red pixels mark the diff:

image

The full diff image is:

diff-ok

Maybe you'll have an idea, but this diff seems to rule out the threshold we were talking about as a possible culprit.

@aurelienpierre (Member Author)

Weird. The only change I could see being responsible for this is in the filmic_spline() function, where I removed the vectorized masking that turned out to be slower than the naive version. Anyway, if it is random, it sounds like floating-point arithmetic issues, nothing systematic, and with a dE below 0.9 I think we should be fine.

@groutr commented Jun 5, 2020

@aurelienpierre, from a user perspective, I think it would make more sense to have the "auto adjust hardness" an eyedropper button in the "look" tab instead of a checkbox. This would be similar to the eyedropper buttons in the "scene" tab that adjust a parameter based on the image.
Having auto-adjust checked, then changing hardness seems to override the auto-adjust. Then toggling the checkbox (on -> off -> on) basically acts as a reset on the hardness.

"Auto-adjust hardness" can't be a color picker because it has nothing to do with the image. It only adjusts the parameter formerly known as "display target power function" so the grey stays on the identity line. Therefore, hardness (= power function) is adjusted from white and black relative exposures.

Perhaps the "auto-adjust hardness" checkbox could disable the hardness slider, or hide it altogether, similar to how the "use custom middle-grey" checkbox makes the middle-grey slider appear.
What was confusing to me was that I could adjust hardness even though it was supposed to be auto-adjusted, and then the only way to get back to auto-adjusted hardness was to toggle the checkbox.

@rkowalke commented Jun 7, 2020

Thanks for your amazing work, @aurelienpierre. I have just finished reworking 22k photos from the last decade with darktable 3.0 and Filmic V3, which was already a huge step forward compared to my previous basecurve-based edits. So it was time for me to try the current dev state of Filmic V4, and I'm really glad that my V3 XMP files could be imported without any problems; almost every photo looks like it did in dt 3.0 (with the V3 option set).

However, I found some smaller issues with V4:

Blue artifacts

I discovered blue artifacts under V4, in combination with Luminance Y, on a photo taken at St. Paul's Cathedral at a light installation (modern LED lights, I assume).

I cannot reproduce this with either the V3 color science option + Luminance Y or V4 + RGB power norm, with otherwise identical settings.

This image needs a WB correction, which was done with the CAT of your other PR #4940. If I disable channel mixer RGB, there are fewer artifacts, but they are still present, so I assume they are caused by Filmic V4 but boosted by the WB correction.

Just a guess: if you look at the wavelet histograms, the red and green channels seem to be muted at the artifact locations (compared to V3).

Compare Filmic V3 vs V4 - Luminance Y
Compare Filmic V3 vs  V4 - RGB power norm
Compare histograms

German translations missing

  • Tab "reconstruct" => "Rekonstruieren" or "Wiederherstellen"
    • "highlights clipping threshold" => "Lichter Abschneideschwellwert"
    • "highlights clipping transition" => "Lichter Abschneideübergang"
    • "balance structure/texture" => "Struktur/Textur - Balance"
    • "balance bloom/reconstruct" => "Überstrahlung/Wiederherstellung - Balance"
    • "balance grey/colorful details" => "Graue/Farbenfrohe Details - Balance"
  • Tab "look"
    • "hardness" => "Härtegrad"
  • Tab "options" => "Optionen"
    • "color science" => "Farbenlehre" or "Farbwissenschaft" or "Entsättigungsmethode" ?
    • "contrast in highlights" => "Kontrast in Lichtern"
    • "contrast in shadows" => "Kontrast in Schatten"
    • "use custom middle-grey values" => "Eigenen Wert für Mittelgrau verwenden"
    • "auto adjust hardness => "Härtegrad automatisch einstellen"
    • "iterations of high-quality reconstruction" => "Iterationen der hochqualitativen Wiederherstellung"

@aurelienpierre (Member Author)

@rkowalke you need to understand that no norm in there is perfect; otherwise there wouldn't be 4 of them. The issue you have here is an out-of-gamut blue that might still end up reasonably in the middle of the luminance range, so it would not get the filmic desaturation (or would even get resaturated, if that's what you asked for).

If you are using the channel mixer reboot, you can increase the gamut compression factor. Otherwise, just stay away from the Y norm.

@TurboGit (Member) left a comment

Good to go for me at this stage.

We need more field testing now, and to stabilize all this if needed before the release.

@TurboGit TurboGit merged commit 8623d73 into darktable-org:master Jun 8, 2020
@jenshannoschwalm (Collaborator)

@aurelienpierre I didn't test your new stuff until now...

Congrats ... REALLY GREAT STUFF.

I just gave it a hard try: took ~2000 shots from very different situations (colorful Utah landscapes, German autumn mist, technical stuff, architecture, people), switched to your suggested workflow, cleared history, and voilà.

Almost all images are at least good, most very good.

Thanks for doing this :-)

@chhil commented Jun 9, 2020

@aurelienpierre
Will you be updating the auto filmic style for v4?

@GrahamByrnes (Contributor)

I tried it quickly last night... I was finally happy once I had pushed most of the adjustments to either max or min (viz. the reconstruction threshold etc.). The default colour looked "nice", in a slightly bloomish, hand-coloured way.

At the moment it doesn't do much for me... but then I'm a B&W photographer and I try hard to avoid excess texture, fake contrast and so on, so I wanted to find the minimalist option. It will take work to find the options I need.

BTW, at some stage I found that Base Curve had turned itself on again... weird bug?
