presets & usability enhancement #1945

Closed · wants to merge 3 commits into base: master

5 participants
@aurelienpierre (Contributor) commented Dec 30, 2018

  1. rename the equalizer to "contrast equalizer" because it's clearer, and it prepares the ground for my tone equalizer
  2. deprecate the spot removal tool because the retouch module now exists.
@parafin (Member) commented Dec 30, 2018

This PR also disables sharpen by default.

@parafin (Member) commented Dec 30, 2018

Why have you chosen the "contrast equalizer" name? It doesn't affect just the contrast, so the reasoning is not obvious.

@TurboGit TurboGit self-requested a review Dec 30, 2018

@TurboGit TurboGit self-assigned this Dec 30, 2018

@TurboGit TurboGit added this to the 2.8 milestone Dec 30, 2018

@TurboGit (Member) commented Dec 30, 2018

Do we really want to deprecate "spot removal"? It is easier to use than retouch and has been used for years by many people. I would keep it but have it not enabled by default. But I have no strong opinion on this.

@TurboGit (Member) commented Dec 30, 2018

I'm not really in favor of having the lens correction module enabled by default. This will "hide" most details at the borders of the image by default, and it is always possible to have it enabled by default using an auto-preset.

@TurboGit (Member) commented Dec 30, 2018

Also, please make a separate PR for each change, as some may be merged now while some need more discussion. The risk is having nice enhancements blocked because some parts are more delicate. Thanks.

@@ -1244,6 +1244,7 @@ void init(dt_iop_module_t *module)
   module->params_size = sizeof(dt_iop_lensfun_params_t);
   module->gui_data = NULL;
   module->priority = 199; // module order created by iop_dependencies.py, do not edit!
+  module->default_enabled = 1;

@aurelienpierre (Contributor) Dec 30, 2018

Why?

@LebedevRI (Member) Dec 30, 2018
  1. "user request" is simply not reason enough. auto-applied presets work for this module, for those who need it. i'm not observing any other arguments in favor of this change being presented here.
  2. the module will now be always-enabled-by-default in the pipeline, that is, unless it is disabled in the history stack. and with that, pretty much all the existing history stacks get broken (if they did not explicitly disable/enable the module already), for obvious reasons...

@aurelienpierre (Contributor) Dec 30, 2018

2. the module will now be always-enabled-by-default in the pipeline, that is, unless it is disabled in the history stack. and with that, pretty much all the existing history stacks get broken (if they did not explicitly disable/enable the module already), for obvious reasons...

ok, that's a good point. I thought auto-presets were applied only when the initial history stack was created, not when there is already a previous history. That sounds like a bug though.

@LebedevRI (Member) Dec 30, 2018

I thought auto-presets were applied only when the initial history stack was created, not when there is already a previous history.

Quiz: do you believe this change is about a preset, or not?

@@ -92,7 +92,7 @@ void init_presets(dt_iop_module_so_t *self)
   // restrict to raw images
   dt_gui_presets_update_ldr(_("sharpen"), self->op, self->version(), FOR_RAW);
   // make it auto-apply for matching images:
-  dt_gui_presets_update_autoapply(_("sharpen"), self->op, self->version(), 1);
+  dt_gui_presets_update_autoapply(_("sharpen"), self->op, self->version(), 0);

@LebedevRI (Member) Dec 30, 2018

Likewise, no.

@aurelienpierre (Contributor) Dec 30, 2018

Likewise, why?

@TurboGit (Member) Dec 30, 2018

This is a decision (and a good one, I think) made at the start of darktable. A RAW file is always sharpened a bit by all RAW software, so darktable does so too. Again, I suppose one can create a preset to disable this module by default if needed.

@TurboGit (Member) Dec 30, 2018

I withdraw my objection about this one.

@@ -115,7 +115,7 @@ typedef struct dt_iop_atrous_data_t
 
 const char *name()
 {
-  return _("equalizer");
+  return _("contrast equalizer");

@LebedevRI (Member) Dec 30, 2018

What @parafin said; this is not really correct, as is evident from the presets: the module is not limited to contrast.

@aurelienpierre (Contributor) commented Dec 30, 2018

Do we really want to deprecate "spot removal"? It is easier to use than retouch and has been used for years by many people. I would keep it but have it not enabled by default. But I have no strong opinion on this.

Spot removal might have a simpler UI, but it is actually more tedious: you have to find a clean area with no details and the exact same luminance to sample. In practice, you have to fiddle with opacities and blending modes to get it to work properly, if you ever do. On the other hand, using the healing tool in retouch, even on the first layer (without enabling the wavelet decomposition), is fast and gives nice results, even if the UI is more loaded.

This PR also disables sharpen by default.

Yes, I forgot that. The motivation is that there are 4 different ways in darktable to do fake sharpening (a.k.a. local contrast enhancement). The sharpen method is not the best, although it's the fastest, and the provided preset is too heavy for modern prime lenses + full-frame sensors, and too light for point-and-shoots and entry-level zooms. So, if you really want an auto-preset, you would have to profile lenses and make it lens-dependent.

Why have you chosen "contrast equalizer" name? It doesn't affect just the contrast, so the reasoning is not obvious.

When you look at the code, the equalizer adds or removes contrast on the Lab channels at different wavelet scales. So it's always a color-contrast or luminance-contrast adjustment. Then it does a thresholding, as a denoising step (which is not really the right place in the pipe, and is fully redundant with the denoiseprofile wavelets, if you ask me), which is nothing more than local contrast clipping, if you think about it. So the equalizer is all about local contrast, and having it named unclearly leads users to emulate its effect with several highpass filters for no reason at all.
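For readers wondering what "contrast at different wavelet scales" means in practice, here is a minimal, illustrative 1-D sketch of an à trous wavelet equalizer in plain Python. This is not darktable's atrous.c code; the function names and the [1, 2, 1]/4 kernel are simplifying assumptions for illustration only:

```python
def atrous_decompose(signal, levels=3):
    """Decompose a 1-D signal into wavelet detail bands plus a residual
    using the a-trous ("with holes") scheme: blur with an increasingly
    dilated [1, 2, 1]/4 kernel, keeping the differences as detail bands."""
    details = []
    current = list(signal)
    for level in range(levels):
        step = 2 ** level  # kernel dilation doubles at each scale
        n = len(current)
        blurred = []
        for i in range(n):
            left = current[max(i - step, 0)]      # clamp at the borders
            right = current[min(i + step, n - 1)]
            blurred.append(0.25 * left + 0.5 * current[i] + 0.25 * right)
        details.append([c - b for c, b in zip(current, blurred)])
        current = blurred
    return details, current  # detail bands + coarse residual

def equalize(signal, gains):
    """Scale each wavelet detail band independently, then resynthesize.
    gain > 1 adds local contrast at that scale, gain < 1 removes it."""
    details, residual = atrous_decompose(signal, levels=len(gains))
    out = list(residual)
    for band, gain in zip(details, gains):
        out = [o + gain * d for o, d in zip(out, band)]
    return out

# With all gains at 1.0 the decomposition is exactly invertible.
sig = [0.0, 0.1, 0.5, 0.9, 1.0, 0.8, 0.3, 0.1]
recon = equalize(sig, [1.0, 1.0, 1.0])
```

Raising the gain of the finest band adds local contrast at small scales, which is exactly the effect users otherwise try to emulate with a stack of highpass filters.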

I'm not really in favor of having the lens correction module enabled by default. This will "hide" most details at the borders of the image by default, and it is always possible to have it enabled by default using an auto-preset.

I agree that it could be a preset, but it's for consistency with the base curve auto-apply. Why the base curve and not lens correction? I think it should be both or none.

Also, please make a separate PR for each change, as some may be merged now while some need more discussion. The risk is having nice enhancements blocked because some parts are more delicate. Thanks.

I will branch out the toolboxes, as it seems nobody is complaining about them. The rest is part of the same discussion about the design of the software, I think.

@LebedevRI (Member) commented Dec 30, 2018

Do we really want to deprecate "spot removal"? It is easier to use than retouch and has been used for years by many people. I would keep it but have it not enabled by default. But I have no strong opinion on this.

Spot removal might have a simpler UI, but it is actually more tedious: you have to find a clean area with no details and the exact same luminance to sample. In practice, you have to fiddle with opacities and blending modes to get it to work properly, if you ever do. On the other hand, using the healing tool in retouch, even on the first layer (without enabling the wavelet decomposition), is fast and gives nice results, even if the UI is more loaded.

To me that does not answer the question of "why should the existing simple module be deprecated?"
No one is forcing anyone to only use the less powerful "spot removal" module,
but this does look like trying to force the use of the more powerful module.

This PR also disables sharpen by default.

Yes, I forgot that. The motivation is that there are 4 different ways in darktable to do fake sharpening (a.k.a. local contrast enhancement). The sharpen method is not the best, although it's the fastest, and the provided preset is too heavy for modern prime lenses + full-frame sensors, and too light for point-and-shoots and entry-level zooms. So, if you really want an auto-preset, you would have to profile lenses and make it lens-dependent.

The defaults aren't meant to be best.
dt should not try to have a magical set of auto-tuned modules enabled by default.

@aurelienpierre (Contributor) commented Dec 30, 2018

To me that does not answer the question of "why should the existing simple module be deprecated?"

Because you duplicate the feature and clutter the UI with a limited and flawed version of something else that works better.

dt should not try to have a magical set of auto-tuned modules enabled by default.

So why the auto-activation of sharpening, then? Especially with an arbitrary preset that is not universal?

aurelienpierre added some commits Dec 29, 2018

atrous : rename "contrast equalizer" instead of just "equalizer"
too many users don't get this module, make it more clear. Plus I'm preparing a tone equalizer.
disable sharpen by default
the sharpening is too intense with modern prime lenses

@aurelienpierre aurelienpierre force-pushed the aurelienpierre:preferences branch from 660c4af to a0ded1d Dec 30, 2018

@LebedevRI (Member) commented Dec 30, 2018

To me that does not answer the question of "why should the existing simple module be deprecated?"

Because you duplicate the feature and clutter the UI with a limited and flawed version of something else that works better.

.. and the second part?
(also, you are well aware of the more modules module, i think? so i'm not sure how it clutters it even)

No one is forcing anyone to only use the less powerful "spot removal" module,
but this does look like trying to force the use of the more powerful module.

Also, let's take a step back. Is that new module fully functionally equivalent to the old one?
I have just tried it, and already stumbled into very obvious bugs (have you tried zooming in on the dst replacement area? that is an rc bug; though something tells me 2.6.0 may or may not be more full of these than usual)

dt should not try to have a magical set of auto-tuned modules enabled by default.

So why the auto-activation of sharpening, then? Especially with an arbitrary preset that is not universal?

Could be because the physical low-pass filter softens the image, or because the default demosaic kind of results in a softer image than the other modes, dunno.

@aurelienpierre (Contributor) commented Dec 30, 2018

Could be because the physical low-pass filter softens the image, or because the default demosaic kind of results in a softer image than the other modes, dunno.

50 % of the sensors post-2012 don't have a low-pass filter anymore. Post-2016, it would be like 75 % or so.

We have to settle this: do we want a straight-away-good-looking picture when the user opens the darkroom, or a do-it-from-scratch picture? Either way, I'm fine. But for now, it seems the first option has been chosen for base curves and sharpening, while the second one has been chosen for lens correction. I'm just asking for consistency.

The sharpen preset is not, by any means, a one-size-fits-all thing. On any recent 36+ Mpx camera, you need to disable it because it's overcooked. On any point-and-shoot or DSLR + kit lens, it's not enough, but you can't push it too far without messing things up, because it's good old unsharp masking (halos, much?). When it's enabled by default, you send a message to the user: "use it, it's standard". It's not. It's 1972 stuff we used before being able to deconvolve properly.
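To illustrate the halo complaint: unsharp masking adds amount * (input - blur(input)) back to the input, so any hard edge overshoots on both sides. Here is a minimal 1-D sketch in Python (a box blur stands in for the Gaussian; this is an illustration, not darktable's sharpen code):

```python
def unsharp_mask(signal, amount=1.0, radius=1):
    """Classic unsharp masking: out = in + amount * (in - blur(in)).
    A simple box blur of the given radius stands in for the Gaussian;
    the window is clamped at the signal borders."""
    n = len(signal)
    blurred = []
    for i in range(n):
        lo, hi = max(i - radius, 0), min(i + radius, n - 1)
        window = signal[lo:hi + 1]
        blurred.append(sum(window) / len(window))
    return [s + amount * (s - b) for s, b in zip(signal, blurred)]

# A hard edge: USM undershoots on the dark side and overshoots on the
# bright side -- those excursions are the visible halos.
edge = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
sharpened = unsharp_mask(edge, amount=1.0, radius=1)
```

The undershoot below 0 and overshoot above 1 around the edge are the dark/bright halos; pushing amount higher makes them worse, which is why a single preset cannot fit both a soft kit lens and a sharp prime.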

(also, you are well aware of the more modules module, i think? so i'm not sure how it clutters it even)

I know. It's where I hide all of dt's nonsense and geeks playtoys in garbage Lab.

Also, let's take a step back. Is that new module fully functionally equivalent to the old one?
I have just tried it, and already stumbled into very obvious bugs (have you tried zooming in on the dst replacement area? that is an rc bug; though something tells me 2.6.0 may or may not be more full of these than usual)

I have abused that module for 4 months, there are some rough edges but nothing unusable.

@LebedevRI (Member) commented Dec 30, 2018

Could be because the physical low-pass filter softens the image, or because the default demosaic kind of results in a softer image than the other modes, dunno.

50 % of the sensors post-2012 don't have a low-pass filter anymore. Post-2016, it would be like 75 % or so.

citation needed i guess.
The point about demosaic still remains.

We have to settle this: do we want a straight-away-good-looking picture when the user opens the darkroom, or a do-it-from-scratch picture? Either way, I'm fine. But for now, it seems the first option has been chosen for base curves and sharpening, while the second one has been chosen for lens correction. I'm just asking for consistency.

If you don't put it like that, i'd go with "do-it-from-scratch picture".
But then, why do we need orientation, demosaic, wb, highlight recovery one will say.
So if you do put it like that i'd go with "let's just not touch the defaults".

The sharpen preset is not, by any means, a one-size-fits-all thing. On any recent 36+ Mpx camera, you need to disable it because it's overcooked. On any point-and-shoot or DSLR + kit lens, it's not enough, but you can't push it too far without messing things up, because it's good old unsharp masking (halos, much?). When it's enabled by default, you send a message to the user: "use it, it's standard". It's not. It's 1972 stuff we used before being able to deconvolve properly.

Similarly, the default PPG demosaic is very simple; amaze may or may not give you better results.
The default highlight recovery method is bad, reconstruct in LCh is better (i never got around to implementing a proper inpainting method, oh well).
And so on.
Since dt intentionally does not have any of the pipe modules hidden, one can change everything
that isn't nailed to its place, so i do not believe that that message is being sent.

(also, you are well aware of the more modules module, i think? so i'm not sure how it clutters it even)

I know. It's where I hide all of dt's nonsense and geeks playtoys in garbage Lab.

Do you believe darktable should get rid of its Lab section of the pipeline, and do everything in [whatever] RGB?

Also, let's take a step back. Is that new module fully functionally equivalent to the old one?
I have just tried it, and already stumbled into very obvious bugs (have you tried zooming in on the dst replacement area? that is an rc bug; though something tells me 2.6.0 may or may not be more full of these than usual)

I have abused that module for 4 months, there are some rough edges but nothing unusable.

That is a non-answer.

  1. Does that module have a mode in which it can act just like the old spot removal module, with no extra freq/wavelet stuff forced down one's throat?
  2. Do you see the bug i just reported? (it seems i tried on 2.7.0+44~gc2ecb03f5-dirty, maybe it was fixed already)
@aurelienpierre (Contributor) commented Dec 30, 2018

citation needed i guess.
The point about demosaic still remains.

Just browse the dxomark.com database… Nikon has removed the LPF from every camera line starting with the D800E (2012), D7100 (2013), D3300 (2014), D5300 (2013) and D5 (2016). None of the full-frame Sonys have had them since the alpha 7 (2013). Fuji doesn't need them at all with its X-Trans sensors. Pentax has a digital LPF on the K1 and K5 that can be disabled in the firmware, and the 645 doesn't seem to have one. The Hasselblad X1D (2016) doesn't have it. Same for Phase One since at least the IQ2 (2014). Same for Leica with the S2 (2016) and the M range since at least the M9 (2012). Panasonic went LPF-less last year for the first time, with the GH5, and this year with the GX7. Only Canon is trapped in the Middle Ages, with only the 5DS R having an optical correction for the LPF (2015). Overall, only Canon and low/mid-range Sony still have them as of now.

So… yes, the point about demosaicing remains, but using unsharp masking to revert it is like putting a trailer behind your Ferrari. I mean, it's OK if you run an Intel Atom 1.4 GHz, but for everything else, use the equalizer and do yourself a favour.

But then, why do we need orientation, demosaic, wb, highlight recovery one will say.
So if you do put it like that i'd go with "let's just not touch the defaults".

The difference is that orientation and WB read the EXIF to adapt their presets. Highlight recovery is a clipping that goes with ICC v2 output constraints. Demosaic is obviously needed to create an image. Sharpen, on the contrary, is an arbitrary choice with arbitrary settings that doesn't care whether you have a file from a 100 Mpx LPF-less medium format with a pristine 90 mm f/2.8 or a 10 Mpx micro 4/3 with the 18-200 mm "my-lens-does-it-all".

Do you believe darktable should get rid of its Lab section of the pipeline, and do everything in [whatever] RGB?

Getting rid of it is no option at this point in history, but yes… No serious pixel-pusher ever pushes pixels in Lab. That's a space where exposure compensation is not a multiplication anymore, and where large corrections of luminance actually desaturate the colors even though they are not supposed to. It's not even that good as a connection space for ICC profiles; CIECAM02 now tends to replace it slowly. The Lab nonsense of darktable is the main reason I have mostly done B&W photography in the past years, and why I hacked the color balance module, which is the only module that allows correcting colors in RGB (but in gamma-corrected sRGB, seriously… 🤦‍♂️).

  1. Does that module have a mode in which it can act just like the old spot removal module, with no extra freq/wavelet stuff forced down one's throat?

Yes. Use the clone tool on the first (black) default layer.

  2. Do you see the bug i just reported? (it seems i tried on 2.7.0+44~gc2ecb03f5-dirty, maybe it was fixed already)

I'm not able to reproduce with today's master.

@TurboGit (Member) commented Dec 30, 2018

Nikon has removed the LPF from every camera line starting with the D800E (2012), D7100 (2013), D3300 (2014), D5300 (2013) and D5 (2016). None of the full-frame Sonys have had them since the alpha 7 (2013). Fuji doesn't need them at all with its X-Trans sensors. Pentax has a digital LPF on the K1 and K5 that can be disabled in the firmware, and the 645 doesn't seem to have one. The Hasselblad X1D (2016) doesn't have it. Same for Phase One since at least the IQ2 (2014). Same for Leica with the S2 (2016) and the M range since at least the M9 (2012). Panasonic went LPF-less last year for the first time, with the GH5, and this year with the GX7. Only Canon is trapped in the Middle Ages, with only the 5DS R having an optical correction for the LPF (2015).

You can add the D850 (2017). I see your point, and indeed maybe at this stage we should not activate the sharpen module by default.

@LebedevRI (Member) commented Dec 30, 2018

I see.
I no longer recognize the direction in which darktable is moving.
I guess i should finish jumping ship at some point, or at the very least stop wandering into these PRs here.
@TurboGit good luck.

@TurboGit (Member) commented Dec 30, 2018

@LebedevRI : we are still discussing, right?

@aurelienpierre (Contributor) commented Dec 30, 2018

I guess i should finish jumping ship at some point, or at the very least stop wandering into these PRs here.

That would be a shame.

I'm trying here to think through the workflow of dt from a photographer's perspective. My regret is that, for now, it's a plugin collection more than photo workflow software. The qualities I expect from such software are efficiency, consistency and predictability. This PR is about improving the consistency part.

@parafin (Member) commented Dec 30, 2018

From where I'm sitting it looks like a more or less complete rewrite of the darkroom backend, but instead of doing it cleanly from scratch it is merged in small steps that are masked as "improvements", ignoring any notices about the changes being incompatible with the original design choices. I understand that it's much easier to write a new module and declare the old one obsolete (instead of writing legacy_params code), and to break old histories by changing the iop order as one sees fit instead of thinking about compatibility. Yes, some of the old decisions in DT's core are bad, but changing them requires a careful and thought-through approach. Just rewriting code to fit a new idea will result in stuff breaking all over the place.
I'm not against the underlying idea of the recent changes, but the way they are presented IMHO needs much work.

@aurelienpierre (Contributor) commented Dec 30, 2018

A lot of legacy_params have been written, and obsolete modules are not removed; they are just flagged so they won't show up in the UI for fresh developments.

As for breaking compatibility, I think there is a line to draw between "old developments can't be opened anymore" and "we have 0.2 % RMSE in the image between the new version and the old one".

The only thing that truly concerns me is how mask coordinates will be transformed when modules are reordered, especially around the geometric distortions (perspective, liquify, retouch). Apart from that, it's mostly a hard-coded constant that becomes exposed to the user. Sure, the color spaces will need to be fitted between contiguous modules, but some modules already do internal color-space conversions (tone curve, color balance); it's just a matter of spreading that.

It might be the opportunity to merge "small" modules too (velvia + vibrance, colorize + split-toning, etc.), because OpenCL overhead and I/O latencies are really damaging for performance when not much is done in the kernels. It often ends up with 60 % of the running time lost in buffer copies.

A complete rewriting seems to me the most efficient way to kiss compatibility good-bye.

@upegelow (Member) commented Dec 30, 2018

A complete rewriting seems to me the most efficient way to kiss compatibility good-bye.

Then better start with a new project.

From time to time we see revolutionary ideas about how things could have been done in a much better way. However, the darktable project is dead the moment we tell people that they can forget their legacy history stacks because we had some great idea of the day that is now so much better.

I have always been strongly advocating for consistency and keeping all history stacks valid, even from day one. If we give up that part, I am out.

@TurboGit (Member) commented Dec 30, 2018

I have always been strongly advocating for consistency and keeping all history stacks valid, even from day one. If we give up that part, I am out.

I'm also advocating for this. But here we are not talking about breaking the history, which is BTW already broken in a far worse way than what Aurélien proposes. Just think about the different code paths we have: plain CPU, OpenCL, SSE. The three are certainly not giving the same results. So a user going from computer 1 with a CPU to a new computer with a GPU won't have the same result when exporting pictures. And we don't know what the processor factories will come up with in the future.

So, as Aurélien said, we don't want to break history, but inevitably we may have a small shift when changing the computing environment or the way the algorithm is run on the pixel-pipe.

@aurelienpierre (Contributor) commented Dec 31, 2018

And if anyone has doubts, I have 7 years of archives in dt DB, I'm not ready to lose them just yet either.

The goal is to overcome bad or locked-in design step by step, by making hard-coded decisions user-configurable: add options and make the legacy configuration just one of the available options (module order, with the legacy order by default; or more RGB modes in modules in addition to the legacy Lab ones). That way, you retain the ability to process old edits (the hard-coded stuff just moves to database preferences) but don't get locked into past choices. RawTherapee does that: many modules have a Lab and an RGB mode, and several RGB spaces for the pipeline, consistent with the state of the art (ACES and such).

But, apart from that, I would really like to trigger design and ergonomics discussions. Like: what is the usage path we want to draw for users so that their edits are efficient, predictable and reproducible? I have edited weddings in dt; that was my worst experience ever. It's nearly impossible to truly duplicate history stacks: too many modules have bad side-effects when you push them, so you end up doing manual case-by-case edits because it's faster than trying to adjust not-so-generic average presets. Working on linearly encoded, scene-referred RGB would have been so much easier…

@parafin (Member) commented Dec 31, 2018

Why then have you already changed the order of some modules, if the goal is to have it configurable? Just leave the old defaults as-is and work on the new system. Which, BTW, you should describe somewhere in full; then anyone can compare it with the current one and draw their own conclusions on how compatible they are. Doing incremental changes just hides how much you want/need to change.

@parafin (Member) commented Dec 31, 2018

This is how one goes about a re-design of this size:

  1. Write up some text describing flaws of the current code, goals of the re-design, etc.
  2. Discuss it.
  3. Come up with steps how new system will be integrated.
  4. Discuss them.
  5. Prepare a PR for each step, test the new code, merge it after a review.
  6. Think about which changes can be harmlessly backported to old history stacks and apply them after careful consideration.

I think we're going about it in the reverse direction. You say you want to trigger design discussions, but you completely skipped steps 1-4 (or have I missed something here?) and we're stuck discussing the technical details of each new change instead of the whole picture. You obviously know the way you want DT to work, but don't you think it shouldn't be one man's decision? For one, other people know DT internals much better.

My fear here is that, the way this is going, after everything gets merged we will end up with completely unmanageable, untested and subtly broken code, much worse than what we have now (which is already not good). Any new big feature needs to be cleanly designed, so that the overall quality of DT's code improves rather than degrades further.

That's just my opinion, and given that I don't spend any considerable time developing DT it can be completely ignored. Just maybe give it some thought. I'm not going to comment further on these changes.

@aurelienpierre (Contributor) commented Dec 31, 2018

I think we're going about it in the reverse direction. You say you want to trigger design discussions, but you completely skipped steps 1-4 (or have I missed something here?) and we're stuck discussing the technical details of each new change instead of the whole picture. You obviously know the way you want DT to work, but don't you think it shouldn't be one man's decision? For one, other people know DT internals much better.

You are being unfair. I have been discussing this for 3 months on IRC with @hanatos and @houz; I have shown the benefits of the linear RGB scene-referred workflow many times on pixls.us, along with the added value of my changes in color balance, filmic and unbreak color profile, in videos and tutorials… I have discussed the pipeline vs. UI order of modules for 2 weeks on the dev mailing list. I have produced a comparison of filmic vs. base curve results, and shown that 2 out of 3 global tonemapping operators are broken because they were designed in RGB and implemented in dt in Lab… So far, no dev has commented on the results and the functional aspects; only users seem to care.

All the feedback I have had from devs on the changes I propose is technical objections; no discussion has ever led to the added value and benefits for the workflow. Yes… I get it… It might break the software. But software that never breaks is a code graveyard. The modules were originally designed to be moved around (otherwise the modular approach is only overhead), but that was abandoned at some point.

Why then have you already changed order of some modules if the goal is to have it configurable? Just leave old defaults as is and work on the new system. Which BTW you should describe somewhere in full, then anyone can compare it with current one and have his own conclusion on how compatible they are. Doing incremental changes just hides how much you want/need to change.

So far, only the crop and rotate module has been moved earlier in the pipe, after perspective correction, and the reasons are documented in PR #1791. This is safe from the image-processing point of view, and the cropping was trapped in the middle of the color adjustments for no reason.

My fear here is that, the way this is going, after everything gets merged we will end up with completely unmanageable, untested and subtly broken code, much worse than what we have now (which is already not good). Any new big feature needs to be cleanly designed, so that the overall quality of DT's code improves rather than degrades further.

Seriously, we support software on Windows, Mac, Linux × (Gnome + KDE + zillions of derivatives), on architectures like x86_64, ARM and others I have never heard of… We don't have enough testers to proof-read everything on exotic hardware + distros. At some point, if you don't take the chance of releasing subtly broken code, just take out your Adobe Cloud subscription; it's a lost cause.

@TurboGit (Member) commented Dec 31, 2018

@parafin: while I agree with you in general, it is also clear that there is not enough energy around to discuss and review everything. This leaves PRs sitting for years on GitHub (see the duplicate module case: 2 years sitting there). So we also want to ensure dt is moving forward (yes, as Aurélien said, with the possibility of some temporary breakage); software that is not moving is dead software! We also do not want to discourage contributors.

We have a great user base, and thanks to it most little issues introduced are detected quite fast.

That's also a good incentive to merge sooner; for this year's release, everything was done starting in September. Far too late: I have put far too much energy into having something for this Christmas.

That's my thinking and trade-off today. I'd love to see 50 or more devs around to review and discuss everything, but that's not the case.

After all, we are doing well, no? The current state of darktable is great; I'm using it almost daily to produce wonderful images.

@TurboGit (Member) commented Jan 12, 2019

I suppose this could be closed now?

@aurelienpierre (Contributor) commented Jan 14, 2019

What remains in this PR is the renaming of the equalizer and the deprecation of spot removal.

@TurboGit (Member) commented Jan 14, 2019

I think that the renaming of the equalizer will make sense in the tone-equalizer PR. I suppose that these are related? To have a "contrast equalizer" and a "tone equalizer" and avoid confusion, right?

@aurelienpierre (Contributor) commented Jan 14, 2019

To have a "contrast equalizer" and a "tone equalizer" and avoid confusion, right?

Yes.

I think that the renaming of the equalizer will make sense in the tone-equalizer PR

Ok then.
