
[WIP] Segmentation based highlights recovery for bayer sensors #10716

Conversation

@jenshannoschwalm (Collaborator) commented Dec 28, 2021

EDIT: reflects current status

The new highlight recovery algorithm only works for standard bayer sensors.
It has been developed in collaboration between Iain from the gmic team and me.

The original idea was presented by Iain in pixls.us in: https://discuss.pixls.us/t/highlight-recovery-teaser/17670

and has been extensively discussed over the last months. No other external modules (like gmic …) are used. No OpenCL codepath yet.

I have been testing this on many images (all with very heavy clipping); this algo does not fix all problems, but for the vast majority of images it is far better than what we have atm. On 'normal' images - where some parts are blown out because of wrong exposure, like clouds, building surfaces, skin ... - this works just fine imho.

@rawfiner - i promised to ping you.
@aurelienpierre - any comments?

The algorithm follows these basic ideas:

  1. We understand the bayer data as superpixels, each having one red, one blue and two green photosites
  2. We analyse all data (without wb correction applied) on the channels independently so resulting in 4 color-planes
  3. We want to keep details as much as possible; we assume that details are best represented in the color channel having
    the minimum value. So beside the 4 color planes we also have a plane holding the minimum values (pminimum)
  4. In all 4 color planes we look for isolated areas being clipped (segments).
    Inside these segments (including borders around) we look for a candidate to represent the value we take for restoration.
    Choosing the candidate is done at all non-clipped locations of a segment; the best candidate is selected via a weighting
    function - the weight is derived from
    • the local standard deviation in a 5x5 area and
    • the median value of unclipped positions, also in a 5x5 area.
      The best candidate points to the location in the color plane holding the reference value.
      If there is no good candidate we use an averaging approximation over the whole segment.
  5. We evaluated several ways to further reduce the pre-existing color cast; atm we correct linearly using a correction
    coeff for every plane.
    We also tried some gamma correction, which helped in some cases but was unstable in others.
  6. The restored value at position 'i' is basically calculated as
    val = candidate + pminimum[i] - pminimum[candidate_location];
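
Steps 4 and 6 above can be sketched roughly as follows. This is illustrative Python pseudocode of the idea, not the PR's actual C code; the function names and the exact weight formula are my own assumptions:

```python
import numpy as np

def choose_candidate(plane, segment_mask, clip):
    """Pick the best reference photosite for one clipped segment.

    `plane` is one color plane, `segment_mask` marks the segment plus its
    border.  Illustrative only: the weight favors a high unclipped median
    and a low local standard deviation in a 5x5 window.
    """
    best, best_weight = None, -np.inf
    for y, x in zip(*np.nonzero(segment_mask)):
        if plane[y, x] >= clip:            # candidates must be non-clipped
            continue
        win = plane[max(0, y - 2):y + 3, max(0, x - 2):x + 3]
        unclipped = win[win < clip]
        if unclipped.size == 0:
            continue
        weight = np.median(unclipped) - np.std(win)
        if weight > best_weight:
            best_weight, best = weight, (y, x)
    return best            # None corresponds to the averaging fallback

def restore(plane, pminimum, i, candidate):
    # val = candidate + pminimum[i] - pminimum[candidate_location]
    return plane[candidate] + pminimum[i] - pminimum[candidate]
```

Here `restore` mirrors the formula in step 6; a `None` return from `choose_candidate` corresponds to the "no good candidate" averaging approximation.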

For the segmentation I implemented and tested several approaches, including Felzenszwalb and a watershed algo;
both have problems with identifying the clipped segments in a plane. I ended up with this:

  1. Doing the segmentation in every color plane.
  2. The segmentation algorithm uses a modified floodfill, it also takes care of the surrounding rectangle of every segment
    and marks the segment borders.
  3. After segmentation we check every segment for
    • the segment's best candidate via the weighting function
    • the candidate's location
  4. To combine small segments for a shared candidate we use a morphological closing operation, the radius of that op
    can be chosen interactively between 0 and 10.
  5. To avoid single clipped photosites (often found at smooth transitions from not-clipped to clipped) we perform
    a morphological opening with a very small radius before segmentation.
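
A minimal sketch of the flood-fill segmentation idea described above (again illustrative Python, not the PR's C implementation; the real code also handles the morphological pre/post operations and performance):

```python
from collections import deque
import numpy as np

def segment_clipped(plane, clip):
    """Label 4-connected clipped regions in one color plane.

    Sketch of the modified floodfill: per segment we also track the
    surrounding rectangle (bbox) and mark the unclipped border pixels.
    """
    h, w = plane.shape
    labels = np.zeros((h, w), dtype=int)
    segments = []
    for sy in range(h):
        for sx in range(w):
            if plane[sy, sx] < clip or labels[sy, sx]:
                continue
            seg_id = len(segments) + 1
            labels[sy, sx] = seg_id
            q = deque([(sy, sx)])
            bbox = [sy, sx, sy, sx]        # ymin, xmin, ymax, xmax
            border = set()
            while q:
                y, x = q.popleft()
                bbox = [min(bbox[0], y), min(bbox[1], x),
                        max(bbox[2], y), max(bbox[3], x)]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    if plane[ny, nx] >= clip:
                        if not labels[ny, nx]:
                            labels[ny, nx] = seg_id
                            q.append((ny, nx))
                    else:
                        border.add((ny, nx))   # unclipped neighbour = border
            segments.append({"id": seg_id, "bbox": bbox, "border": border})
    return labels, segments
```

The opening/closing mentioned in steps 4 and 5 would run as erode/dilate passes on the clipped mask before this segmentation and when merging small segments afterwards.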

@TurboGit (Member)

Tested on one picture and I get some magenta cast in the sky that I was not able to remove. Any way around that? Maybe I'm not using this new mode properly?

@jenshannoschwalm (Collaborator, Author)

Could you share the raw? For some images the white point is not perfect; you might try reducing the clip value a bit. The former reconstruct color module took care of this by setting clip to something below the UI value.

Could do that here too.

@TurboGit (Member)

Sure I can share it. This was a "free" picture in a French magazine which talked about RAW software (darktable was one of them), and this picture was used to demo highlight recovery, as the sky has many channels blown out!

Anyway, here is the link:

https://drive.google.com/file/d/1W0N67cVSXeivz-avMiBhB7cY4BKQvpFk/view?usp=sharing

@TurboGit (Member)

Here is another one causing even more trouble:

https://drive.google.com/file/d/1XFKbgAkbLfckY-JZKrn4mP9MsxLTc59c/view?usp=sharing

@jenshannoschwalm (Collaborator, Author)

I can see that too. ATM I find

  1. filmic rgb "preserve chrominance" default fails here for some reason, "no" is better

Will look into it ...

@rawfiner (Collaborator)

filmic rgb "preserve chrominance" default fails here for some reason, "no" is better

The problem is there whatever the filmic preserve chrominance value: see what happens if you disable filmic and reduce exposure to underexpose the image quite a lot: the highlights still have their color cast.
Setting chrominance preservation to "no" just hides the issue because it desaturates the highlights.

@rawfiner (Collaborator)

I get a strange behavior on this image:
[screenshot: P2120065]

raw is here: https://drive.google.com/file/d/1lfVZRgCinKbV0LF8nSfMc6wj3rY0Jo2S/view?usp=sharing

@jenshannoschwalm (Collaborator, Author)

About the images @TurboGit gave: both have very large clipped areas, and the candidate detection via segmentation fails here. BTW the debugging option helps here to detect such issues.

There might be a misunderstanding: the algorithm doesn't try to remove a cast by any means of color->grey correction; it tries to "guess" the sensor data as it should be, estimated from the minimum plane and the candidate.
So: a purple color cast can be understood as "missing blue", as blue is clipped. What we do here is find a better (higher) blue value so the cast is reduced.
In effect the output of the reconstructed bayer data can be up to 2*clip - this might need correction of parameters in filmic.
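
The 2*clip ceiling follows directly from the restoration formula; a tiny worked example (the numbers are made up for illustration):

```python
clip = 1.0
candidate_value = 0.95   # reference value from a non-clipped location
pmin_i = 0.90            # minimum plane at the clipped photosite
pmin_candidate = 0.05    # minimum plane at the candidate's location

# val = candidate + pminimum[i] - pminimum[candidate_location]
val = candidate_value + pmin_i - pmin_candidate
# val comes out near 1.8 here, well above clip, which is why
# filmic's white point may need adjusting afterwards
```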

Removing the remaining color casts is deferred to filmic.

Re my mentioning of the filmic-rgb module setting: yes, I am aware of the algo. I was wondering about the introduced violet cast; tests show it is introduced by a bad candidate.

@jenshannoschwalm (Collaborator, Author)

I get a strange behaviour on this image:

Could not reproduce so far. I guess this depends on the exact clipping setting?

@TurboGit (Member)

A question, is that the very same algorithm that Aurélien is implementing here #10711? If yes, maybe you need to coordinate.

@jenshannoschwalm (Collaborator, Author)

No, this is completely different. Just coincidence we both have been working on the same subject: highlights.

Iain from gmic and I take the path via segmentation and finding a good candidate for a photosite; @aurelienpierre goes via interpixel correlation somehow (if I understand his code correctly). I tried his code too: for some images his algo is better, for some images this PR is certainly better imho.

@jenshannoschwalm (Collaborator, Author)

The segmentation process depends on the specific setting of the clipping threshold. This led to instability.

The latest two commits reduce this problem a lot and also avoid some oversegmentation.

@rawfiner (Collaborator)

Could not reproduce so far. I guess this is depending on the exact clipping setting?

It seems to be fixed by your last commits :-)

@jenshannoschwalm (Collaborator, Author)

Latest commit has removed all "developing internal" stuff while keeping all implemented functionality.

As we only have two UI parameters used in the algo, we can keep the module version the same, which helps testing on master-developed images and the new stuff in other highlights-related PRs.

@TurboGit TurboGit added this to the 4.0 milestone Dec 30, 2021
@jenshannoschwalm (Collaborator, Author)

Latest commit

  • slightly changes calculation of the correction coefficient thus improves green/blue balance
  • moves #defines to a specific file

@jenshannoschwalm jenshannoschwalm changed the title Another highlights revovery algorithm [WIP] Segmentation based highlights recovery for bayer sensors Jan 6, 2022
aurelienpierre added a commit to aurelienpierreeng/ansel that referenced this pull request Jan 7, 2022
@jenshannoschwalm jenshannoschwalm force-pushed the highlights_revovery_pr1 branch from 07f40ca to b17ac9f Compare January 9, 2022 05:46
**Overview**

The new highlight restoration II algorithm only works for standard bayer sensors.
It has been developed in collaboration between Iain from the gmic team and Hanno Schwalm from dt.

The original idea was presented by Iain in pixls.us in: https://discuss.pixls.us/t/highlight-recovery-teaser/17670

and has been extensively discussed over the last months.
Prototyping and testing of ideas was done by Iain using gmic; Hanno did the implementation and integration into dt’s
codebase. No other external modules (like gmic …) are used; the current code has been tuned for performance using omp,
no OpenCL codepath yet.

**Main ideas**

The algorithm follows these basic ideas:
1. We understand the bayer data as superpixels, each having one red, one blue and two green photosites
2. We analyse all data (without wb correction applied) on the channels independently so resulting in 4 color-planes
3. We want to keep details as much as possible; we assume that details are best represented in the color channel having
   the minimum value. So beside the 4 color planes we also have a plane holding the minimum values (pminimum)
4. In all 4 color planes we look for isolated areas being clipped (segments).
   Inside these segments including borders around we look for a candidate to represent the value we take for restoration.
   Choosing the candidate is done at all non-clipped locations of a segment; the best candidate is selected via a weighting
   function - the weight is derived from
   - the local standard deviation in a 5x5 area and
   - the median value of unclipped positions also in a 5x5 area.
   The best candidate points to the location in the color plane holding the reference value.
   If there is no good candidate we use an approximation.
5. We evaluated several ways to further reduce the pre-existing color cast, atm we calc linearly while using a correction
   coeff for every plane.
   We also tried using some gamma correction which helped in some cases but was unstable in others.
6. The restored value at position 'i' is basically calculated as
     val = candidate + pminimum[i] - pminimum[candidate_location];
7. For locations with all planes clipped we might do a synthesis in pminimum, the value for every position is derived
   from the local gradient at the border and basically the distance.
   This code part has been surprisingly difficult to implement (avoiding ridges, good transition, … ),
   the existing code is working ok (with some minor issues) but is rather slow and not perfect.
   It has not been included in the first pr and will be re-evaluated.
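
The synthesis in step 7 could be sketched like this. Note this is a heavily simplified stand-in of my own (plain inverse-square-distance weighting of the border values, without the gradient term the text describes), not the code from this PR:

```python
import numpy as np

def synthesize(pminimum, hole_mask):
    """Fill an all-channels-clipped region of the minimum plane.

    Simplified stand-in: each hole pixel becomes an
    inverse-square-distance weighted average of the unclipped pixels
    bordering the hole (no gradient extrapolation).
    """
    h, w = pminimum.shape
    out = pminimum.copy()
    ys, xs = np.nonzero(hole_mask)
    # collect the unclipped pixels 4-adjacent to the hole
    border = set()
    for y, x in zip(ys, xs):
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not hole_mask[ny, nx]:
                border.add((ny, nx))
    # distance-weighted fill of every hole pixel
    for y, x in zip(ys, xs):
        wsum = vsum = 0.0
        for by, bx in border:
            wgt = 1.0 / ((y - by) ** 2 + (x - bx) ** 2)
            vsum += wgt * pminimum[by, bx]
            wsum += wgt
        out[y, x] = vsum / wsum
    return out
```

This naive version shows why the real problem is hard: a plain weighted average produces flat plateaus and ridges at segment boundaries, exactly the artifacts the PR text mentions having to fight.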

For segmentation I implemented and tested several approaches, including Felzenszwalb and a watershed algo;
both have problems with identifying the clipped segments in a plane. I ended up with this:

1. Doing the segmentation in every color plane.
2. The segmentation algorithm uses a modified floodfill, it also takes care of the surrounding rectangle of every segment
   and marks the segment borders.
3. After segmentation we check every segment for
   - the segment's best candidate via the weighting function
   - the candidate's location
4. To combine small segments for a shared candidate we use a morphological closing operation, the radius of that op
   can be chosen interactively between 0 and 10.

Hanno & Iain 2021/12
- morphological closing operation
- segmentizing via a modified floodfill algo
- highlights synthesis for all-channels-clipped has not been included yet.
  Anyway, this helps only in a few situations, will follow later
- of course the module version changes
- some gtk widgets are implemented (but not visible atm) for debugging. The
  parameters are expected to be stable already.
We sometimes might want:
- a slightly-less-than-1 parameter
- sometimes an even larger one might be better suited
- explicit dilate and erode
- an even smaller radius of 0
Before the segments are combined there is now a preparation step removing single clipped locations.
This helps in two ways:
- fewer segments, resulting in slightly better performance
- the "clipping threshold instability" is greatly reduced.
- allow two float values to be kept here
- due to the initial morphological opening we end up with three possible states for any clipped photosite.
  We take care of situations with good- or bad-weight segments and also of remaining isolated clipped photosites.

- writing the reconstructed data is now done in two steps to allow
  - interchannel correction
  - a gaussian for every clipped photosite when writing back bayer data, to reduce the effect of minimum
    plane noise and overshoots for very small segments or single shots.
- try to get other builds happy too
As #10711 is considered to be stable and ready to merge, this is here to allow testing of this algo without
later history hassle.

Also simpler GUI changes.
@jenshannoschwalm (Collaborator, Author)

Still evaluating color management, so pending and WIP.

@jenshannoschwalm (Collaborator, Author)

There is heavy work on this -- closing this for now until there is something really good.

@jenshannoschwalm jenshannoschwalm deleted the highlights_revovery_pr1 branch April 20, 2024 10:55
Labels: feature: enhancement (current features to improve), scope: image processing (correcting pixels)
4 participants