
equalize_adapthist: More efficient processing scheme with symmetrical padding #4655

Open · wants to merge 3 commits into main
Conversation

@m-albert (Contributor) commented May 7, 2020

Description

Modified the processing scheme used to obtain CLAHE results. With symmetrical padding, the image blocks of size kernel_size can be tiled over the input image shape more efficiently. This reduces overhead and increases processing speed while only slightly changing the results.
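As a sketch of the idea (the helper below is illustrative, not the PR's actual code): pad each axis symmetrically up to the next multiple of the kernel size, so that the padded image tiles exactly into blocks of size kernel_size.

```python
import numpy as np

def pad_to_multiple(image, kernel_size):
    """Pad symmetrically so each axis becomes a multiple of kernel_size.

    Hypothetical helper for illustration only.
    """
    pad_width = []
    for s, k in zip(image.shape, kernel_size):
        extra = int(np.ceil(s / k)) * k - s          # pixels needed on this axis
        pad_width.append((extra // 2, extra - extra // 2))
    return np.pad(image, pad_width, mode='symmetric')

img = np.random.rand(100, 120)
padded = pad_to_multiple(img, (30, 40))
print(padded.shape)  # (120, 120): each axis rounded up to a multiple of its kernel size
```

After this padding step, the image can be reshaped directly into an integer grid of kernel_size blocks, which is what removes the per-block overhead.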

This has previously been discussed with @emmanuelle, @jni and @VincentStimper in the recent rewrite of equalize_adapthist #4598.

In addition to changing the processing scheme, this PR adapts the test values that changed slightly due to the new implementation.

Visual comparison between master and this PR (default arguments to equalize_adapthist):
[image: side-by-side comparison]

Performance comparison in 2D (the improvement grows with kernel size, since the number of blocks to process per dimension is reduced by one):

import numpy as np
from skimage import data, util, exposure

img = util.img_as_float(data.astronaut())[:, :, 0]
img = img.astype(np.float64) / img.max()

kernel_size = 10
%timeit exposure._adapthist.equalize_adapthist_master(img, kernel_size)  # 0.194s
%timeit exposure._adapthist.equalize_adapthist_PR(img, kernel_size)  # 0.191s

kernel_size = 100
%timeit exposure._adapthist.equalize_adapthist_master(img, kernel_size)  # 0.0329s
%timeit exposure._adapthist.equalize_adapthist_PR(img, kernel_size)  # 0.0257s

Checklist

For reviewers

  • Check that the PR title is short, concise, and will make sense 1 year
    later.
  • Check that new functions are imported in corresponding __init__.py.
  • Check that new features, API changes, and deprecations are mentioned in
    doc/release/release_dev.rst.

…ly. This allows a more efficient rearrangement of the image into blocks to be processed. This increases processing speed significantly, especially for large kernel sizes. This change has a tiny effect on the result image.
…troducing a different padding / processing scheme.
@m-albert (Contributor Author) commented May 7, 2020

Here's another visual comparison from the scikit-image examples that @emmanuelle had previously suggested in #4598.

[images: master vs. PR branch]

Another view on the same comparison:
[image]

@@ -147,13 +158,11 @@ def _clahe(image, kernel_size, clip_limit, nbins):

     # calculate graylevel mappings for each contextual region
     # rearrange image into flattened contextual regions
-    ns_hist = [int(s / k) - 1 for s, k in zip(image.shape, kernel_size)]
+    ns_hist = [int(s / k) for s, k in zip(image.shape, kernel_size)]
Member:

why does the -1 go away here?

Contributor Author:

In the new scheme, the histogram needs to be calculated for an additional block along each dimension. See the illustration in my comment below.


     # rearrange image into blocks for vectorized processing
-    ns_proc = [int(s / k) for s, k in zip(image.shape, kernel_size)]
+    ns_proc = [int(s / k) - 1 for s, k in zip(image.shape, kernel_size)]
Member:

could you please explain this change?

Contributor Author:

One block can be spared in the interpolation step, see illustration below.
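To put concrete numbers on the trade-off (a toy example with made-up values, not taken from the PR): for a padded axis of length 600 and kernel size 100, the two schemes count blocks as follows.

```python
s, k = 600, 100  # axis length and kernel size (toy values)

# master: one fewer histogram block, one more interpolation block per dimension
ns_hist_master = int(s / k) - 1   # 5 histogram blocks
ns_proc_master = int(s / k)       # 6 interpolation blocks

# this PR: one more histogram block, one fewer interpolation block per dimension
ns_hist_pr = int(s / k)           # 6 histogram blocks
ns_proc_pr = int(s / k) - 1       # 5 interpolation blocks

print(ns_hist_master, ns_proc_master, ns_hist_pr, ns_proc_pr)  # 5 6 6 5
```

Since interpolation is the more expensive step, trading one extra histogram for one fewer interpolation block per dimension is a net win, increasingly so for large kernel sizes.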

@emmanuelle (Member) commented:

Thanks for the PR @m-albert! I observed in the tests that the PSNR is always a bit lower compared to master; is it just a coincidence? (Can you try with other images?)

@m-albert (Contributor Author) commented Jul 23, 2020

Thanks for your review @emmanuelle!

To clarify the questions you raised while reviewing the code, and to make sure I'm not confusing things, here are illustrations showing the processing schemes before and after.

Master branch:
[image: processing scheme on master]

PR:
[image: processing scheme in this PR]

Hope that clarifies what's happening in my changes: while the histogram needs to be computed on one additional block per dimension, fewer blocks need to be interpolated in the subsequent step.

Regarding your other comment, indeed the PSNR in the test cases goes down. To check whether this is a coincidence, as you suggested, here is a comparison of PSNR values for some more images (the first row corresponds to the test case in test_adapthist_grayscale).

This is the code I used:

import numpy as np
from skimage.exposure.tests.test_exposure import peak_snr
from skimage.exposure import equalize_adapthist
from skimage import data
from skimage import util
from skimage.color import rgb2gray

images = [
    data.astronaut(),
    data.brick(),
    data.camera(),
    data.cell(),
    data.chelsea(),
    data.coffee(),
    data.coins(),
    data.grass(),
    data.gravel(),
    data.hubble_deep_field(),
    data.moon(),
    data.retina(),
]

for iim, im in enumerate(images):
    img = util.img_as_float(im)
    img = rgb2gray(img)
    img = np.dstack((img, img, img))

    adapted = equalize_adapthist(img, kernel_size=(57, 51),
                                 clip_limit=0.01, nbins=128)
    psnr = peak_snr(img, adapted)
    print(iim, psnr)

and these are the numerical results:

| image | PSNR master | PSNR PR |
| --- | --- | --- |
| data.astronaut() | 100.13992149786785 | 99.23760840792427 |
| data.brick() | 79.04685686294852 | 78.41742170744122 |
| data.camera() | 83.22479223896904 | 82.80436192671775 |
| data.cell() | 106.6086435225474 | 106.90880770117212 |
| data.chelsea() | 73.30132992247012 | 74.68105480078299 |
| data.coffee() | 114.53010862761974 | 115.08138018008111 |
| data.coins() | 135.25784943770748 | 134.01159188367382 |
| data.grass() | 109.52261404745748 | 109.83859350917353 |
| data.gravel() | 105.59672858284816 | 105.6596956426365 |
| data.hubble_deep_field() | 139.88559343814984 | 139.41834883686158 |
| data.moon() | 86.17152454514206 | 86.48462462531728 |
| data.retina() | 79.99757208366384 | 79.80162959968398 |

It seems that the PSNR goes up for some images and down for others, so the PR does not appear to introduce a systematic degradation.
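For reference, this is the common definition of PSNR; the test helper `peak_snr` may differ in details such as the assumed data range, so this is only a sketch of the metric being compared.

```python
import numpy as np

def psnr(img1, img2, data_range=1.0):
    """Peak signal-to-noise ratio in dB, standard textbook definition."""
    # mean squared error between the two images
    mse = np.mean((np.asarray(img1, dtype=float) - np.asarray(img2, dtype=float)) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 0.1)
print(psnr(a, b))  # ~20.0, since MSE = 0.01 and 10*log10(1/0.01) = 20
```

Differences on the order of 1 dB, going in both directions across images as in the table above, are consistent with a small local change in the equalization rather than a systematic loss of fidelity.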

Base automatically changed from master to main February 18, 2021 18:23