`equalize_adapthist`: More efficient processing scheme with symmetrical padding (#4655)
base: main
Conversation
…ly. This allows a more efficient rearrangement of the image into blocks to be processed, which speeds up processing significantly, especially for large kernel sizes. This change has a tiny effect on the result image.
…troducing different padding / processing scheme.
…ng / padding scheme.
Here's another visual comparison from the
```diff
@@ -147,13 +158,11 @@ def _clahe(image, kernel_size, clip_limit, nbins):
     # calculate graylevel mappings for each contextual region
     # rearrange image into flattened contextual regions
-    ns_hist = [int(s / k) - 1 for s, k in zip(image.shape, kernel_size)]
+    ns_hist = [int(s / k) for s, k in zip(image.shape, kernel_size)]
```
why does the `-1` go away here?
In the new scheme, the histogram needs to be calculated for an additional block along each dimension. See the illustration in my comment below.
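To illustrate with made-up numbers (not values from the PR): for a hypothetical padded image shape of `(512, 512)` and `kernel_size=(64, 64)`, the old and new expressions give one extra histogram region per dimension:

```python
# Hypothetical shapes, for illustration only.
image_shape = (512, 512)
kernel_size = (64, 64)

# old scheme: one fewer histogram region per dimension
ns_hist_old = [int(s / k) - 1 for s, k in zip(image_shape, kernel_size)]
# new scheme: one histogram per full block along each dimension
ns_hist_new = [int(s / k) for s, k in zip(image_shape, kernel_size)]
print(ns_hist_old, ns_hist_new)  # [7, 7] [8, 8]
```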
```diff
     # rearrange image into blocks for vectorized processing
-    ns_proc = [int(s / k) for s, k in zip(image.shape, kernel_size)]
+    ns_proc = [int(s / k) - 1 for s, k in zip(image.shape, kernel_size)]
```
could you please explain this change?
One block can be spared in the interpolation step, see illustration below.
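Again with made-up numbers for illustration: under the new scheme one fewer block per dimension enters the interpolation step, mirroring the change above:

```python
# Hypothetical shapes, for illustration only.
image_shape = (512, 512)
kernel_size = (64, 64)

# old scheme: interpolate over every block
ns_proc_old = [int(s / k) for s, k in zip(image_shape, kernel_size)]
# new scheme: one block per dimension is spared in the interpolation
ns_proc_new = [int(s / k) - 1 for s, k in zip(image_shape, kernel_size)]
print(ns_proc_old, ns_proc_new)  # [8, 8] [7, 7]
```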
Thanks for the PR @m-albert! I observed in the tests that the PSNR is always a bit lower compared to master; is it just a coincidence? (Can you try with other images?)
Thanks for your review @emmanuelle! To clarify the questions you raised reviewing the code, and to make sure I'm not confusing things, here are illustrations showing the processing schemes before and after. Hope that clarifies what's happening in my changes: while the histogram needs to be computed on an additional block, fewer blocks need to be interpolated in the subsequent step.

Regarding your other comment, indeed the PSNR values in the test cases go down. To check whether this is a coincidence, as you suggested, here's a comparison of PSNR values for some more images (the first row corresponds to the test case in `test_exposure`).

This is the code I used:

```python
import numpy as np
from skimage.exposure.tests.test_exposure import peak_snr
from skimage.exposure import equalize_adapthist
from skimage import data
from skimage import util
from skimage.color import rgb2gray

images = [
    data.astronaut(),
    data.brick(),
    data.camera(),
    data.cell(),
    data.chelsea(),
    data.coffee(),
    data.coins(),
    data.grass(),
    data.gravel(),
    data.hubble_deep_field(),
    data.moon(),
    data.retina(),
]

for iim, im in enumerate(images):
    img = util.img_as_float(im)
    img = rgb2gray(img)
    img = np.dstack((img, img, img))
    adapted = equalize_adapthist(img, kernel_size=(57, 51),
                                 clip_limit=0.01, nbins=128)
    psnr = peak_snr(img, adapted)
    print(iim, psnr)
```

and these are the numerical results:
It seems that PSNR randomly goes up and down for the different images. So hopefully the PR doesn't introduce meaningful changes/problems.
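For context, peak signal-to-noise ratio can be sketched as follows. This is a minimal stand-alone version of the metric; it may differ in detail from the `peak_snr` helper in skimage's test suite, which is what the script above actually uses:

```python
import numpy as np

def psnr(img_true, img_test):
    # Minimal peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    # A sketch; not necessarily identical to skimage's internal `peak_snr`.
    img_true = np.asarray(img_true, dtype=float)
    img_test = np.asarray(img_test, dtype=float)
    mse = np.mean((img_true - img_test) ** 2)
    return 10 * np.log10(img_true.max() ** 2 / mse)

# Identical images would give infinite PSNR; a constant offset of 0.1 on a
# unit-peak image gives MSE = 0.01 and hence about 20 dB.
print(psnr(np.ones((4, 4)), np.ones((4, 4)) - 0.1))  # ~20.0
```

Higher values mean the two images are closer; small random fluctuations up and down across images, as in the table above, are what one would expect from a benign implementation change.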
Description

Modified the processing scheme for obtaining CLAHE results. By performing symmetrical padding, the image blocks of size `kernel_size` to be processed can be overlapped more efficiently with the input image shape. This reduces overhead and increases processing speed while only slightly changing the results. This has previously been discussed with @emmanuelle, @jni and @VincentStimper in the recent rewrite of `equalize_adapthist` (#4598).

In addition to changing the processing scheme, this PR adapts the test values that changed slightly due to the introduced implementation change.

Visual comparison between master and this PR (default arguments to `equalize_adapthist`):

Performance comparison in 2D (improvement for large kernel sizes, as the number of blocks to process per dimension is reduced by one):
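The symmetric padding and block rearrangement described above can be sketched roughly as follows. This is a simplified illustration with a hypothetical helper (`pad_to_blocks`) and made-up shapes, not the actual `_clahe` implementation:

```python
import numpy as np

def pad_to_blocks(image, kernel_size):
    # Symmetrically pad each dimension up to a multiple of kernel_size,
    # then view the padded 2D image as a grid of kernel-sized blocks.
    pads = []
    for s, k in zip(image.shape, kernel_size):
        extra = (-s) % k  # padding needed to reach a multiple of k
        pads.append((extra // 2, extra - extra // 2))
    padded = np.pad(image, pads, mode='reflect')

    # rearrange: (n0 * k0, n1 * k1) -> (n0, n1, k0, k1)
    n0 = padded.shape[0] // kernel_size[0]
    n1 = padded.shape[1] // kernel_size[1]
    blocks = padded.reshape(n0, kernel_size[0], n1, kernel_size[1])
    blocks = blocks.transpose(0, 2, 1, 3)
    return padded, blocks

img = np.arange(100.0).reshape(10, 10)
padded, blocks = pad_to_blocks(img, (8, 8))
print(padded.shape)   # (16, 16)
print(blocks.shape)   # (2, 2, 8, 8)
```

Because the padded shape is an exact multiple of `kernel_size`, the blocks can be processed in a single vectorized pass instead of handling partial edge blocks separately, which is where the speedup for large kernel sizes comes from.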
Checklist
- Gallery example in ./doc/examples (new features only)
- Benchmark in ./benchmarks, if your changes aren't covered by an existing benchmark