
non_local_means : patch size argument for local mean and variance #1245

Closed — wants to merge 3 commits

Conversation

@riddhishb (Contributor) commented May 14, 2017

Added an argument that lets the user change the patch size used to compute the local mean and local variance in non_local_means.
Addresses the issue brought up in #1178.

@arokem (Member) left a comment

Looks good to me. Could you please add a test that varies lmean_radius?



 @cython.boundscheck(False)
 @cython.wraparound(False)
 @cython.cdivision(True)
-cdef double _local_variance(double[:, :, :] ima, double mean, int x, int y, int z) nogil:
+cdef double _local_variance(double[:, :, :] ima, double mean, int x, int y, int z, int p) nogil:

@arokem (Member) commented May 14, 2017

This is going over 80 characters. Please break it into two lines by adding a line break somewhere.

@riddhishb (Author, Contributor) commented May 14, 2017

cool!

-            for pz in range(z - 1, z + 2):
+    for px in range(x - p, x + p + 1):
+        for py in range(y - p, y + p + 1):
+            for pz in range(z - p, z + p + 1):
                 if ((px >= 0 and py >= 0 and pz > 0) and
                         (px < dims0 and py < dims1 and pz < dims2)):
                     ss += (ima[px, py, pz] - mean) * (ima[px, py, pz] - mean)

@arokem (Member) commented May 14, 2017

Doesn't variance have to be normalized by the volume?

@riddhishb (Author, Contributor) commented May 14, 2017

The variance is normalized by the count variable, which is essentially the volume; that's just a quirk of how the code is written.
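To make the normalization discussion concrete, here is a plain-Python sketch (not the project's actual Cython code) of what the loop quoted above computes: squared deviations from `mean` are summed over a cube of radius `p` around `(x, y, z)`, only in-bounds voxels are counted, and the sum is divided by that count, i.e. the patch volume. Note the quoted diff checks `pz > 0`, which reads like an off-by-one; the sketch uses `pz >= 0` for all three axes.

```python
import numpy as np

def local_variance(ima, mean, x, y, z, p):
    # Sum squared deviations from `mean` over the (2p+1)^3 cube
    # centred at (x, y, z), skipping out-of-bounds voxels.
    dims0, dims1, dims2 = ima.shape
    ss = 0.0
    count = 0
    for px in range(x - p, x + p + 1):
        for py in range(y - p, y + p + 1):
            for pz in range(z - p, z + p + 1):
                if (0 <= px < dims0 and 0 <= py < dims1
                        and 0 <= pz < dims2):
                    ss += (ima[px, py, pz] - mean) ** 2
                    count += 1
    # Dividing by `count` is the volume normalization being asked about.
    return ss / count
```

When the cube lies fully inside the volume and `mean` is the patch mean, this matches `numpy`'s population variance of the patch.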

@riddhishb (Contributor, Author) commented May 14, 2017

@arokem That's a good idea, let me add a test.

@coveralls commented May 14, 2017

Coverage Status: coverage increased (+0.004%) to 85.61% when pulling 9ba5451 on riddhishb:patch-fix-nlmeans into 3f3e2f1 on nipy:master.


@codecov-io commented May 15, 2017

Codecov Report

Merging #1245 into master will increase coverage by <.01%. The diff coverage is 100%.

@@            Coverage Diff             @@
##           master    #1245      +/-   ##
==========================================
+ Coverage   87.04%   87.05%   +<.01%     
==========================================
  Files         225      225              
  Lines       28109    28116       +7     
  Branches     3014     3014              
==========================================
+ Hits        24468    24475       +7     
  Misses       2968     2968              
  Partials      673      673
Impacted Files                                 Coverage Δ
dipy/denoise/non_local_means.py                100% <ø> (ø) ⬆️
dipy/denoise/tests/test_non_local_means.py     97.05% <100%> (+0.33%) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 3f3e2f1...9ba5451.

@@ -346,7 +349,8 @@ cpdef upfir(double[:, :] image, double[:] h):
 @cython.boundscheck(False)
 @cython.wraparound(False)
 @cython.cdivision(True)
-def nlmeans_block(double[:, :, :]image, double[:, :, :] mask, int patch_radius, int block_radius, double h, int rician):
+def nlmeans_block(double[:, :, :] image, double[:, :, :] mask, int patch_radius,

@Garyfallidis (Member) commented Jun 2, 2017

indentation issue
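A hypothetical pure-Python stand-in for the signature above (the real one is Cython), sketching the kind of line break the reviewers are asking for: the continuation arguments aligned under the opening parenthesis so every line stays under 80 characters, with consistent indentation.

```python
# Hypothetical stand-in: same parameter names as the def above, but
# plain Python; continuation line aligned under the opening parenthesis.
def nlmeans_block(image, mask, patch_radius, block_radius,
                  h, rician):
    # Body elided in this sketch; just echo the parameters.
    return (patch_radius, block_radius, h, rician)
```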

@ssheybani (Contributor) commented Jun 2, 2017

I'm comparing your nlmeans() and non_local_means(), but I'm not sure how we can use the sigma given by the estimate_sigma function in noise_estimate.py to test your non_local_means(): non_local_means() expects a single float for sigma, while estimate_sigma() returns an array (with as many elements as there are gradient directions).
What I've done so far is give both nlmeans and non_local_means the average of the sigmas. Here's the result:
[image: nlm olv vs new]

I also tried sigma=5 for both. Here is the result:
[image: nlm olv vs new sigma5]

In both cases, it looks like non_local_means is oversmoothing.
By the way, regarding execution time, I observed that for small datasets (small portions of complete datasets) non_local_means is faster, but once the dataset is large enough, non_local_means is about twice as slow as nlmeans.

This is the code that I used:
https://gist.github.com/ssheybani/ef61d1a2032989d0b69e720568bc94ad
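The averaging workaround described above can be sketched as follows; the sigma values here are made up for illustration, and collapsing with a plain mean is just one choice (the linked gist may do it differently).

```python
import numpy as np

# estimate_sigma returns one noise estimate per gradient direction,
# while non_local_means expects a single float, so the per-direction
# estimates are collapsed with a mean (values below are made up).
sigma_per_direction = np.array([4.8, 5.1, 5.3, 4.9])
sigma_scalar = float(np.mean(sigma_per_direction))
```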

@samuelstjean (Contributor) commented Jun 19, 2017

While we are at it, the other half of the issue (and the most important part, I'd say) is that the outer loop iterates over j, i, k but passes the indexes to the inner function in i, j, k order, which effectively swaps the x and y coordinates in your local block.

Since a small region is mostly homogeneous, I guess that's why you don't see the problem much, but it could explain the problem reported above: the weights will be inflated, and less gets compared because of that.
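The coordinate swap described above can be demonstrated with a minimal numpy sketch (not the project's code): iterating over (j, i, k) in the outer loop while indexing the volume as (i, j, k) reads the data with its x and y axes exchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
vol = rng.random((3, 3, 3))

# Outer loop runs over (j, i, k), but the volume is indexed (i, j, k),
# mimicking the index-order mismatch described in the comment above.
swapped = np.empty_like(vol)
for j in range(3):
    for i in range(3):
        for k in range(3):
            swapped[j, i, k] = vol[i, j, k]  # indexes in the wrong order
```

The result is the original volume with its first two axes transposed, so the "local block" the inner function sees is not the block the outer loop intended.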

@arokem (Member) commented Mar 15, 2019

Closing. @skoudoro is going to follow up on this. We are looking to streamline things quite a bit before our 1.0.

arokem closed this on Mar 15, 2019
