nlmeans problem #1083

Closed
mdesco opened this Issue Jun 22, 2016 · 34 comments

@mdesco
Contributor

mdesco commented Jun 22, 2016

The nlmeans implementation in 0.12dev is considerably different from previous releases. As a result, our in-house scripts in my lab and at the hospital that use the current 0.12dev nlmeans give broken, unsatisfactory results: nlmeans is currently over-blurring heavily.

Attached is a figure comparing the 0.11 nlmeans result versus the 0.12dev result with the same sigma value.
[Figure: nlmeans 0.11 vs 0.12dev output with identical sigma]

I have a very hard time reading the code (https://github.com/nipy/dipy/blob/master/dipy/denoise/denspeed.pyx), but here are some potential explanations:

  • there is a new search region size, but the denominator in which the sigma value appears may not have been updated accordingly
  • there is something fishy going on with the sigma

I see this comes from PR #1018 and this issue: #1011

One has to be very careful changing the default behaviour of patch size and block size. The previous nlmeans version really mimicked the behaviour of the C++ and Matlab implementations of its inventors, Pierrick Coupé and Nicolas Wiest-Daesslé, which go back to our MICCAI papers from 2007 and 2008.
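
For reference, the weighting we expect from a Coupé-style nlmeans looks roughly like the sketch below (plain NumPy with illustrative names, not the denspeed.pyx internals; the exact constant tying the denominator to sigma and the patch size is precisely what seems to have changed):

import numpy as np

def nlm_weight(patch_ref, patch_cand, sigma, beta=1.0):
    # Squared patch distance, normalized by the number of voxels in the patch,
    # so that changing the patch radius alone should not change the amount of
    # smoothing for a given sigma.
    n_vox = patch_ref.size
    dist2 = np.sum((patch_ref - patch_cand) ** 2) / n_vox
    # Smoothing parameter h^2 tied to the (known or estimated) noise sigma;
    # beta is a tuning constant.
    h2 = 2.0 * beta * sigma ** 2
    return np.exp(-dist2 / h2)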

@Garyfallidis


Member

Garyfallidis commented Jun 22, 2016

Hi @mdesco, and thank you for the feedback. I think that most likely nothing is broken. What happened is that Riddhish (@riddhishb) improved our current implementation, and we now also have adaptive nlmeans, which is even better than standard nlmeans in many ways. Riddhish comes from a lab with expertise in denoising, and his supervisor was part of the MNI denoising team back in the day when these methods were created (before 2008).

This is probably again an issue of normalization/sigma estimation, which is a problem with recent papers that did not follow the conventions of the earlier non-local denoising papers, and probably not a problem that Riddhish introduced.

I will assign this issue to Riddhish, as he is our main denoising expert right now, and hopefully you can both work out the correct normalization/sigma or whatever else the issue may be.

I think the addition of the blockwise version does not affect the patch-based version.

Finally, it makes sense that you cannot immediately figure out the code, as it has been heavily optimized for speed. I hope this helps. @riddhishb, because this affects the behavior of what is in master, we will need to prioritize a fix or explain the change to @mdesco, and of course document the change better. Thank you both.

@riddhishb


Contributor

riddhishb commented Jun 22, 2016

@mdesco Thanks for pointing this out, sir. Let me outline what happened: the previous dipy versions had an nlmeans implementation following Coupé's method, but it performed only voxelwise averaging instead of the more robust blockwise approach. That implementation also had a bug (#1011), fixed in #1018.

The bug fix kept the voxelwise averaging, however, so I am currently working on PR #1066, which introduces two new things:

  1. A keyword option for the nlmeans implementation that lets us choose between the current voxelwise implementation and the blockwise implementation.
  2. An adaptive denoising method.

For a comparison between the voxelwise and blockwise implementations, have a look at the following figure (the blockwise result is quite good and preserves sharpness well):

[Figure: voxelwise vs blockwise nlmeans output comparison]

We are working on getting this PR merged very soon, within the coming week or so. Please let me know if you have any more comments.
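
To make the distinction concrete, here is a toy 1-D sketch (my own illustrative code, not the dipy internals): the voxelwise variant keeps only the centre sample of each restored patch, while the blockwise variant writes back whole restored patches and averages the overlapping estimates.

import numpy as np

def toy_nlm_1d(x, sigma, patch_radius=1, blockwise=False):
    # Toy 1-D non-local means; every patch is compared against every other
    # patch (no search window, for brevity).
    n, p = len(x), patch_radius
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(p, n - p):
        ref = x[i - p:i + p + 1]
        num = np.zeros(2 * p + 1)
        wsum = 0.0
        for j in range(p, n - p):
            cand = x[j - p:j + p + 1]
            w = np.exp(-np.sum((ref - cand) ** 2) / ((2 * p + 1) * sigma ** 2))
            num += w * cand
            wsum += w
        est = num / wsum                      # restored patch around sample i
        if blockwise:
            out[i - p:i + p + 1] += est       # accumulate the whole patch
            counts[i - p:i + p + 1] += 1
        else:
            out[i] += est[p]                  # keep only the centre sample
            counts[i] += 1
    counts[counts == 0] = 1                   # untouched border samples stay 0
    return out / counts

rng = np.random.RandomState(0)
signal = np.repeat([0.0, 10.0, 0.0], 30)
noisy = signal + rng.normal(0.0, 1.0, signal.size)
den_voxelwise = toy_nlm_1d(noisy, sigma=1.0)
den_blockwise = toy_nlm_1d(noisy, sigma=1.0, blockwise=True)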

@Garyfallidis


Member

Garyfallidis commented Jun 22, 2016

Hi @riddhishb, what @mdesco is telling you is that when he calls the command before and after the change in master, the result is different (it looks oversmoothed). Will this be fixed after merging #1066? In other words, what do you think introduced the oversmoothing? Thanks in advance.

@riddhishb


Contributor

riddhishb commented Jun 22, 2016

@Garyfallidis I will look at the code to figure out exactly what introduces the oversmoothing. The change comes from the fix of bug #1011 in #1018. And no, with #1066 the oversmoothing in the voxelwise implementation (denspeed.pyx) will still be there, but we will have two nlmeans implementations, and to get a sharper output one can use the blockwise one.

Also, the voxelwise implementation exists for speed reasons; the actual Coupé implementation follows the blockwise approach being introduced in #1066.

@Garyfallidis


Member

Garyfallidis commented Jun 22, 2016

Okay, thanks for the feedback. I know you have done an excellent job with your denoising work. We just need to bring everyone up to speed and explain whether they need to use different parameters than before. Make sure you can explain what caused the change and what we need to update in our tutorial and in the way we feed sigma to nlmeans. Keep it up @riddhishb!

@mdesco


Contributor

mdesco commented Jun 22, 2016

@riddhishb, I would recommend taking a perfect 3D image I, creating I' by adding Gaussian noise with a known sigma, and then denoising with nlmeans 0.12dev versus 0.11.

On my side, doing this with the phantomas b=0 image, the 0.12dev version is over-smoothing while the 0.11 version is fine.

[Figure: phantomas b=0 image denoised with 0.11 vs 0.12dev]
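
Something along these lines (a rough sketch with a synthetic volume standing in for the phantomas data; double-check the nlmeans keyword names against your dipy version):

import numpy as np
from dipy.denoise.nlmeans import nlmeans

# Stand-in for a "perfect" image I; in practice load the phantomas b=0 volume.
clean = np.zeros((50, 50, 50), dtype=np.float64)
clean[15:35, 15:35, 15:35] = 200.0

sigma = 15.0                                            # known noise level
rng = np.random.RandomState(42)
noisy = clean + rng.normal(0.0, sigma, clean.shape)     # this is I'

# rician=False because we added Gaussian noise above.
den = nlmeans(noisy, sigma=sigma, patch_radius=1, block_radius=5, rician=False)

rmse = np.sqrt(np.mean((den - clean) ** 2))
print("RMSE against the clean reference:", rmse)

Running this against both 0.11 and 0.12dev with the same sigma should make the difference obvious without any visual judgement.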

@Garyfallidis


Member

Garyfallidis commented Jun 22, 2016

@mdesco, can you send this image (dataset) to @riddhishb? And if you use a smaller sigma, do you get something closer to the result on the right?

@riddhishb


Contributor

riddhishb commented Jun 22, 2016

@Garyfallidis Thanks :). I agree, we do have to look at what sigma gives the best result in the voxelwise implementation. I will look into it.

@mdesco Yes, I will try that. I will also have to look more closely at the PR I put out (#1018); as far as I know I followed the algorithm correctly and fixed #1011, but maybe something is missing. Thank you :)

@riddhishb


Contributor

riddhishb commented Jun 22, 2016

@Garyfallidis One suggestion: the current denspeed implementation uses voxelwise averaging, which is not as good a method as blockwise averaging (as can be seen from the results above), and the only reason we kept it was speed, i.e. it was much faster than the blockwise approach.

But now, after the last optimization of nlmeans_block, the timings are:

  • patch_radius = 1, block_radius = 1: voxelwise 0.84 s (denspeed.pyx), blockwise 0.69 s (nlmeans_block.pyx)
  • patch_radius = 2, block_radius = 1: voxelwise 3.02 s, blockwise 1.41 s
  • patch_radius = 1, block_radius = 2: voxelwise 2.46 s, blockwise 1.37 s

So my point is that the reason for keeping the voxelwise average no longer applies, and we could remove it from nlmeans entirely, which would also get rid of the oversmoothing issue. Plus, the blockwise method is mathematically more robust, and the code will cause much less confusion.

The only remaining advantage of denspeed is that it is parallelised!
Do let me know what you think.

@Garyfallidis


Member

Garyfallidis commented Jun 22, 2016

I am not satisfied with this answer, @riddhishb, because what is urgent now is to find what creates the oversmoothing in the voxelwise average. Voxelwise (patch-based) averaging was used before, right? And as @mdesco showed you, there was no oversmoothing then. I am online if you want to look at this together.

@riddhishb


Contributor

riddhishb commented Jun 22, 2016

@Garyfallidis OK, I am looking at the changes I made to denspeed.pyx in #1018.
Voxelwise averaging was used before, but it was incorrectly implemented. It would be better to look at the changes directly; let me put them here.

@Garyfallidis


Member

Garyfallidis commented Jun 22, 2016

Make sure that you explain in your response why it was incorrectly implemented.

@omarocegueda


Contributor

omarocegueda commented Jun 22, 2016

Hello everyone, =)
I was reading this thread, and there is something I am really worried about when I read assertions such as "being more robust", "having better results", or "mathematically more robust" based only on visual assessments. Please take into account that denoising in this context (brain MRI) is very different from computational photography: our objective is not to obtain images that "look nicer"; the objective is to effectively remove the actual noise from the images while introducing the least bias possible. From my limited experience in this field, it is extremely easy to fall into the trap of thinking that we got "better" denoising results just because we see edge-preserving properties or feature enhancement (which are good indicators in photography), only to realize that the resulting images contain important biases that affect further processing (FA, MD, tractography, etc.). So, before making such assertions, it is necessary to perform extensive studies to understand the effect of the denoising algorithm on the actual application we have in mind. I have seen this effort in Coupé's and Manjón's papers, demonstrating that the effect is acceptable, so reproducing their results very accurately is important (so we do not need to redo their evaluation). If our results differ substantially from theirs, then we need to perform the same evaluations to make sure the results are actually better; a visual assessment alone is not enough.


@Garyfallidis


Member

Garyfallidis commented Jun 22, 2016

👍

@riddhishb


Contributor

riddhishb commented Jun 22, 2016

@omarocegueda Fair point! Such assertions were half-baked; sorry for that, I will analyze the code more. @Garyfallidis I will let you know what might be causing the oversmoothing.

These were the changes made:
https://github.com/nipy/dipy/pull/1018/files

@riddhishb


Contributor

riddhishb commented Jun 22, 2016

@Garyfallidis @omarocegueda I referenced this issue in a line note in the code:
https://github.com/nipy/dipy/pull/1018/files

@grlee77


Contributor

grlee77 commented Jun 22, 2016

@riddhishb
I haven't checked the papers lately, but I have spent quite a bit of time on these algorithms in the past. You seem to have swapped the order of the loops over blocks vs. patches in #1018, which I think is a mistake. I added some comments inline over there.

@riddhishb


Contributor

riddhishb commented Jun 22, 2016

@grlee77 Correct! But I have swapped them consistently, so it just becomes a naming inconsistency, which I will fix; it still won't affect the oversmoothing, though.

@riddhishb


Contributor

riddhishb commented Jun 22, 2016

@grlee77 So I believe we are still at square one :(

@riddhishb


Contributor

riddhishb commented Jun 22, 2016

@grlee77 I updated the code to this; do check whether it has the changes you were referring to:
https://github.com/riddhishb/RandomStuff/blob/master/denspeed.pyx

The output is still oversmoothed! What the change essentially did was swap the roles of block and patch.

@mdesco mdesco closed this Jun 22, 2016

@mdesco mdesco reopened this Jun 22, 2016

@grlee77


Contributor

grlee77 commented Jun 22, 2016

That looks like what I suggested for the loops. The normalization is also different, though. If you also restore the old setting of:

denom = sqrt(2) * (sigm / patch_vol_size)**2
w = exp(-(summ / patch_vol_size) / denom)

I suspect things would become sharper again (although I am not sure why that sqrt(2) was there in the first place).

@riddhishb


Contributor

riddhishb commented Jun 23, 2016

@grlee77 This change alone, even without the other changes you pointed out, was causing the oversmoothing.
Following the paper (http://www.ncbi.nlm.nih.gov/pubmed/18390341), I changed only the normalization to

denom = (sigm / patch_vol_size)**2
w = exp(-summ / denom)

and this starts giving sharper results again, as follows (the voxelwise output):

[Figure: voxelwise output with the corrected normalization]

@Garyfallidis @omarocegueda @mdesco, do have a look at this and suggest any further changes. I will put these changes in #1066.
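
As a quick sanity check, one can plug the same patch distances into both quoted weightings and compare how fast they decay; a faster decay means dissimilar patches get less weight, i.e. less averaging and a sharper result. This is a toy comparison of the formulas exactly as written above, not of denspeed.pyx itself, so the values of summ and sigm are purely illustrative:

import numpy as np

summ = np.linspace(0.0, 10.0, 5)   # summed squared patch differences (toy values)
sigm = 5.0                         # sigma as fed to the formulas above
patch_vol_size = 27                # 3x3x3 patch

# pre-#1018 weighting quoted by grlee77
denom_old = np.sqrt(2) * (sigm / patch_vol_size) ** 2
w_old = np.exp(-(summ / patch_vol_size) / denom_old)

# normalization proposed above, following Coupe et al. 2008
denom_new = (sigm / patch_vol_size) ** 2
w_new = np.exp(-summ / denom_new)

for s, wo, wn in zip(summ, w_old, w_new):
    print("summ=%5.1f  w_old=%.3e  w_new=%.3e" % (s, wo, wn))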

@mdesco


Contributor

mdesco commented Jun 23, 2016

To me (and clearly to others), there is an issue here. At this point, my suggestion would be the following:
i- roll back to version 0.11
ii- make sure that the improvements in speed and the voxelwise/blockwise options are done in separate PRs, making sure nothing gets broken in master

As suggested by @omarocegueda, evaluation of denoising in DWI is difficult, and I think we did a pretty good job of it in St-Jean et al., Medical Image Analysis 2016 (http://www.sciencedirect.com/science/article/pii/S1361841516000335).

You should:

  1. Grab the DWIs from the ISBI 2013 challenge data (maybe start with hardi-scheme_SNR30). It is a simplified phantom with very "easy" noise, i.e. the same sigma everywhere in the 4D dataset.
    http://hardi.epfl.ch/static/events/2013_ISBI/training_data.html
    (maybe these datasets, or the ones used in the NLSAM paper, also available, could be added to the dipy test routines)
  2. The sigma is known, so you can feed it in directly.
  3. Run nlmeans 0.11 and your 0.12dev or improved version.
  4. Look at the results.
  5. Quantify PSNR, MSSIM, FA, MD, ODF orientations, etc. (a rough PSNR sketch follows this list).
  6. You will be convinced that your current implementation is over-blurring.
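
A minimal sketch for the PSNR part of step 5 (it only compares a denoised volume against the noise-free reference; MSSIM, FA, MD and ODF orientations need their own tooling):

import numpy as np

def psnr(reference, estimate):
    # Peak signal-to-noise ratio (dB) of a denoised volume against the
    # noise-free reference.
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return np.inf
    peak = float(reference.max())
    return 10.0 * np.log10(peak ** 2 / mse)

# Usage sketch (the array names are placeholders):
# print("0.11    :", psnr(ground_truth, denoised_011))
# print("0.12dev :", psnr(ground_truth, denoised_012dev))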
@riddhishb


Contributor

riddhishb commented Jun 23, 2016

@mdesco All right, I will do that! :) Let me do it today and post the results here.

@samuelstjean


Contributor

samuelstjean commented Jun 23, 2016

To make doubly sure, and because Maxime did not mention it explicitly: we actually used the ground-truth value of sigma that was used to generate the noise, on two different phantoms, and still saw the issue, hence this post. The real value for the ISBI phantom is 1072; that is the noise standard deviation used in the SNR (computed as SNR = b0/sigma, and the b0 is constant for this one).

I also have a bunch of different SNRs for a fancier version of the phantom online on my repo, but this version will be enough to track down the problem for now.


@riddhishb


Contributor

riddhishb commented Jun 23, 2016

@samuelstjean So the sigma is 1072?

@samuelstjean


Contributor

samuelstjean commented Jun 23, 2016

For the last case Maxime posted, with the original ISBI 2013 HARDI phantom, yes, that is the real value of the noise standard deviation (which we fed manually to the algorithm).


@riddhishb


Contributor

riddhishb commented Jun 23, 2016

@samuelstjean ok :) That helps!

@riddhishb riddhishb closed this Jun 23, 2016

@riddhishb riddhishb reopened this Jun 23, 2016

@riddhishb


Contributor

riddhishb commented Jun 23, 2016

I did not quantify everything, but the following figure and the PSNR give an indication of how the proposed changes affect the oversmoothing in 0.12dev.
This was done with patch_size = 3, block_size = 3, over one direction.

[Figure: PSNR comparison before and after the proposed normalization change]

@samuelstjean


Contributor

samuelstjean commented Jun 23, 2016

I guess we are not using the same definitions; how can you have patch size < block size? It should always be the opposite, as the patch runs inside the block, and equality, as you pointed out, gives the identity since there is nothing different to compare.


@riddhishb


Contributor

riddhishb commented Jun 23, 2016

@samuelstjean I am sorry, that was patch_size = block_size = 1. Also, as I understand it, the patch does not have to lie entirely inside the block: the central voxels of the patches will be inside the block, but the rest of each patch can extend outside it, as shown in the following figure taken from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4359947

[Figure: patch vs. block geometry from the paper linked above]

So essentially that is what is different: in the 0.11 code the patches are strictly inside the block, and that was causing the identity problem.
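
Put differently, under this definition the patch centres lie within block_radius of the voxel being denoised, and each patch extends another patch_radius beyond its centre, so the farthest voxel that can contribute sits at block_radius + patch_radius. A tiny check of that extent (my own illustrative code, not dipy's):

def search_extent(block_radius, patch_radius):
    # Farthest voxel (along one axis) that can influence the centre voxel
    # when patch centres are restricted to the block but the patches
    # themselves may extend outside it.
    return block_radius + patch_radius

print(search_extent(1, 1))   # 2: patch voxels can lie outside the block
print(search_extent(2, 1))   # 3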

@samuelstjean


Contributor

samuelstjean commented Jun 23, 2016

So isn't that definition only valid in the blockwise version? According to section 2.1 of http://scil.dinf.usherbrooke.ca/wp-content/papers/descoteaux-etal-miccai08.pdf, they use a different definition.


@riddhishb


Contributor

riddhishb commented Jun 23, 2016

When I read the paper (http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4359947), I understood that this definition applies to both methods, but yes, the MICCAI 2008 paper uses a different one. I think there is some discrepancy between the definitions; let's wait for others to comment on which one we should use.
This is a bit frustrating! :D

@arokem


Member

arokem commented Sep 16, 2016

Looks like this was resolved in #1088, but feel free to reopen if this is still an issue.

@arokem arokem closed this Sep 16, 2016
