
Contextual Enhancement update: fix SNR issue, fix reference #951

Merged
merged 12 commits, Apr 1, 2016

Conversation

@stephanmeesters
Contributor

stephanmeesters commented Mar 1, 2016

This update fixes the SNR issue in the demo which was wrongly calculated. The SNR is now based on the b=0 volume instead of the entire diffusion volume. The SNR after applying noise should now be around 4.5.
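For illustration, the corrected computation can be sketched stand-alone like this (the array names, shape, and noise level here are hypothetical stand-ins, not the demo's actual variables):

```python
import numpy as np

# Hypothetical sketch: estimate SNR from the b=0 volume only, rather than
# from the full diffusion-weighted series. All names and values are stand-ins.
rng = np.random.default_rng(0)
b0_volume = rng.normal(100.0, 1.0, size=(10, 10, 10))  # stand-in b=0 data
noise_sigma = 22.0                                      # stand-in noise level
snr = b0_volume.mean() / noise_sigma                    # ~4.5, as in the demo
```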

One reference at the bottom of the demo contained errors and has been corrected.

.. [DuitsAndFranken_JMIV] R. Duits and E. Franken (2011) Left-invariant diffusions
   Linear Scale Spaces for Fiber Enhancement in DWI-MRI.
   J Math Imaging Vis, 46(3):326-368.


@arokem

arokem Mar 2, 2016

Member

Different reference for the same statement?

You might want to change "DuitsAndFranken_JMIV" to reflect this change.

@stephanmeesters


Contributor

stephanmeesters commented Mar 2, 2016

Thanks @arokem I've corrected it to [DuitsAndFranken_IJCV].

@arokem


Member

arokem commented Mar 2, 2016

Great. Thanks! This is ready to go, from my POV. If someone else wants to take a look, please do so in the next couple of days.

@arokem


Member

arokem commented Mar 5, 2016

I just ran this example on my machine with a newer version of numpy, and I am getting the following error:

Traceback (most recent call last):
  File "contextual_enhancement.py", line 149, in <module>
    sh_order=8, test_mode=True)
  File "dipy/denoise/shift_twist_convolution.pyx", line 68, in dipy.denoise.shift_twist_convolution.convolve (dipy/denoise/shift_twist_convolution.c:2147)
TypeError: 'numpy.float64' object cannot be interpreted as an index

What's even more worrisome is that this error does not arise when running the tests (meaning the tests are missing some realistic failure mode). Could you please take a look?

@samuelstjean


Contributor

samuelstjean commented Mar 7, 2016

Newer numpy doesn't accept a float index, even if the value is 1.0, so most of the time you can just recast it as an integer.
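A minimal reproduction of that behavior, independent of dipy (np.zeros is just a stand-in for any call that expects an integral size or index):

```python
import numpy as np

n = np.float64(3.0)   # a float value, even though it is a whole number
try:
    np.zeros(n)       # newer numpy: TypeError, a float can't be used as a size
    raised = False
except TypeError:
    raised = True

arr = np.zeros(int(n))  # recasting to int restores the old behavior
```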

@samuelstjean


Contributor

samuelstjean commented Mar 7, 2016

Oh, and also: according to the top-right picture there seems to be a spurious ninja-star fODF. You might want to tone down the noise a bit (or cheat and crop it out, but that seems dishonest even though I mention it).

@stephanmeesters


Contributor

stephanmeesters commented Mar 7, 2016

@arokem Unfortunately I can't reproduce the problem on my MacBook. I've upgraded Numpy to 1.10.4, used build_ext and ran the demo without any errors.

Looking at line 68 of shift_twist_convolution.pyx the problem appears to be inside np.amax?
output_norm = output * np.amax(odfs_dsf)/np.amax(output)

I can test this tomorrow on a Windows computer.

@samuelstjean Changing the seed to np.random.seed(2) appears to get rid of the ninja star. I guess for the demonstration this will do. Alternatively I could reduce the noise a bit (a lot of noise is added as-is, but that does make the demo more compelling).
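To illustrate the seed/noise interaction, the noise step can be sketched stand-alone (the signal shape and SNR here are hypothetical, not the demo's actual code):

```python
import numpy as np

# Hypothetical sketch: add Rician noise at a chosen SNR with a fixed seed,
# so that spurious glyphs ("ninja stars") are at least reproducible.
np.random.seed(2)                      # the seed that avoided the ninja star
signal = np.ones(1000)                 # stand-in noiseless signal
sigma = 1.0 / 4.5                      # noise level for SNR ~= 4.5
noisy = np.sqrt((signal + sigma * np.random.randn(1000)) ** 2 +
                (sigma * np.random.randn(1000)) ** 2)
```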

@samuelstjean


Contributor

samuelstjean commented Mar 7, 2016

Well, I just checked the data, and you might be pushing it a bit too much. Sure, it does work and show your point, but that's not the most realistic example sadly. I mean, this dataset has 150 directions, which might compensate for the low SNR of the dataset.

One of the b0s

screenshot from 2016-03-07 13-40-02

Same slice, one of the DWIs

screenshot from 2016-03-07 13-40-06

That might explain the ninja star and the need to re-sharpen the ODFs.

@arokem


Member

arokem commented Mar 7, 2016

Yeah - it seems that 1.11 is not out yet (I am working with a development numpy, so living in the future... :-D)

This one's a bit hard to debug, because it's happening inside of the cython code. I'll try to see what I can do. For now, what do you think about @samuelstjean's comments about the noise? I personally don't really mind if the example is slightly contrived (for example, if the noise is "unrealistically" high, or the ODFs look a bit funky in places). What do you think?


@samuelstjean


Contributor

samuelstjean commented Mar 7, 2016

An unrealistic example is not really an example to follow; examples are supposed to showcase features in a realistic, everyday usable setting, not be clickbait for people to try out stuff. Even if you use something like SNR 10, that will still be an improvement over not doing it, which should be the main point.

Just doing CSD -> convolve -> SDT is already a bit weird, but if you push it too much it does result in multiple spurious peaks, which is not an improvement but rather a problem for tractography and peak-derived metrics.

@stephanmeesters


Contributor

stephanmeesters commented Mar 7, 2016

I agree with @samuelstjean that the example we use right now is unrealistic. To me it's amazing that it can still produce something similar to the original data, and it does nicely demonstrate the power of using context to enhance data, but I suppose the user would benefit from a more realistic example.

I'll give it a go with SNR=10 or SNR=15. Maybe also try to reduce the number of directions to 32 or 64.

@samuelstjean


Contributor

samuelstjean commented Mar 8, 2016

Glad you agree. I also checked a map of implausible signals (that is, any diffusion volume with values higher than the first b0) for fun, and the map is full of them. Mind you, just toning the noise down a bit might be enough if you don't want to rewrite the whole example. As for subsampling, it is probably easier to grab another dataset instead; look through https://github.com/nipy/dipy/blob/master/dipy/data/fetcher.py for what is available. Oh, and also: I am working on something somewhat similar, which is why I keep pointing out possible flaws (I pretty much stumbled upon the same issues myself); it's not meant to be rude, don't worry.

screenshot from 2016-03-08 10-11-27

stephanmeesters added some commits Mar 10, 2016

@stephanmeesters


Contributor

stephanmeesters commented Mar 11, 2016

Interesting map; I guess the dataset we are using here isn't of the best quality to begin with. I have tried some SNR settings, and SNR=10 in particular appears to fix the alignment of some noisy glyphs well (e.g. in the internal capsule region). It did have some alignment problems at the corpus callosum near the edge, so to avoid these boundary issues I've increased the image region a little bit and then cropped it back down.

enhancements

@stephanmeesters


Contributor

stephanmeesters commented Mar 17, 2016

I've managed to fix the crash that happened with the new Numpy: replacing the multiplication with np.multiply did the trick.
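In plain numpy terms, the change amounts to something like the following (the array names follow the convolve() line quoted earlier in this thread; the values are made up for illustration):

```python
import numpy as np

# Toy stand-ins for the arrays inside convolve().
output = np.array([[1.0, 2.0], [3.0, 4.0]])    # stand-in convolution result
odfs_dsf = np.array([[2.0, 4.0], [6.0, 8.0]])  # stand-in reference ODFs

# Before: output_norm = output * np.amax(odfs_dsf) / np.amax(output)
# After: the explicit ufunc call avoided the float-index crash in the
# compiled extension.
output_norm = np.multiply(output, np.amax(odfs_dsf) / np.amax(output))
```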

@arokem


Member

arokem commented Mar 17, 2016

Thanks for doing that! This looks like a numpy bug to me. I'll report back
to the numpy developers about it.

Would it be possible to add a test that fails without this fix? The fact
that we didn't catch this before means that the tests are somehow not
covering this use-case.


@stephanmeesters


Contributor

stephanmeesters commented Mar 24, 2016

I came down with the flu this week and wasn't able to take a look, sorry.

Interesting discussion; unfortunately I don't have enough Numpy skills to really know what's going on. However, I've added a test case that fails with the new Numpy. It turned out that the tests don't call convolve(..) but instead call convolve_sf(..), so the error wasn't triggered. I've added a parameter called numpy_test.

stephanmeesters added some commits Mar 29, 2016

@arokem


Member

arokem commented Mar 29, 2016

I am sorry - I think I didn't communicate my intention clearly enough.

I didn't mean that you should add back the failing code and then test that separately. Rather, I was worried that the only way to trigger the error with the previous code was by running the example. That is, the tests were passing just fine on my machine, but when I ran the example, all of a sudden I ran into this bug. When that happens, it means that the line of code in question wasn't being used in the tests, or not being used in a realistic manner. Because this is in the Cython portion, we don't yet have a way to automatically detect whether the code is being exercised by the tests (see #578).

So my intention here was to add tests that exercise this line: fail without the fix and then pass when the fix is added. Either way, special-casing things for the tests doesn't make sense -- the tests are supposed to run the code in a more or less realistic manner. If a certain code-path is only ever taken in the tests, it's not a very helpful test (and that code-path need not be there).

@stephanmeesters


Contributor

stephanmeesters commented Mar 30, 2016

OK I see -- I thought that maybe you were adding another bot that would test the development version of Numpy, and needed the faulty code to test this bot.

I've added a test for the normalization part of the routine (the line with np.multiply), which fails without this fix. This test also uses convolve (which takes only data in spherical harmonic form) instead of convolve_sf. So now all routines should be properly tested.
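A regression test along those lines might look like the following sketch (the helper here re-implements just the normalization step in pure Python; it is not the actual dipy test, and the function name is made up):

```python
import numpy as np

def normalize_like_convolve(output, reference):
    """Mirror of the fixed normalization line inside convolve():
    np.multiply instead of the `*` operator."""
    return np.multiply(output, np.amax(reference) / np.amax(output))

def test_normalization_matches_reference_max():
    rng = np.random.default_rng(42)
    output = rng.random((4, 4))
    reference = rng.random((4, 4)) * 2.0
    result = normalize_like_convolve(output, reference)
    # after normalization, the maxima of result and reference must coincide
    assert np.isclose(np.amax(result), np.amax(reference))

test_normalization_matches_reference_max()
```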

[DuitsAndFranken2011] R. Duits and E. Franken (2011) Morphological and
Linear Scale Spaces for Fiber Enhancement in
DWI-MRI. J Math Imaging Vis, 46(3):326-368.
DIPY. ISMRM 2016 conference.


@arokem

arokem Mar 30, 2016

Member

This reference has 2016 in one place and 2015 in another (in parentheses).

[DuitsAndFranken2011] R. Duits and E. Franken (2011) Morphological and
Linear Scale Spaces for Fiber Enhancement in DWI-MRI.
J Math Imaging Vis, 46(3):326-368.
ISMRM 2016 conference.


@arokem

arokem Mar 30, 2016

Member

And same here.

@arokem


Member

arokem commented Mar 30, 2016

Fantastic. Thanks for the work on this! I think it's ready to go now (except for the small incongruence in the years of the ISMRM conference paper). Unless anyone objects, I'll merge this in a couple of days, after you've had a chance to straighten that out.

@@ -246,7 +248,7 @@
:align: center
The results after enhancements. Top-left: original noiseless data.
Bottom-left: original data with added Rician noise (SNR=2). Bottom-right:
Bottom-left: original data with added Rician noise (SNR=4.5). Bottom-right:


@samuelstjean

samuelstjean Mar 30, 2016

Contributor

SNR is now 10 according to the lines above.

@@ -188,17 +190,17 @@
csd_sf_enh_sharp = sh_to_sf(csd_shm_enh_sharp, sphere, sh_order=8)
# Normalize the sharpened ODFs
csd_sf_enh_sharp = csd_sf_enh_sharp * np.amax(csd_sf_orig)/np.amax(csd_sf_enh_sharp)
csd_sf_enh_sharp = csd_sf_enh_sharp * np.amax(csd_sf_orig)/np.amax(csd_sf_enh_sharp) * 1.25


@samuelstjean

samuelstjean Mar 30, 2016

Contributor

Why the different sharpening now? I'm too lazy to regenerate the pictures at the moment.

Oh, looking below, it seems it was in the sphere visualization function before.

@stephanmeesters


Contributor

stephanmeesters commented Mar 31, 2016

Thanks, I have corrected the error in the reference and the SNR in the caption.

@arokem arokem merged commit 2b4e957 into nipy:master Apr 1, 2016

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed