Contextual enhancements of ODF/FOD fields #762

Merged
merged 20 commits into nipy:master from stephanmeesters:enhance_5d_pr on Feb 9, 2016

Conversation

@stephanmeesters
Contributor

stephanmeesters commented Nov 5, 2015

Included is the code for contextual enhancement of ODF/FOD fields based on the work of Remco Duits and Jorg Portegies.

Eleftherios has assisted a lot with the integration into DIPY. The enhancements are written in Cython and support OpenMP.

Files:
doc/examples/contextual_enhancement.py -- describes the method and showcases the enhancement
dipy/denoise/enhancement_kernel.py -- computes a lookup-table
dipy/denoise/shift_twist_convolution.py -- performs the convolution over the space of positions and orientations
dipy/denoise/tests/test_kernel.py -- tests that the kernel values are computed correctly, that the kernel's symmetry properties hold, and that the convolution does its job.

If needed, pictures for the demo are here (2 images generated by the example code, cropped; one new image).
https://drive.google.com/folderview?id=0B-yaAUqTzxPBOC1FLWxtekhfSWM&usp=sharing
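
To give a quick feel for how these modules fit together, here is a minimal usage sketch. It assumes an SH coefficient field csd_shm_noisy from the example; the parameter values (D33, D44, t, sh_order) are illustrative and the call signatures are the ones proposed in this PR.

from dipy.denoise.enhancement_kernel import EnhancementKernel
from dipy.denoise.shift_twist_convolution import convolve

# Build the lookup-table kernel: D33/D44 set the spatial/angular diffusion,
# t is the diffusion time (values here are illustrative).
k = EnhancementKernel(D33=1.0, D44=0.02, t=1)

# Convolve a field of ODFs/FODs, given as spherical harmonic coefficients,
# over the joint space of positions and orientations.
csd_shm_enhanced = convolve(csd_shm_noisy, k, sh_order=8)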

"""
"""
The enhancement is evaluated on the Stanford HARDI dataset (160 orientations, b=3000s/mm^2)

@arokem

arokem Nov 5, 2015

Member

150 directions. b=2000

csd_shm_noisy = csd_fit_noisy.shm_coeff
"""
A lookup-table is created, containing rotated versions of the kernel :math:`P_t` sampled over a discrete set of orientations. In order to ensure rotationally invariant processing, the discrete orientations are required to be equally distributed over a sphere. By default, a sphere with 100 directions is used.

@arokem

arokem Nov 5, 2015

Member

PEP8, wrap to 80 characters

@Garyfallidis

Member

Garyfallidis commented Nov 6, 2015

Hi @stephanmeesters, the Travis bots are failing because you still have the OpenMP compiler set to clang in setup.py. See here: https://travis-ci.org/nipy/dipy/jobs/89665233
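
For reference, a common way to handle this, sketched here as a hypothetical snippet rather than the actual dipy setup.py, is to pass the OpenMP flags only when the compiler supports them (stock clang on OSX did not at the time):

import sys
from distutils.extension import Extension

if sys.platform.startswith('darwin'):
    omp_args = []               # stock clang: build without OpenMP
else:
    omp_args = ['-fopenmp']     # gcc understands -fopenmp

ext = Extension('dipy.denoise.shift_twist_convolution',
                sources=['dipy/denoise/shift_twist_convolution.pyx'],
                extra_compile_args=omp_args,
                extra_link_args=omp_args)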

@stephanmeesters

Contributor

stephanmeesters commented Nov 7, 2015

The Travis bots appear to be OK now, except for one test failing (dipy.viz.tests.test_fvtk_actors.test_slicer).

@stephanmeesters

Contributor

stephanmeesters commented Nov 17, 2015

Travis bots are fine now

@Garyfallidis

Member

Garyfallidis commented Nov 17, 2015

Good. Sorry for the delay in reviewing this. We will start reviewing this PR after the release, which will happen in a few days. We first need to release what has already been merged for some time.

@stephanmeesters

Contributor

stephanmeesters commented Nov 17, 2015

No problem, hopefully we can include it in the next release.

@Garyfallidis

Member

Garyfallidis commented Nov 17, 2015

Definitely!

@Garyfallidis

Member

Garyfallidis commented Jan 13, 2016

Now that we have fixed most of the issues with Windows/OSX/numpy 1.10 etc., we need to give a good review to this nice PR by @stephanmeesters. GGT!!!

cdef object sphere
cdef object num_threads
## Python functions

@Garyfallidis

Garyfallidis Jan 15, 2016

Member

Remove comment and reduce spaces between cdef and init.

Spatial diffusion
D44 : float
Angular diffusion
t : float

@Garyfallidis

Garyfallidis Jan 15, 2016

Member

Leave only one space after t

def __init__(self, D33, D44, t, force_recompute=False,
orientations=None, test_mode=False, num_threads=None):
""" Compute a look-up table for the contextual
enhancement kernel

@Garyfallidis

Garyfallidis Jan 15, 2016

Member

indent correctly

""" Compute a look-up table for the contextual
enhancement kernel
n_pts = orientations
theta = np.pi * np.random.rand(n_pts)
phi = 2 * np.pi * np.random.rand(n_pts)
hsph_initial = HemiSphere(theta=theta, phi=phi)

@Garyfallidis

Garyfallidis Jan 15, 2016

Member

Remove one space

def evaluate_kernel(self, x, y, r, v):
return self.k2(x, y, r, v)
# Cython functions

@Garyfallidis

Garyfallidis Jan 15, 2016

Member

Remove comment and empty lines.

@cython.cdivision(True)
cdef void create_lookup_table(self, test_mode = False):
""" Compute the look-up table based on the parameters set
during class initialization

@Garyfallidis

Garyfallidis Jan 15, 2016

Member

Indent this line on the left.

int OR1 = orientations.shape[0]
int OR2 = orientations.shape[0]
int N = self.kernelsize
int hn = (N-1)/2

@Garyfallidis

Garyfallidis Jan 15, 2016

Member

For safety you can change all int to cnp.npy_intp so that you don't run into any trouble on the different operating systems and architectures.
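
A small generic sketch of the suggested change (not the PR's actual code; the names are illustrative):

cimport numpy as cnp

cdef cnp.npy_intp count_orientations(double[:, :] orientations) nogil:
    # npy_intp matches NumPy's index type on both 32-bit and 64-bit
    # platforms, unlike a plain C int
    cdef cnp.npy_intp OR1 = orientations.shape[0]
    cdef cnp.npy_intp angv
    cdef cnp.npy_intp n = 0
    for angv in range(OR1):
        n = n + 1      # kernel evaluation would go here
    return n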

@cython.cdivision(True)
cdef void estimate_kernel_size(self):
""" Estimates the dimensions the kernel should
have based on the kernel parameters.

@Garyfallidis

Garyfallidis Jan 15, 2016

Member

same as before

with nogil:
for angv in range(OR1):
for angr in range(OR2):
for xp in range(-hn, hn+1):

@Garyfallidis

Garyfallidis Jan 15, 2016

Member

spaces between operators

break;
N = ceil(i)*2
if N%2 == 0:

@Garyfallidis

Garyfallidis Jan 15, 2016

Member

Add spaces between operators. In Cython we keep PEP8 as much as we can, except for the line size of 79 characters, because it becomes difficult to hold to that when you have multiple for loops. But please try to stick with the rest of PEP8 in Cython; it makes the code much easier to read and review.

cdef double k2(self, double [:] x, double [:] y,
double [:] r, double [:] v) nogil:
""" Evaluate the kernel at position x relative to
position y, with orientation r relative to orientation v.

@Garyfallidis

Garyfallidis Jan 15, 2016

Member

Same as before

stephanmeesters added some commits Jan 25, 2016

@stephanmeesters

Contributor

stephanmeesters commented Jan 25, 2016

One of the bots did not like the memoryview that accessed an array of shape (0,0,7,7,7). Changing cdef double [:, :, :, :, ::1] lookuptable to cdef double [:, :, :, :, :] lookuptable seemed to fix it. I do still use [:, :, :, :, ::1] within shift_twist_convolution.pyx and that works fine, so hopefully there is no loss of performance. Is this OK?
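
For readers less familiar with typed memoryviews, the difference between the two declarations is sketched generically below: the ::1 on the last axis asserts a C-contiguous buffer (slightly faster indexing, but the assignment fails if the buffer does not pass the contiguity check), while the plain form accepts arbitrary strides, which is likely why the relaxed declaration avoided the error.

# requires a C-contiguous buffer; assignment fails if the check is not met
cdef double[:, :, :, :, ::1] lut_contig

# accepts arbitrary strides, at a small indexing cost
cdef double[:, :, :, :, :] lut_generic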

Linear Scale Spaces for Fiber Enhancement in DWI-MRI.
J Math Imaging Vis, 46(3):326-368.
.. [RodirguesEurographics] P. Rodrigues, R. Duits, B. Romeny, A. Vilanova

@samuelstjean

samuelstjean Jan 27, 2016

Contributor

typo in mr. Rodrigues name

@arokem

arokem Jan 27, 2016

Member

whoops. mea culpa

double [:] c
double kernelval
with gil:

@samuelstjean

samuelstjean Jan 28, 2016

Contributor

I see that you go back and forth with the gil stuff in a bunch of places, is it really needed? Seems like these four lines below should be fine, and the Euler angle function seems simple enough also.
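
As a generic illustration of the point (not the PR's code): a cdef function declared nogil can be called directly from a with nogil block, and with gil is only needed where Python objects are touched.

cdef double rotated_value(double a, double b) nogil:
    # pure C arithmetic, no GIL required
    return a * b

def run(double a, double b):
    cdef double out
    with nogil:
        out = rotated_value(a, b)   # no "with gil:" needed around this call
    return out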

"""
The Spherical Deconvolution Transform is applied to sharpen the ODF field.

@samuelstjean

samuelstjean Jan 28, 2016

Contributor

It's actually the Sharpening Deconvolution Transform; the original function docstring had a (now fixed) mistake.
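
For context, this is roughly what the sharpening step looks like with dipy's odf_sh_to_sharp, assuming the SH field csd_shm_enhanced and a sampling sphere from the example (the keyword values used in the tutorial may differ):

from dipy.reconst.csdeconv import odf_sh_to_sharp

# Sharpening Deconvolution Transform applied to the enhanced SH coefficients
csd_shm_sharpened = odf_sh_to_sharp(csd_shm_enhanced, sphere, sh_order=8)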

@stephanmeesters

Contributor

stephanmeesters commented Jan 29, 2016

Thanks @samuelstjean, I've processed your comments. @Garyfallidis @arokem, could you let me know the current status? Do you think it's viable to include this PR in the next release?

The Sharpening Deconvolution Transform is applied to sharpen the ODF field.
"""
# Sharpen via the Spherical Deconvolution Transform

@samuelstjean

samuelstjean Jan 29, 2016

Contributor

missed this one

:align: center
The results after enhancements. Top-left: original noiseless data.
Bottom-left: original data with added Rician noise (SNR=2). Bottom-right:

@samuelstjean

samuelstjean Jan 29, 2016

Contributor

You actually added SNR 3 at line 9; also, not sure where you get the reference S0=500 signal, is it the max value at b0?

# Add Rician noise
np.random.seed(1234)
data_noisy = add_noise(data, 3, 500, noise_type='rician')

@samuelstjean

samuelstjean Jan 29, 2016

Contributor

See my comment far at the bottom, but usually we use SNR = S0 / std(noise) as the SNR, so it might not do what you actually think if you put in an arbitrary value. Usually basing it on S0 = mean(b0) in WM gives you a scaling proportionate to the values of your acquisition. Kind of like in this old example: http://nipy.org/dipy/examples_built/snr_in_cc.html#example-snr-in-cc
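
In other words, the convention suggested here is SNR = S0 / sigma, so for a target SNR the noise standard deviation follows as sigma = S0 / SNR, and the choice of the reference S0 matters. A sketch, with an illustrative S0 choice and data from the example:

from dipy.sims.voxel import add_noise

snr = 3
S0_ref = float(data[..., 0].max())   # e.g. the max of the first volume, assuming it is a b0
data_noisy = add_noise(data, snr, S0_ref, noise_type='rician')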

@stephanmeesters

Contributor

stephanmeesters commented Jan 30, 2016

Thanks @samuelstjean I'll have a look at it on Tuesday.

@samuelstjean

Contributor

samuelstjean commented Feb 1, 2016

I did not go through it fully, but you might want to ask @ChantalTax for advice, as she has done similar stuff before.

@Garyfallidis

Member

Garyfallidis commented Feb 9, 2016

It's all good here. Merging! @stephanmeesters we have 4 days until the release. Stay alert in case something appears with the buildbots. But it shouldn't.

Garyfallidis added a commit that referenced this pull request Feb 9, 2016

Merge pull request #762 from stephanmeesters/enhance_5d_pr
Contextual enhancements of ODF/FOD fields

@Garyfallidis Garyfallidis merged commit a66d5a3 into nipy:master Feb 9, 2016

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed
@arokem

Member

arokem commented Feb 9, 2016

What about the SNR issues in the example?

@Garyfallidis

Member

Garyfallidis commented Feb 9, 2016

Good point. I missed that one.
The tutorial can be updated later too. Not critical IMPOV.

@Garyfallidis

Member

Garyfallidis commented Feb 9, 2016

@stephanmeesters can you make a small PR with the fix on the documentation please?

@arokem

Member

arokem commented Feb 9, 2016

NP -- I'll post an issue, so we remember to get back to that after the release.

@Garyfallidis

Member

Garyfallidis commented Feb 9, 2016

Thx!

@stephanmeesters

Contributor

stephanmeesters commented Feb 9, 2016

Hey guys, good to hear about the merge. I forgot to mention that I already looked at the SNR in 2947dfc. Let me know if there are still SNR issues in this example.

@samuelstjean

Contributor

samuelstjean commented Feb 9, 2016

Or at least remove the example, it does not make sense currently.


from dipy.segment.mask import median_otsu
b0_mask, mask = median_otsu(data)
np.random.seed(1)
data_noisy = add_noise(data, 1, np.mean(data[mask]), noise_type='rician')

@samuelstjean

samuelstjean Feb 9, 2016

Contributor

It would make more sense to compute that only in the b0/WM, since the signal there is higher than anywhere else and not influenced by the b-value. This way your SNR will be higher and produce a larger variance for the noise, but it will still give a rough estimate that is comparable between datasets, whereas if you compare a dataset with SNR computed at b=1000 against one at b=3000 it will be harder to relate the two.
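
A sketch of one way to follow this suggestion (not the merged example code; data and gtab come from the example):

import numpy as np
from dipy.segment.mask import median_otsu
from dipy.sims.voxel import add_noise

b0_mask, mask = median_otsu(data)        # rough brain mask, as in the snippet above
b0_data = data[..., gtab.b0s_mask]       # keep only the b=0 volumes
S0 = np.mean(b0_data[mask])              # mean b0 signal inside the mask
np.random.seed(1)
data_noisy = add_noise(data, 1, S0, noise_type='rician')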

@Garyfallidis

Member

Garyfallidis commented Feb 9, 2016

@stephanmeesters congratulations! Your PR is in and no buildbots complained. 👯 So, this will be available for 0.11.

You should drink a beer today to celebrate. It's on me! Thank you for all your hard work and please inform Prof. Duits of the progress. Also, recheck the comments in this PR and update the example if needed in another PR. GGT!!
