
ENH: Add streamline deformation example and workflow #1900

Merged · 9 commits · Sep 1, 2019

Conversation

jhlegarreta
Contributor

Add streamline deformation example and workflow.

Resolves #1840.

@pep8speaks

pep8speaks commented Jul 14, 2019

Hello @jhlegarreta, Thank you for updating!

Line 45:1: E402 module level import not at top of file
Line 46:1: E402 module level import not at top of file
Line 49:45: E251 unexpected spaces around keyword / parameter equals
Line 49:47: E251 unexpected spaces around keyword / parameter equals
Line 53:1: E128 continuation line under-indented for visual indent
Line 62:1: E402 module level import not at top of file
Line 72:1: E402 module level import not at top of file
Line 75:17: E128 continuation line under-indented for visual indent
Line 101:1: E402 module level import not at top of file
Line 103:1: E402 module level import not at top of file
Line 160:34: E128 continuation line under-indented for visual indent
Line 161:34: E128 continuation line under-indented for visual indent
Line 172:1: E402 module level import not at top of file
Line 173:1: E402 module level import not at top of file
Line 188:1: E402 module level import not at top of file
Line 214:1: E402 module level import not at top of file
Line 228:1: E402 module level import not at top of file
Line 279:1: E402 module level import not at top of file
Line 281:1: E302 expected 2 blank lines, found 1
Line 320:1: E402 module level import not at top of file
Line 321:1: E402 module level import not at top of file

Comment last updated at 2019-08-06 00:58:55 UTC

@jhlegarreta
Contributor Author

jhlegarreta commented Jul 14, 2019

@Garyfallidis @arokem @dPys @skoudoro comments and reviews are welcome.

A few comments:

  • The workflow is still missing. I still need to study how best to do it.
  • We may need to improve the naming of the registration* examples, or decide on a policy, for the sake of consistency and readability, as was done for the tracking examples in DOC - RF tracking examples #1881.
  • When the result is displayed, the scale and affine of the bundles are different. I do not know whether I am missing some step.
  • Not sure whether the process is correct. It was hard to appreciate the deformation when looking at the streamlines, especially because their scales and affines differ when displayed. Comments and suggestions to improve the script are welcome.
  • I followed @dPys's suggestion and wrote out the rigid/non-rigid registration process explicitly and completely. I do not know whether we want to start with a rough rigid registration, as in syn_registration_3d.py. Maybe by hard-coding the affine matrix we get?
  • Not sure whether the show_both_bundles strategy is the best way to display results. Should we display the original bundle in MNI space and then the deformed one?

@jhlegarreta
Contributor Author

I guess we acknowledge that the E402 (module level import not at top of file) PEP8 warning is a side effect we are willing to accept in order to follow the style used in the DIPY examples.
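For context, the E402 warnings arise because DIPY examples interleave narrative docstrings with imports, so imports cannot all sit at the top of the file. A minimal sketch of that pattern, using flake8's per-line `# noqa: E402` marker to silence the warning where desired (the file names below are only illustrative):

```python
"""
First narrative block of a DIPY-style example, rendered as text in the
generated documentation. The import below it triggers E402.
"""

import os.path as op  # noqa: E402

"""
Second narrative block, followed by another import.
"""

import math  # noqa: E402

# The example code then uses the late imports as usual.
print(op.splitext('lr-superiorfrontal.trk')[1])
print(math.isclose(2.0, 2.0))
```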

@codecov-io

codecov-io commented Jul 15, 2019

Codecov Report

❗ No coverage uploaded for pull request base (master@9bdaf97). Click here to learn what that means.
The diff coverage is n/a.

Impacted file tree graph

@@            Coverage Diff            @@
##             master    #1900   +/-   ##
=========================================
  Coverage          ?   83.09%           
=========================================
  Files             ?      118           
  Lines             ?    15016           
  Branches          ?     2375           
=========================================
  Hits              ?    12478           
  Misses            ?     1972           
  Partials          ?      566

@arokem
Contributor

arokem commented Jul 16, 2019

Hey @dPys - could you take a look and try this out?

@jhlegarreta
Contributor Author

As discussed in today's dev meeting, one of the things that needs improvement is explaining the parameters of the deform_streamlines method here.
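As a starting point for that explanation, here is a small annotated summary of what each affine-related argument of deform_streamlines supplies, with the argument names taken from the call used later in this thread (the right-hand values are just descriptive strings, not executable dipy code):

```python
# Roles of the deform_streamlines arguments, as used later in this PR.
# This block only documents their meaning; it does not import dipy.
params = {
    # dense displacement field obtained from the SyN mapping
    'deform_field': 'mapping.get_forward_field()',
    # affine taking streamline coordinates into the moving image grid
    'stream_to_current_grid': 'moving_affine',
    # moving grid -> world space, from the diffeomorphic map
    'current_grid_to_world': 'mapping.codomain_grid2world',
    # affine taking streamline coordinates into the static (template) grid
    'stream_to_ref_grid': 'static_affine',
    # static grid -> world space
    'ref_grid_to_world': 'mapping.domain_grid2world',
}
for name, value in params.items():
    print(f'{name}: {value}')
```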

@jhlegarreta jhlegarreta force-pushed the AddStreamlineDeformationTutorial branch from 1fa9380 to b7b2400 on July 17, 2019 00:29
@dPys
Contributor

dPys commented Jul 17, 2019

Hi @arokem and @jhlegarreta ,

I have gone through the tutorial and made a number of modifications, but @jhlegarreta feel free to handle modifying the PR directly since you've already opened it:

  1. Switching to registering the FA estimation of hardi_img to an FA template. An example 2mm HCP FA template (recently released) such as the one used here could easily be added to dipy.data.fetcher?
    *NOTE: There were several reasons for preferring FA images: 1. the T2 template used previously was not skull-stripped, so it was yielding suboptimal registration with the B0. 2. I've found that the high signal contrast of an FA image lends itself to dramatically better performance with SyN when dealing with white-matter images than reducing the problem to the B0/T2-contrast domain. 3. The mean b0 brain mask is still useful for defining the edges of the brain with the FA image.
  2. The extents on the moving and static affines were causing misalignment issues. They needed to be summed to make sure the transformations happen across the entire voxel grid from the corner of the FOV. This should do it:
adjusted_affine = moving_affine.copy()
adjusted_affine[0][3] = -adjusted_affine[0][3] - static_affine[0][3]
adjusted_affine[1][3] = -adjusted_affine[1][3] + static_affine[1][3]
adjusted_affine[2][3] = -adjusted_affine[2][3] + static_affine[2][3]

from dipy.tracking.utils import move_streamlines
rotation, scale = np.linalg.qr(adjusted_affine)
streams = move_streamlines(streamlines, rotation)
scale[0:3, 0:3] = np.dot(scale[0:3, 0:3], np.diag(1. / hdr['voxel_sizes']))
scale[0:3, 3] = abs(scale[0:3, 3])*1. / hdr['voxel_sizes'][0]
affine_streamlines = Streamlines(move_streamlines(streamlines, scale))
  3. I swapped the plotting to display the warped streamlines overlaid with the template image (i.e., as opposed to the original bundle), since the former seemed more pertinent for evaluating the quality of the direct streamline normalization. That looks like:
"""
We display the original streamlines inside a 3D visualization of the template:

"""

from fury import actor, window, colormap, ui, have_fury
from time import sleep


def show_template_bundles(bundles, show=True, fname=None):
    renderer = window.Renderer()
    template_img_data = img_t2_mni.get_data().astype('bool')
    template_actor = actor.contour_from_roi(template_img_data,
                                            color=(50, 50, 50), opacity=0.05)
    renderer.add(template_actor)
    lines_actor = actor.streamtube(affine_streamlines, window.colors.orange,
                                   linewidth=0.3)
    renderer.add(lines_actor)
    if show:
        window.show(renderer)
    if fname is not None:
        sleep(1)
        window.record(renderer, n_frames=1, out_path=fname, size=(900, 900))


if have_fury:
    """
    .. figure:: streamline_registration.png
       :align: center

       Streamlines before and after registration.

    As can be seen, the corpus callosum bundles have been deformed to adapt
    to the MNI template space.

    """

    show_template_bundles(bundles, show=False, fname='streamline_registration.png')
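The np.linalg.qr call quoted above factors the adjusted affine into an orthogonal (rotation-like) part and an upper-triangular (scale/shear plus offset) part, which are then applied to the streamlines separately. A numpy-only sketch of that decomposition with a hypothetical 2 mm affine (illustrative values, not the Stanford data):

```python
import numpy as np

# Hypothetical 4x4 affine: 2 mm isotropic voxels plus a translation.
affine = np.array([[2., 0., 0., -90.],
                   [0., 2., 0., -126.],
                   [0., 0., 2., -72.],
                   [0., 0., 0., 1.]])

rotation, scale = np.linalg.qr(affine)

# The product of the two factors reconstructs the original affine ...
assert np.allclose(rotation @ scale, affine)
# ... and the first factor is orthogonal (its transpose is its inverse).
assert np.allclose(rotation.T @ rotation, np.eye(4))

print(np.round(scale, 3))
```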

Screen Shot 2019-07-17 at 1 07 24 AM

Screen Shot 2019-07-17 at 1 07 14 AM

And here's the final script (i.e. without the fetch of the FA template image added yet):

"""
===============================
Direct Streamline Normalization
===============================
@dPys and @jhlegarreta 
This example shows how to register streamlines into a template space by
applying non-rigid deformations.

At times we will be interested in bringing a set of streamlines into some
common, reference space to compute statistics out of the registered
streamlines.

For brevity, we will include in this example only streamlines going through
the corpus callosum connecting left to right superior frontal
cortex. The process of tracking and finding these streamlines is fully
demonstrated in the :ref:`streamline_tools` example. If this example has been
run, we can read the streamlines from file. Otherwise, we'll run that example
first, by importing it. This provides us with all of the variables that were
created in that example.

In order to get the deformation field, we will first use two b0 volumes. The
first one will be the b0 from the Stanford HARDI dataset:

"""

import os.path as op

if not op.exists('lr-superiorfrontal.trk'):
    from streamline_tools import *
else:
    from dipy.data import read_stanford_hardi
    hardi_img, gtab = read_stanford_hardi()
    data = hardi_img.get_data()

"""
The second one will be the T2-contrast MNI template image.

"""

#from dipy.data.fetcher import (fetch_mni_template, read_mni_template)

#fetch_mni_template()
#img_t2_mni = read_mni_template("a", contrast="T2")
img_t2_mni = nib.load('FSL_HCP1065_FA_2mm.nii.gz')
[FSL_HCP1065_FA_2mm.nii.gz](https://github.com/nipy/dipy/files/3400354/FSL_HCP1065_FA_2mm.nii.gz)


"""
We filter the diffusion data from the Stanford HARDI dataset to find the b0
images.

"""

import numpy as np

b0_idx_stanford = np.where(gtab.b0s_mask)[0]
b0_data_stanford = data[..., b0_idx_stanford]

"""
We then remove the skull from the b0's.

"""

from dipy.segment.mask import median_otsu

b0_masked_stanford, _ = median_otsu(b0_data_stanford,
                                    vol_idx=list(range(b0_data_stanford.shape[-1])),
                                    median_radius=4, numpass=4)

"""
And go on to compute the Stanford HARDI dataset mean b0 image.

"""

mean_b0_masked_stanford = np.mean(b0_masked_stanford, axis=3,
                                  dtype=data.dtype)

"""
Use the mean b0 image, along with the HARDI data itself, to estimate an FA image.
"""
from dipy.reconst.dti import TensorModel
from dipy.reconst.dti import fractional_anisotropy

print('Generating simple tensor FA image to use for registrations...')
B0_mask_data = mean_b0_masked_stanford.astype('bool')
hardi_img_affine = hardi_img.affine
model = TensorModel(gtab)
mod = model.fit(data, B0_mask_data)
FA = fractional_anisotropy(mod.evals)
FA[np.isnan(FA)] = 0

"""
We will register the mean b0-masked FA image to the MNI T2 image template
non-rigidly to obtain the deformation field that will be applied to the
streamlines. We will first perform an affine registration to roughly align the
two volumes:
"""

from dipy.align.imaffine import (AffineMap, MutualInformationMetric,
                                 AffineRegistration)
from dipy.align.transforms import (TranslationTransform3D, RigidTransform3D,
                                   AffineTransform3D)

static = img_t2_mni.get_data()
static_affine = img_t2_mni.affine
moving = FA.astype(np.float32)
moving_affine = hardi_img.affine

identity = np.eye(4)
affine_map = AffineMap(identity, static.shape, static_affine, moving.shape,
                       moving_affine)
resampled = affine_map.transform(moving)

"""
We specify the mismatch metric:

"""

nbins = 32
sampling_prop = None
metric = MutualInformationMetric(nbins, sampling_prop)

"""
As well as the optimization strategy:

"""

level_iters = [10, 10, 5]
sigmas = [3.0, 1.0, 0.0]
factors = [4, 2, 1]
affine_reg = AffineRegistration(metric=metric, level_iters=level_iters,
                                sigmas=sigmas, factors=factors)
transform = TranslationTransform3D()

params0 = None
translation = affine_reg.optimize(static, moving, transform, params0,
                                  static_affine, moving_affine)
transformed = translation.transform(moving)
transform = RigidTransform3D()

rigid_map = affine_reg.optimize(static, moving, transform, params0,
                                static_affine, moving_affine,
                                starting_affine=translation.affine)
transformed = rigid_map.transform(moving)
transform = AffineTransform3D()

"""
We bump up the iterations to get a more exact fit:

"""

affine_reg.level_iters = [1000, 1000, 100]
affine_map = affine_reg.optimize(static, moving, transform, params0,
                                 static_affine, moving_affine,
                                 starting_affine=rigid_map.affine)
transformed = affine_map.transform(moving)


"""
We now perform the non-rigid deformation using the Symmetric Diffeomorphic
Registration (SyN) Algorithm:

"""

from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
from dipy.align.metrics import CCMetric

metric = CCMetric(3)
level_iters = [10, 10, 5]
sdr = SymmetricDiffeomorphicRegistration(metric, level_iters)

mapping = sdr.optimize(static, moving, static_affine, moving_affine,
                       affine_map.affine)
warped_moving = mapping.transform(moving)
warped_static = mapping.transform_inverse(moving)

"""
We show the registration result with:

"""
from dipy.viz import regtools

regtools.overlay_slices(static, warped_moving, None, 0, "Static", "Moving",
                        "transformed_sagittal.png")
regtools.overlay_slices(static, warped_moving, None, 1, "Static", "Moving",
                        "transformed_coronal.png")
regtools.overlay_slices(static, warped_moving, None, 2, "Static", "Moving",
                        "transformed_axial.png")

"""
.. figure:: transformed_sagittal.png
   :align: center
.. figure:: transformed_coronal.png
   :align: center
.. figure:: transformed_axial.png
   :align: center

   Deformable registration result.
"""

"""
We read the streamlines from file in voxel space:

"""

from dipy.io.streamline import load_trk

streamlines, hdr = load_trk('lr-superiorfrontal.trk')


"""
We then apply the obtained deformation field to the streamlines. We first
apply the non-rigid warping and then apply the computed affine transformation
whose extents must be corrected to account for the different voxel grids of the
moving and static images:

"""

from dipy.tracking.streamline import deform_streamlines, Streamlines
warped_streamlines = deform_streamlines(
    streamlines, deform_field=mapping.get_forward_field(),
    stream_to_current_grid=moving_affine,
    current_grid_to_world=mapping.codomain_grid2world,
    stream_to_ref_grid=static_affine,
    ref_grid_to_world=mapping.domain_grid2world)


adjusted_affine = moving_affine.copy()
adjusted_affine[0][3] = -adjusted_affine[0][3] - static_affine[0][3]
adjusted_affine[1][3] = -adjusted_affine[1][3] + static_affine[1][3]
adjusted_affine[2][3] = -adjusted_affine[2][3] + static_affine[2][3]

from dipy.tracking.utils import move_streamlines
rotation, scale = np.linalg.qr(adjusted_affine)
streams = move_streamlines(streamlines, rotation)
scale[0:3, 0:3] = np.dot(scale[0:3, 0:3], np.diag(1. / hdr['voxel_sizes']))
scale[0:3, 3] = abs(scale[0:3, 3])*1. / hdr['voxel_sizes'][0]
affine_streamlines = Streamlines(move_streamlines(streamlines, scale))

"""
We display the original streamlines inside a 3D visualization of the template:

"""

from fury import actor, window, colormap, ui, have_fury
from time import sleep


def show_template_bundles(bundles, show=True, fname=None):
    renderer = window.Renderer()
    template_img_data = img_t2_mni.get_data().astype('bool')
    template_actor = actor.contour_from_roi(template_img_data,
                                            color=(50, 50, 50), opacity=0.05)
    renderer.add(template_actor)
    lines_actor = actor.streamtube(affine_streamlines, window.colors.orange,
                                   linewidth=0.3)
    renderer.add(lines_actor)
    if show:
        window.show(renderer)
    if fname is not None:
        sleep(1)
        window.record(renderer, n_frames=1, out_path=fname, size=(900, 900))


if have_fury:
    """
    .. figure:: streamline_registration.png
       :align: center

       Streamlines before and after registration.

    As can be seen, the corpus callosum bundles have been deformed to adapt
    to the MNI template space.

    """

    show_template_bundles(bundles, show=False, fname='streamline_registration.png')

"""
Finally, we save the registered streamlines:

"""

from dipy.io.streamline import save_trk

save_trk("warped-lr-superiorfrontal.trk", warped_streamlines,
         np.eye(4))

"""
References
----------
Greene, C., Cieslak, M., & Grafton, S. T. (2017). Effect of different spatial normalization approaches on tractography and structural brain networks. Network Neuroscience, 1-19.
"""

@jhlegarreta
Contributor Author

Thanks for the comments and suggestions @dPys. I will have another pass at it tonight.

@jhlegarreta
Contributor Author

jhlegarreta commented Jul 18, 2019

I've gone through it. Thanks, Derek!

I've seen that you have posted the FA file. Nevertheless, in order to make the fetcher, I would need the original URL where the FA of the HCP1065 dataset can be retrieved from. Or else, did you get it by querying FSL? @arokem do you happen to know where this data is located in the HCP project? Or else, will we need to store the file Derek posted on some UW server so as to make the fetcher?

@dPys
Contributor

dPys commented Jul 18, 2019

Hi @jhlegarreta, no problem. I'm not sure why I initially referred to the changes as "major", since they were actually minor, and I just want you to know that you did a fantastic job of putting everything together.

As for the source of the HCP FA template, I'm afraid that this is something that I've created custom for exactly this application. That is, the file is a resampled version of the 1mm FA template released from HCP1065 and that is included in the latest FSL template collection. For this reason, it will probably need to be hosted on a server that dipy can access for the fetch to work...

One last thing-- the orientation and FOV of the moving image (the FA image/B0) has a big impact on the solution. I am working right now to determine what features of the affine are needed and how to make sure the code generalizes to weird formats.

Stay tuned,
@dPys

@jhlegarreta
Contributor Author

Thanks, Derek!

The HCP database requires a user/pwd as well, so hosting it on a server DIPY can access is the solution.

@dPys
Contributor

dPys commented Jul 19, 2019

Good to know. Thanks much.

As an update, it also appears that when the dwi data is stored in neurological format (as with the Stanford HARDI data), the registrations should just work as is. If, however, the data is stored in RAS+ for some reason (this is enforced in ndmg and pynets, for instance), then SyN + deformation appears to fail, and I'm not entirely sure why. One (band-aid) solution I've found is to flip the FA image to neurological manually before running SyN (i.e., the flipped version becomes the moving image), and then use the accompanying flipped affine for the streamline deformation, but the original (i.e., unflipped) affine to transform the deformed streamlines to their final affine. In other words, the non-linear warping and deformation all still occur in LAS, even though the rigid-body transformations map the end result back to RAS.

Here's an example where the tractography was done in RAS+ (stored in streams={path to a .trk file}):

        fa_path_img = nib.load(fa_path)
        fa_flip_path = "%s%s" % (fa_path.split('.nii.gz')[0], '_flip.nii.gz')
        orig_aff = fa_path_img.affine
        orig_aff = orig_aff.copy()
        flipped_aff = orig_aff.copy()
        flipped_aff[0][0] = -orig_aff[0][0]
        fa_path_img.set_sform(flipped_aff)
        fa_path_img.set_qform(flipped_aff)
        fa_path_img.update_header()
        nib.save(fa_path_img, fa_flip_path)

        # SyN FA->Template (where wm_syn is all of the SyN routines)
        moving = fa_flip_path
        static = template_path
        [moving_affine, static_affine, mapping] = wm_syn(template_path, fa_flip_path)

        [streamlines, hdr] = load_trk(streams)

        # Warp streamlines
        mni_streamlines = deform_streamlines(streamlines, deform_field=mapping.get_forward_field(),
                                             stream_to_current_grid=moving_affine,
                                             current_grid_to_world=mapping.codomain_grid2world,
                                             stream_to_ref_grid=static_affine,
                                             ref_grid_to_world=mapping.domain_grid2world)

        # Adjust affine to account for template-relative extents (in the RAS case, offsets for y and z dims zero out)
        adjusted_affine = orig_aff.copy()
        adjusted_affine[0][3] = adjusted_affine[0][3] + static_affine[0][3]
        adjusted_affine[1][3] = 0
        adjusted_affine[2][3] = 0

        # Transform mni-space fibers with template-relative extents
        warped_streamlines = transform_to_affine(mni_streamlines, hdr, adjusted_affine)

@dPys
Contributor

dPys commented Jul 19, 2019

Hi @jhlegarreta ,

After further tinkering this morning, it seems that all orientation issues could theoretically be addressed by setting the *_grid_to_world parameters for deform_streamlines to np.diag(np.array([2, 2, 2, 1])) where the signs along the diagonal of the affine are set adaptively based on the signs of the moving_affine and static_affine. I'm not sure that in all cases this will implicitly handle the offsets (which may have to be summed in some manner across moving and static affines regardless).

@arokem -- would you be able to shed some light on how we might go about generalizing deform_streamlines to better work with any input image orientation?
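One plausible reading of the adaptive-sign idea above, as a numpy-only sketch. Everything here is an assumption, not the final solution: the 2 mm voxel size, the choice to take the signs from the static affine's diagonal, and the example affine itself are all illustrative.

```python
import numpy as np

def signed_isocenter(static_affine, vox_size=2.0):
    """Hypothetical helper: build a diagonal grid-to-world affine whose
    axis signs follow the signs of the static image's direction cosines.
    The voxel size and the sign source are assumptions for illustration."""
    signs = np.sign(np.diag(static_affine)[:3])
    return np.diag(np.append(signs * vox_size, 1.0))

# Hypothetical LAS template affine (negative x direction cosine).
static_affine = np.diag([-2., 2., 2., 1.])
iso = signed_isocenter(static_affine)
print(iso)  # diagonal of (-2, 2, 2, 1)
```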

@jhlegarreta jhlegarreta force-pushed the AddStreamlineDeformationTutorial branch from b7b2400 to 127279f on July 20, 2019 16:47
@jhlegarreta
Contributor Author

jhlegarreta commented Jul 20, 2019

Rebased on master to adapt to the change in #1902. Force pushed inadvertently, and all previous commits were pushed again. I'll squash once we finish the example.

@dPys
Contributor

dPys commented Jul 20, 2019

Hi Jon,

Thanks for merging the changes into the example. After playing with this more today, one thing that has become fairly clear is that a key assumption will need to be, as you noted previously, that the dwi data is stored in RAS orientation. The template image can be in LAS, as in the example right now, but if it is in RAS as well, then "adjusted_affine[0][3] = -adjusted_affine[0][3] - static_affine[0][3]" would instead become "adjusted_affine[0][3] = -adjusted_affine[0][3] + static_affine[0][3]".
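A numpy-only sketch of that sign difference, using illustrative offsets (the affine values are hypothetical, not taken from the actual data):

```python
import numpy as np

# Hypothetical RAS+ moving-image affine (2 mm voxels).
moving_affine = np.diag([2., 2., 2., 1.])
moving_affine[:3, 3] = [-90., -126., -72.]

# Hypothetical LAS template affine (negative x direction cosine).
static_affine = np.diag([-2., 2., 2., 1.])
static_affine[:3, 3] = [90., -126., -72.]

adjusted_affine = moving_affine.copy()
# LAS template (as in the example right now):
adjusted_affine[0][3] = -adjusted_affine[0][3] - static_affine[0][3]
# For a RAS template, the x rule would instead read:
# adjusted_affine[0][3] = -adjusted_affine[0][3] + static_affine[0][3]
adjusted_affine[1][3] = -adjusted_affine[1][3] + static_affine[1][3]
adjusted_affine[2][3] = -adjusted_affine[2][3] + static_affine[2][3]

# With these symmetric illustrative offsets, all extents cancel to zero.
print(adjusted_affine[:3, 3])
```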

One other thing that I discovered is that instead of explicitly transforming the non-rigid deformed streamlines, another option is to perform the following:

vox_size = fa_img.get_header().get_zooms()[0]
target_isocenter = np.diag(np.array([-vox_size, vox_size, vox_size, 1]))
adjusted_affine = target_isocenter.copy()
adjusted_affine[0][3] = template_img.affine[0][3] - fa_img.affine[0][3]
adjusted_affine[1][3] = template_img.affine[1][3] - fa_img.affine[1][3]
adjusted_affine[2][3] = template_img.affine[2][3] - fa_img.affine[2][3]
mni_streamlines = deform_streamlines(streamlines, deform_field=mapping.get_forward_field(),
                                     stream_to_current_grid=target_isocenter,
                                     current_grid_to_world=target_isocenter,
                                     stream_to_ref_grid=target_isocenter,
                                     ref_grid_to_world=adjusted_affine)

The above approach should work, for instance, in the case that both the moving and static images are in RAS, and the offsets are scaled appropriately in the case that any image reslicing was performed.

@dPys

@jhlegarreta
Contributor Author

Thanks for the hard work @dPys. Looks like a less complicated solution than the previous one, and so more appropriate for an example (?).

Let's see what others think.

@dPys
Contributor

dPys commented Jul 22, 2019

Yes @jhlegarreta -- this should be en route to a less complicated (albeit still incomplete) solution and better use of the affine parameters in deform_streamlines. Would love to hear others' feedback as well.

Cheers,
@dPys

@arokem
Contributor

arokem commented Jul 25, 2019

Hi everyone -- sorry for my slowness here. Unfortunately, I will continue to be rather slow to reply in the next few weeks, as I am organizing a summer school here in the next couple of weeks, and going on vacation after that.

Regarding HCP data: the problem with using HCP in examples is that though the data is openly available, it is provided under the condition that users agree to certain terms and conditions. This means that we cannot redistribute HCP data. That's the reason we have the CENIR dataset in the first place.

I am not sure whether this applies equally to population-level templates, though. If FSL is distributing it as part of their software distribution, maybe it's OK to distribute that?

@dPys
Contributor

dPys commented Jul 25, 2019

I am not sure whether this applies equally to population-level templates, though. If FSL is distributing it as part of their software distribution, maybe it's OK to distribute that?

Hi Ariel--

No worries. Getting @jhlegarreta 's WIP to work smoothly has become a high priority of mine, and to be honest, it may take me a bit more time to tinker with the affines before we get there completely. In the long-run, these routines may eventually be more elegantly written out with their own class...

One caveat to consider RE: the HCP template-- the 2mm template was actually something I created by resampling the 1mm template which is openly distributed by FSL since their latest release. No HCP 2mm template previously existed as far as I'm aware. Let me know if you still think it would be best to go with a different MNI WM template (i.e. one already available with fetch).

Best of luck with hack school, teach em good!

@dPys

@jhlegarreta jhlegarreta force-pushed the AddStreamlineDeformationTutorial branch from 127279f to 299521e on July 26, 2019 13:03
@jhlegarreta
Contributor Author

Rebased on master to fix conflicts and force-pushed.

This means that we cannot redistribute HCP data. That's the reason we have the CENIR dataset in the first place.

😟 In that case, and unless a formal/official reply is received from the HCP consortium, I would not rely on what another software package is providing.

If we still want to go the FA way, although it may seem uncommon not to use the HCP as the reference, @dPys, does any major drawback of using the CENIR dataset instead of the HCP dataset come to mind? Not sure it is an advantage, but the CENIR dataset's resolution is 2 mm isotropic.

@dPys
Contributor

dPys commented Jul 29, 2019

@jhlegarreta -- I just PR-ed the solution into the AddStreamlineRegistrationTutorial branch on your dipy fork (also found here: https://github.com/dPys/dipy/blob/AddStreamlineDeformationTutorial/doc/examples/streamline_registration.py). Let me know what you think :-) It should now work with all RAS+ oriented images.

The CENIR dataset that you mentioned doesn't appear to have an FA template in MNI-space? For now, I'm still using the HCP 2mm FA template that I made, but when we find a suitable replacement that can be obtained with fetch, we could just swap out #L47-48.

@dPys

@jhlegarreta
Contributor Author

jhlegarreta commented Jul 30, 2019

As always, thanks @dPys ! I've commented the PR you made.

The CENIR dataset that you mentioned doesn't appear to have an FA template in MNI-space?

Not that I know. If we finally need to use it instead of the HCP data, we'd need to compute it in the example itself I guess.

@arokem
Contributor

arokem commented Jul 30, 2019

Please see #1909 (comment).

I think that it would be fine to have a less-than-perfect example here, and then explain in the accompanying text how you would improve upon it.

Member

@skoudoro skoudoro left a comment


Thank you for this nice tutorial @jhlegarreta and @dPys. Here a quick review but I need to run it first.

(5 review comments on doc/examples/streamline_registration.py — outdated, resolved)
@dPys
Contributor

dPys commented Jul 30, 2019

Please see #1909 (comment).

I think that it would be fine to have a less-than-perfect example here, and then explain in the accompanying text how you would improve upon it.

Hi @arokem ,

Sounds good. Do you have another FA template in mind for us to use instead of the HCP one?

@dPys

@jhlegarreta
Contributor Author

jhlegarreta commented Jul 30, 2019

I'll do the rebase.

In order to do that, a decision on the dataset issue would be welcome. As I see things, we have two choices:

  • Go for the original, simpler (may be less correct) approach of using the Stanford HARDI b0's as the template image, and explain its limitations.
  • Compute the FA template using the CENIR dataset within this example.

Not sure about the amount of code the latter would require, but I guess it fits within the purpose of the example.

I am fine with either approach.

@dPys
Contributor

dPys commented Jul 30, 2019

Thanks @jhlegarreta !

What about using the FMRIB58_FA_1mm.nii.gz image from fsl/data/standard (also attached),
FMRIB58_FA_1mm.nii.gz
which we can just resample to 2mm?

It's pretty legacy, but at least it's an FA template (it's also the one typically used for tbss), and at least it's not from HCP...

@jhlegarreta
Contributor Author

What about using the FMRIB58_FA_1mm.nii.gz image from fsl/data/standard

I guess the FSL license is again something to look at:

No part of the Software may be reproduced, modified, transmitted or transferred in any form or by any means, electronic or mechanical, without the express permission of the University. The permission of the University is not required if the said reproduction, modification, transmission or transference is done without financial return, the conditions of this Licence are imposed upon the receiver of the product, and all original and amended source code is included in any transmitted product. You may be held legally responsible for any copyright infringement that is caused or encouraged by your failure to abide by these terms and conditions.

Not sure if DIPY is comfortable with this as it seems to refer to the SW rather than the data ((...) and all original and amended source code is included in any transmitted product.).

@dPys
Contributor

dPys commented Aug 1, 2019

Thanks for looking into these options with me @jhlegarreta .

I suppose the Stanford B0 template will just have to work for the time being, just to get this thing out there as you and @arokem suggested (i.e. approach 1)?

Sounds like our next task may be to create a new FA template image for dipy :-) Although I don't have strong empirical justification for it for direct streamline normalization, I know that @clintg6 and @mattcieslak would probably agree that in the long-term, an FA template image would be ideal.

Great working with you @jhlegarreta, and thanks for helping me to refine the example so well!

@dPys

jhlegarreta and others added 5 commits August 2, 2019 13:46
Add streamline deformation example and workflow.

Resolves dipy#1840.
Address PEP8 warnings.
Add references to the SyN paper and ANTs software.
Incorporate @dPys' suggestions:
- Use the FA map as the template.
- First apply the non-rigid registration, then the affine transformation
  with corrected extents.
@jhlegarreta jhlegarreta force-pushed the AddStreamlineDeformationTutorial branch from 0eace82 to f66eb9c Compare August 2, 2019 21:41
Use the mean b0 template back and rebase on `master`.

Use the new `load_tractogram` and `save_tractogram` methods to encourage
their adoption and increase consistency for the future.

Complete the documentation.

Increase consistency when passing string literals to methods.
@jhlegarreta jhlegarreta force-pushed the AddStreamlineDeformationTutorial branch from f66eb9c to d614b88 Compare August 2, 2019 21:46
@jhlegarreta
Copy link
Contributor Author

Force-pushed.

Rebased on master.

Not done, though. The streamlines are not located in the right place; I guess the affines part is not correct 😔.

@dPys
Copy link
Contributor

dPys commented Aug 4, 2019

Hi @jhlegarreta -- please see my PR into your forked branch which includes some revisions: jhlegarreta#3

Basically, the following line was missing:

"""
We estimate an affine that maps the origin of the moving image to that of the
static image. We can then use this later to account for the offsets of each
image.

"""
affine_map = transform_origins(static, static_affine, moving, moving_affine)

But also, the img_t2_mni data needed to be resliced to 2 mm voxels to match the HARDI data:

and it appeared that the affine_map was instead being set from an identity matrix, which won't work with the affine-flipping code.

There was also a caveat with the img_t2_mni template image that was not a caveat previously with the HCP FA template image:

"""
The second one will be the T2-contrast MNI template image, which we'll need to
reslice to 2x2x2 mm isotropic voxel resolution to match the HARDI data.
"""
from dipy.data.fetcher import fetch_mni_template, read_mni_template
from dipy.align.reslice import reslice

fetch_mni_template()
img_t2_mni = read_mni_template("a", contrast="T2")

new_zooms = (2., 2., 2.)
data2, affine2 = reslice(img_t2_mni.get_data(), img_t2_mni.affine,
                         img_t2_mni.header.get_zooms(), new_zooms)
img_t2_mni = nib.Nifti1Image(data2, affine=affine2)

Ultimately, there are three core assumptions for direct streamline normalization to work as expected with the routines now set up with dipy:

  1. both moving and static images are in RAS
  2. both moving and static images have the same voxel resolution (i.e. zooms)
  3. affines for both moving and static images are correct and uncorrupted
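Those three assumptions can be sanity-checked up front. A rough numpy-only sketch (the helper `check_dsn_assumptions` is hypothetical, not part of the dipy API, and the RAS test assumes non-oblique affines):

```python
import numpy as np

def check_dsn_assumptions(static_affine, static_zooms, moving_affine, moving_zooms):
    # Hypothetical helper: verify the three assumptions listed above.
    for name, aff in (("static", static_affine), ("moving", moving_affine)):
        R = np.asarray(aff, float)[:3, :3]
        # 1. RAS: each voxel axis should point dominantly along +x, +y, +z.
        axes = np.argmax(np.abs(R), axis=0)
        signs = np.sign(R[axes, range(3)])
        assert list(axes) == [0, 1, 2] and (signs > 0).all(), \
            f"{name} image is not in RAS orientation"
        # 3. Affine must be finite and invertible (not corrupted).
        assert np.isfinite(np.asarray(aff, float)).all() and \
            abs(np.linalg.det(R)) > 0, f"{name} affine looks corrupted"
    # 2. Both images must share the same voxel resolution (zooms).
    assert np.allclose(static_zooms, moving_zooms), "zooms differ"

# Example: both images 2 mm isotropic and RAS, so the checks pass silently.
check_dsn_assumptions(np.diag([2., 2., 2., 1.]), (2., 2., 2.),
                      np.diag([2., 2., 2., 1.]), (2., 2., 2.))
```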

And voilà:

[Screenshot: resulting streamlines]

Also, it may be worth overtly stating the limitation of using a t2-contrast template image as the target for normalization. Namely, it does not provide optimal tissue contrast for maximal SyN performance (see SyN axial slices below). Thus, ideally, an FA template should be used as the static image, and an FA map of the moving image should be used as the moving image.

[Screenshot: SyN axial slices]

@dPys

Update streamline_registration.py tutorial to work with B0 template data
@jhlegarreta jhlegarreta force-pushed the AddStreamlineDeformationTutorial branch from 143fa37 to cb661da Compare August 6, 2019 00:55
Make the example style be consistent.
@jhlegarreta
Copy link
Contributor Author

Thanks @dPys for enlightening the way !

both moving and static images are in RAS
both moving and static images have the same voxel resolution (i.e. zooms)

Also, it may be worth overtly stating the limitation of using a t2-contrast template image as the target for normalization. Namely, it does not provide optimal tissue contrast for maximal SyN performance (see SyN axial slices below). Thus, ideally, an FA template should be used as the static image, and an FA map of the moving image should be used as the moving image.

Added these in 4c0718d.

The nibabel import and an edit due to the use of the new stateful_tractogram (as in
d614b88) were also added in that commit (not honoring the comment 🙃).

Thus, the example is working now 🎆 🎉 🍾 .

If we want to change the file's name to honor the title given, and/or change the axial slice view to a sagittal one, now is the time.

Otherwise, if maintainers @arokem @skoudoro @Garyfallidis give it the green light, it can be squashed (to avoid polluting the history with unnecessary back-and-forth commits) and merged!

@dPys
Copy link
Contributor

dPys commented Aug 6, 2019

I'd be supportive of renaming the file to streamline_normalization.py but I'm happy with whatever everyone thinks is best.

This was a fun one, and there is still plenty of room for refinement down the road. Great working with you @jhlegarreta and looking forward to working with you again!

@dPys

@dPys
Copy link
Contributor

dPys commented Aug 10, 2019

@jhlegarreta -- I've simplified the affine transformation code even further. @arokem may be interested in this solution as well.

Recall that in addition to running SyN, we need to account for the origin offsets in the FOV of both the template image (static) and the B0/FA image (moving) using transform_origins. Let's use this origin transformation (which defaults to identity along the diagonal) as the starting point for the current_grid_to_world transformation that we can later feed into deform_streamlines:

affine_map = transform_origins(static, static_affine, moving, moving_affine)
affine_map.affine
array([[  1.        ,   0.        ,   0.        , -14.        ],
       [  0.        ,   1.        ,   0.        , -69.0411377 ],
       [  0.        ,   0.        ,   1.        ,  14.57869339],
       [  0.        ,   0.        ,   0.        ,   1.        ]])

Now, we make two small adjustments to the y-dimension of the origin transformation above: 1) we flip the sign of the offset to get the mirror image of the transformation (we don't need to do this for the x-offset because, as you will see, we are going to 'ignore' that dimension later anyway); 2) we divide the offset by the square of the voxel resolution, since the diagonal was merely an identity matrix and did not account for the image zooms (here, 2 mm). We use the square of the zoom (i.e. 4) because the origin transformation in both the y- and z-dimensions needs to be corrected (again, we are ignoring the x-offset) and we are working in 3-dimensional space:

adjusted_affine = affine_map.affine.copy()
adjusted_affine[1][3] = -adjusted_affine[1][3]/vox_size**2
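As a quick numeric check (using the origin transform printed above and vox_size = 2; the numbers are just those from this example):

```python
import numpy as np

vox_size = 2.
# The origin transform printed above.
affine = np.array([[1., 0., 0., -14.],
                   [0., 1., 0., -69.0411377],
                   [0., 0., 1., 14.57869339],
                   [0., 0., 0., 1.]])
adjusted_affine = affine.copy()
# Flip the sign of the y-offset and divide by the squared voxel size.
adjusted_affine[1][3] = -adjusted_affine[1][3] / vox_size**2
print(adjusted_affine[1][3])  # 69.0411377 / 4 ≈ 17.2603
```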

Once that's done, all that's left is to deform the streamlines by applying the forward deformation field. The deformation field is not aligned with the FOV of the streamlines, so one way to handle this is to simply collapse the first axis of the field, which ensures that it is anchored to the corner of the FOV in the x-dimension when it is applied (this is also why we can avoid adjusting the x-offset mentioned earlier):

adjusted_field = mapping.get_forward_field()[-1:]
mni_streamlines = deform_streamlines(
    streamlines, deform_field=adjusted_field,
    stream_to_current_grid=template_img.affine,
    current_grid_to_world=adjusted_affine,
    stream_to_ref_grid=template_img.affine,
    ref_grid_to_world=np.eye(4))

And that's it!

Hopefully this streamlines things a bit more ;)

@dPys

@dPys dPys mentioned this pull request Aug 10, 2019
@jhlegarreta
Copy link
Contributor Author

@dPys thanks for the endeavor to simplify things and explain them thoroughly !

I will not be able to test or edit the code for a few weeks, and will have limited/intermittent network access, so go ahead with adding commits to this PR if necessary.

I am fine with either of the last two versions, but it is true that your last proposal is more compact.

Otherwise, this is ready to be merged on my end !

@jhlegarreta jhlegarreta changed the title WIP: ENH: Add streamline deformation example and workflow ENH: Add streamline deformation example and workflow Aug 11, 2019
@jhlegarreta
Copy link
Contributor Author

Checks are passing so the code is ready to be merged if no further modifications are required.

@dPys
Copy link
Contributor

dPys commented Aug 31, 2019 via email

@arokem
Copy link
Contributor

arokem commented Sep 1, 2019

I think this is ready to go! Thanks for all your work @dPys and @jhlegarreta!

@arokem arokem merged commit ffe7637 into dipy:master Sep 1, 2019
@jhlegarreta jhlegarreta deleted the AddStreamlineDeformationTutorial branch September 1, 2019 14:37
Successfully merging this pull request may close these issues.

Tutorial showing how to apply deformations to streamlines