
"Image based streamlines_registration: unable to warp streamlines into template" #2786

Closed
WilliamFCB opened this issue Apr 21, 2023 · 25 comments · Fixed by #2930

Comments

@WilliamFCB

In light of the upcoming workshop, I was curious whether the issue reported and discussed in #2703, #2400, and #2369 has been resolved.

Many thanks in advance!

@skoudoro
Member

Hi @WilliamFCB,

It has been solved but not yet documented. Unfortunately, we will only have time to document and update this tutorial after the workshop. But we can discuss it and help during the workshop!

Sorry about that.

@WilliamFCB
Author

Hi Serge,

Unfortunately, I will not be able to attend the workshop this year. However, I don't think I need the whole documentation. Maybe you could share the "new" line of code for applying the warp at the end?
Have a great workshop!!

@skoudoro
Member

Normally, `mni_streamlines = mapping.transform_points(sft.streamlines)` should be enough, perhaps with a check that the transformation is in the right space. You no longer need `deform_streamlines`.

@WilliamFCB
Author

Hi Serge,

Many thanks for your quick response

Should anything be changed when saving the mni_streamlines to file?

I currently use the following, from the "old" tutorial:

```python
from dipy.io.stateful_tractogram import Space, StatefulTractogram
from dipy.io.streamline import save_tractogram

sft = StatefulTractogram(mni_streamlines, fa_temp_img, Space.RASMM)

save_tractogram(sft, '/mnt/scratch/VIA11/sub-via003ses01prisma/tractseg_output/MNI_CST_left6.tck',
                bbox_valid_check=False)
```

However, I get the following error:

```
Traceback (most recent call last):

  File "/tmp/ipykernel_951197/346544094.py", line 6, in <cell line: 6>
    save_tractogram(sft, '/mnt/scratch/VIA11/sub-via003ses01prisma/tractseg_output/MNI_CST_left6.tck', bbox_valid_check=False)

  File "/mrhome/wimb/anaconda3/envs/tractseg2.7/lib/python3.8/site-packages/dipy/io/streamline.py", line 67, in save_tractogram
    nib.streamlines.save(fileobj, filename)

  File "/mrhome/wimb/anaconda3/envs/tractseg2.7/lib/python3.8/site-packages/nibabel/streamlines/__init__.py", line 137, in save
    tractogram_file.save(filename)

  File "/mrhome/wimb/anaconda3/envs/tractseg2.7/lib/python3.8/site-packages/nibabel/streamlines/tck.py", line 226, in save
    data = np.r_[t.streamline, self.FIBER_DELIMITER]

  File "/mrhome/wimb/anaconda3/envs/tractseg2.7/lib/python3.8/site-packages/numpy/lib/index_tricks.py", line 412, in __getitem__
    res = self.concatenate(tuple(objs), axis=axis)

  File "<__array_function__ internals>", line 180, in concatenate

ValueError: all the input arrays must have same number of dimensions, but the array at index 0 has 1 dimension(s) and the array at index 1 has 2 dimension(s)
```

Any ideas about what might be amiss?

Many thanks in advance!
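A hedged guess at the cause, illustrated with plain NumPy (variable names here are mine): if the tractogram holds a flat (N, 3) array instead of a list of (n_i, 3) arrays, iterating it yields 1-D points, and the TCK writer's `np.r_` concatenation with the 2-D NaN fiber delimiter fails with exactly this error.

```python
import numpy as np

# A 1-D "streamline" (a single point row) cannot be concatenated with
# the 2-D fiber delimiter the TCK writer appends after each streamline.
point = np.zeros(3)                      # 1-D: wrong shape for a streamline
delimiter = np.full((1, 3), np.nan)      # 2-D delimiter block
try:
    np.r_[point, delimiter]
except ValueError as err:
    message = str(err)

# A proper (n_points, 3) streamline concatenates fine.
streamline = np.zeros((2, 3))
combined = np.r_[streamline, delimiter]  # shape (3, 3)
```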

@skoudoro
Member

I can't look into it right now, but I will for sure take a look a bit later this week.

@WilliamFCB
Author

I fully understand... Good luck with the workshop! Many thanks

@WilliamFCB
Author

I assume something is missing to correctly structure mni_streamlines after using mapping.transform_points.

Normally, the streamlines are organized as a list of size 10000, i.e. [NumPy array, NumPy array, ...].

When using `mni_streamlines = mapping.transform_points(sft.streamlines)`, mni_streamlines is a single float64 array of shape (294576, 3).

So I am clearly missing an organizational operation.

I guess it has something to do with data_per_point and data_per_streamline.

@WilliamFCB
Author

Please be patient with me, I am a newbie...

What I am saying is that the resulting mni_streamlines are not organized correctly. Currently, it is just an array.

Looking at the API, I assume the organization has something to do with `data_per_point` and `data_per_streamline`.

@skoudoro
Member

No worries, this is a good catch! I can see that the function transform_points does not return a Streamlines object, it just returns the points. It should. This needs an update.

For now, I recommend you reuse/duplicate the input streamlines and replace the data variable. It should work.
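As a sketch of that suggestion, one way to split the flat points returned by transform_points back into per-streamline arrays, reusing the original streamline lengths (the helper name and approach are mine, not DIPY API):

```python
import numpy as np

def regroup_points(original_streamlines, flat_points):
    """Split a flat (N, 3) point array back into per-streamline arrays,
    reusing the lengths of the original streamlines (hypothetical helper)."""
    lengths = [len(s) for s in original_streamlines]
    offsets = np.concatenate([[0], np.cumsum(lengths)])
    return [np.asarray(flat_points[offsets[i]:offsets[i + 1]])
            for i in range(len(lengths))]

# Usage sketch (assumes `mapping` and `sft` from the tutorial):
# warped = mapping.transform_points(sft.streamlines)
# mni_streamlines = regroup_points(sft.streamlines, warped)
```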

@WilliamFCB
Author

Okay, thanks.
I will try to figure out how to do that.

@WilliamFCB
Author

Reusing the input streamlines would probably not be enough, I guess. How do I ensure the right affine and non-linear warps are included to get the streamlines into MNI space?

@WilliamFCB
Author

I tried:

```python
mni_streamlines = mapping.transform_points(sft.streamlines)
sft.streamlines._data = mni_streamlines
sft = StatefulTractogram(sft.streamlines, fa_temp_img, Space.RASMM)
save_tractogram(sft, '/mnt/scratch/VIA11/sub-via003ses01prisma/tractseg_output/MNI_CST_left6_new3.tck', bbox_valid_check=False)
```

It doesn't work: the streamlines are not in MNI space. The original streamlines actually seem to fit better.

This is above my pay grade :-). I will wait for you to look into it.

The warps themselves are fine: I warped the input FA to MNI, and it fits the template nicely.
Cheers

@WilliamFCB
Author

Hi Serge,

I did a bit more digging, and I think I figured it out.
mapping.is_inverse was set to True.
So I had to bypass the mapping.transform_points function and apply the forward transform directly (see below).
I have no idea about the internal workings of DIPY, so I did not want to set mapping.is_inverse to False :-)

I have a lot of tracts I want to transfer to MNI (~50 per subject, ~370 subjects). I could do it the way below, but I would prefer a "valid"/"proper" DIPY implementation (e.g., transform_points returning a Streamlines object and proper handling of mapping.is_inverse). Any idea when this could realistically be implemented?

```python
coord2world = None
world2coord = None
mni_streamlines = mapping._warp_coordinates_forward(sft.streamlines.get_data(), coord2world, world2coord)

from dipy.io.stateful_tractogram import Space, StatefulTractogram
from dipy.io.streamline import save_tractogram

sft.streamlines._data = mni_streamlines
sft = StatefulTractogram(sft.streamlines, fa_temp_img, Space.RASMM)

save_tractogram(sft, '/mnt/scratch/VIA11/sub-via003ses01prisma/tractseg_output/MNI_CST_right.tck', bbox_valid_check=False)
```

@WilliamFCB
Author

PS, on a more practical note:
How do I go about saving the source2MNI and MNI2source affines and warps? I would like to have them for possible use down the road.

Again, many thanks in advance

@skoudoro
Member

Thank you for the feedback @WilliamFCB! Happy that you solved your issue.

Feel free to contribute and update the tutorial; it would be great and we would appreciate the contribution!

Concerning the update of transform_points, it should be done in May, but the next DIPY release will probably not be until September. I will keep you updated if we have to cut a release earlier, e.g. in July.

> How do I go about saving the source2MNI and MNI2source affines and warps? I would like to have them for possible use down the road.

We save them in a NIfTI file. We have a small function for that: save_mapping. You can look here for an example.

@skoudoro
Member

> How do I go about saving the source2MNI and MNI2source affines and warps? I would like to have them for possible use down the road.

Also, more info in this thread: #2359

@WilliamFCB
Author

Many thanks!
I will have to think about the tutorial. I am new to DIPY (and GitHub) and currently swamped with work.
Also, I am not sure whether an update now would just add "noise", since my solution is more of a hack. Of course, one could simply be clear about the latter.
In any case, if I were to update the tutorial, what is the procedure? Do I send some lines to you? I can only update the part about applying the warp to the streamlines, but maybe that is what you meant?
For example, I warp a 1 mm FA source to the 1 mm HCP FA template, so there is no need for all the steps accounting for initial voxel size differences.

@WilliamFCB
Author

Hi Serge,
How can I use DIPY functions to (non-linearly) warp an FA image to MNI if part of the cerebellum is missing? How do I prevent "over-stretching"?

@skoudoro
Member

Maybe by using a mask and registering only part of the brain.

Tutorial: https://dipy.org/documentation/1.6.0./examples_built/affine_registration_masks/#example-affine-registration-masks

@WilliamFCB
Author

Many thanks.
My first attempt, loading the associated 3D binary brain masks and using the dataobj in "optimize", was unsuccessful...

I will call it a day and see if I can figure it out in the coming days.
Enjoy your weekend!

@WilliamFCB
Author

Hi Serge,

I can't get it to work. DIPY's requirements for the 3D masks are not clear to me, and I could not find any info on this in the documentation or forum; of course, I might have missed it.
Anyway, I created binary brain masks for the raw and template FA images, inverted them, and loaded them into DIPY.
As I understand from the documentation: "Only pixels where the mask is non-zero in the (static) image will contribute to Mutual Information."

```python
fa_temp_mask_data, fa_temp_mask_affine, fa_temp_mask_img = load_nifti(fa_temp_mask, return_img=True)
fa_raw_mask_data, fa_raw_mask_affine, fa_raw_mask_img = load_nifti(fa_raw_mask, return_img=True)
```

I then defined the masks as follows; I am not sure this is the right way to do it, though:

```python
static_mask = np.asarray(fa_temp_mask_img.dataobj)
moving_mask = np.asarray(fa_raw_mask_img.dataobj)
```

I then do the following registration steps: center-of-mass registration, translation, rigid.
The results look good.

I then try to apply the masks in the full affine stage:

```python
transform = AffineTransform3D()
params0 = None
starting_affine = rigid.affine
affine = affine_reg.optimize(static, moving, transform, params0,
                             static_grid2world, moving_grid2world,
                             starting_affine=starting_affine,
                             static_mask=static_mask,
                             moving_mask=moving_mask)
```

I get the following error:
```
/mrhome/wimb/anaconda3/envs/tractseg2.8/lib/python3.8/site-packages/dipy/align/imaffine.py:1053: RuntimeWarning: divide by zero encountered in divide
  moving = (moving.astype(np.float64) - mmin) / (mmax - mmin)
/mrhome/wimb/anaconda3/envs/tractseg2.8/lib/python3.8/site-packages/dipy/align/imaffine.py:1053: RuntimeWarning: invalid value encountered in divide
  moving = (moving.astype(np.float64) - mmin) / (mmax - mmin)
Optimizing level 2 [max iter: 10000]
Traceback (most recent call last):

  Cell In[129], line 4
    affine = affine_reg.optimize(static, moving, transform, params0,

  File ~/anaconda3/envs/tractseg2.8/lib/python3.8/site-packages/dipy/align/imaffine.py:1177 in optimize
    current_static_mask = current_affine_map.transform(

  File ~/anaconda3/envs/tractseg2.8/lib/python3.8/site-packages/dipy/utils/deprecator.py:408 in wrapper
    return function(*args, **kwargs)

  File ~/anaconda3/envs/tractseg2.8/lib/python3.8/site-packages/dipy/align/imaffine.py:410 in transform
    transformed = self._apply_transform(image, interpolation,

  File ~/anaconda3/envs/tractseg2.8/lib/python3.8/site-packages/dipy/utils/deprecator.py:408 in wrapper
    return function(*args, **kwargs)

  File ~/anaconda3/envs/tractseg2.8/lib/python3.8/site-packages/dipy/align/imaffine.py:363 in _apply_transform
    transformed = _transform_method[(dim, interpolation)](image, shape,

  File dipy/align/vector_fields.pyx:1657 in dipy.align.vector_fields.__pyx_fused_cpdef

TypeError: No matching signature found
```

Any ideas?
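One hedged guess, based on the fused-type dispatch in the traceback: the Cython transform kernels only accept certain array dtypes, so a boolean or unsupported integer mask may fail to match any signature. A sketch of binarizing and casting the masks before passing them to optimize (the helper name and the dtype choice are assumptions, not confirmed DIPY requirements):

```python
import numpy as np

def prepare_mask(mask_data):
    """Binarize a mask and cast it to a dtype the Cython transform code
    can dispatch on (assumption: float64; some versions may want int32)."""
    return (np.asarray(mask_data) > 0).astype(np.float64)

# Usage sketch:
# static_mask = prepare_mask(fa_temp_mask_img.dataobj)
# moving_mask = prepare_mask(fa_raw_mask_img.dataobj)
```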

@WilliamFCB
Author

Hi Serge,

I am still interested in the latter.

However, I think I have found a general solution for warping individual FA images into MNI standard space that accounts for data in which part of the cerebellum is missing.
I implemented a procedure that automatically "cuts off" the part of the FA template that is not represented in the subject's "moving" image.

It seems to work nicely, at least on the two test datasets I ran :-)
I will run it on all the data this week.

Question: how do I transform points/streamlines if I only want to apply the affine part of the warp?

Thanks in advance
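On the affine-only question, a plain NumPy sketch of applying a 4x4 homogeneous affine to an (N, 3) point array (this is generic linear algebra, not a specific DIPY call; the function name is mine):

```python
import numpy as np

def apply_affine_to_points(affine, points):
    """Apply a 4x4 homogeneous affine to an (N, 3) array of points."""
    points = np.asarray(points, dtype=float)
    # Append a ones column so translation is applied by the matrix product.
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ np.asarray(affine).T)[:, :3]
```

nibabel also ships an equivalent utility, `nibabel.affines.apply_affine`, which may be preferable to rolling your own.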

@WilliamFCB
Author

Hi Serge,

It's really great that you finally had time to update the tutorial!

Unfortunately, I will not be able to try it out due to too many commitments.
For now, at least, I have moved away from DIPY proper. I have built pipelines, which include warping tracts and diffusivity images, using a combination of different tools (e.g., SynthStrip, warping of WM FODs to a study WM FOD template (and MNI if needed), MRtrix, FSL).

BW
William

@skoudoro
Member

No problem. Thank you for your valuable feedback @WilliamFCB, and again apologies for the delay.

@WilliamFCB
Author

No need to apologise. I know you guys are very busy :-)
