Free Water Elimination DTI #827

Open
etpeterson opened this Issue Jan 5, 2016 · 23 comments

Comments

8 participants
@etpeterson
Contributor

etpeterson commented Jan 5, 2016

I'm interested in doing some free water elimination DTI analysis and as far as I can tell dipy doesn't yet do this, but it probably provides a lot of what is needed. I'm thinking of something similar to this paper (http://www.ncbi.nlm.nih.gov/pubmed/25271843).

I'd be interested in hearing about any interest or thoughts about how best to approach this.

-Eric

@arokem


Member

arokem commented Jan 5, 2016

Hi Eric -- you are correct: dipy does not have a method for FWE yet. On the other hand, I also agree with you that many of the moving pieces are in place. Importantly, @RafaelNH told me that he is also interested in this method, so you should coordinate on this. Between the two of you, you could probably implement this pretty quickly :-)

@etpeterson


Contributor

etpeterson commented Jan 5, 2016

Thanks, quick is good :) I'll see what Rafael has to say and start looking at where the magic happens in dipy.

@RafaelNH


Contributor

RafaelNH commented Jan 11, 2016

Hi Eric @etpeterson!
I already have a version of the algorithm from the paper you mentioned implemented! I only have to integrate it into Dipy's architecture. I will start a new pull request where we can work on it together (I'll let you know ASAP).

Note: to use the algorithm of the mentioned paper, you will need data acquired with 2 shells.
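For context, the two-compartment (bi-tensor) signal model that this algorithm fits can be sketched in plain numpy. The shell values, tensor, and function names below are illustrative only, not Dipy's API:

```python
import numpy as np

# Illustrative sketch of the two-compartment (bi-tensor) model of
# Pasternak et al. 2009: the measured signal mixes a tissue tensor
# compartment with an isotropic free water compartment.
D_WATER = 3.0e-3  # isotropic diffusivity of free water at body temp (mm^2/s)

def bi_tensor_signal(bvals, bvecs, D_tissue, f, S0=1.0):
    """Signal for free water fraction f and tissue fraction (1 - f)."""
    bvecs = np.asarray(bvecs, dtype=float)
    # Tissue attenuation exp(-b g^T D g) for each gradient direction g.
    adc = np.einsum('ij,jk,ik->i', bvecs, D_tissue, bvecs)
    tissue = np.exp(-bvals * adc)
    water = np.exp(-bvals * D_WATER)
    return S0 * ((1.0 - f) * tissue + f * water)

# Two-shell acquisition (b = 500 and 1500 s/mm^2) along the three axes.
bvals = np.array([500., 500., 500., 1500., 1500., 1500.])
bvecs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]] * 2)
D_tissue = np.diag([1.7e-3, 0.3e-3, 0.3e-3])  # prolate tissue tensor

S = bi_tensor_signal(bvals, bvecs, D_tissue, f=0.3)
```

With a single shell, f and the tissue tensor trade off against each other, which is why the regularized (Pasternak-style) approach discussed later in this thread is needed for single-shell data.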

@etpeterson


Contributor

etpeterson commented Jan 11, 2016

Thanks Rafael, that sounds great! I have 2 shell data and should have time this week so once it's up I'll start working on it.

@RafaelNH


Contributor

RafaelNH commented Jan 14, 2016

I just created pull request #835 so that you can follow the implementation's progress. At the moment I have only added the skeletons of the model's classes. You may find it easier to give input after further development, but feel free to take a first look at what was added and give feedback.

@RafaelNH


Contributor

RafaelNH commented Nov 4, 2016

Well - since the free water DTI is already implemented and #835 is already merged, I think we can close this issue, right @arokem and @etpeterson?

@RafaelNH RafaelNH closed this Nov 4, 2016

@arokem


Member

arokem commented Nov 4, 2016

@etpeterson : have you had a chance to try this out? Would be interesting to hear how it works for you.

@etpeterson


Contributor

etpeterson commented Nov 4, 2016

I haven't tried a recent version, but as of a few months ago it was working pretty well. I was doing a little cleanup afterwards, but I think @RafaelNH has improved it since then, so it may not even need that anymore.

@anbai106


anbai106 commented Jan 4, 2018

Sorry to reopen this :)

Great work on the FWE-DTI model! It would also be great to be able to fit it to single-shell DWI data.

I am not a diffusion expert, but it would be really nice to have a bi-tensor model like Pasternak et al. (2009) that works on single-shell DWIs. I know of a group that does this based on the FWE-DTI model, but they do not want to share their code.

What do you think?

Hao

@RafaelNH


Contributor

RafaelNH commented Jan 23, 2018

@anbai106 - Due to other priorities, I will not be able to start working on single-shell free water DTI code in the coming weeks. However, I hope to have a student working on this ASAP, so stay posted!

@alecrimi


alecrimi commented May 14, 2018

Hi,
thank you for developing this. From the code, it seems that the free water elimination works only with the tensor model and that spherical harmonics are not covered; is this correct?

@arokem


Member

arokem commented May 14, 2018

Hi @alecrimi : that's right. This is a DTI-based model, and there is currently no way to fit it with spherical harmonics as the non-water component.

@sameerd


sameerd commented May 30, 2018

Hi @RafaelNH and @anbai106 and anyone else interested..

I made an attempt to implement the single-shell free water code based on the work of Pasternak et al. 2009. Please see the Jupyter notebook (the computed free water maps are near the end, in cell 53) and the notes (PDF file).

I could use some help debugging this code, verifying that it is correct, and cleaning it up. We could make it compatible with dipy's architecture and contribute it here if there is interest. Also, if any of you are working on your own implementation, I would be happy to collaborate with you.

@mvgolub


mvgolub commented Jul 30, 2018

Hello everyone, my name is Marc and I am an MSc student in Biomedical Engineering.

For my thesis, I was tasked with developing a single-shell free water DTI algorithm based on Pasternak et al. 2009 (https://onlinelibrary.wiley.com/doi/full/10.1002/mrm.22055). Rafael Neto Henriques @RafaelNH is one of my supervisors, and I have been consulting with him while implementing this. I have just made pull request #1603 with my implementation.

Since the comments in the code are sparse, here I provide a more detailed explanation of the algorithm (beltrami_notes.pdf); here is also a Wolfram Mathematica script (beltrami_nb.pdf) I used to evaluate some of the algorithm's more complicated expressions. I tested the algorithm with a synthetic dataset I created (sim_data.zip) and also with the stanford_hardi dataset provided with dipy.

I appreciate any help and suggestions to make this code more efficient and to solve some numerical problems I'm having with the parameterized diffusion tensor components, which explode if I don't clip them. I'm also not sure whether the time step and number of iterations I'm using are the best ones. Special thanks to @sameerd for providing his code, which helped me solve some problems with the initialization; my code is a more complex version of his, because it uses a non-Euclidean metric.

I would just like to say that I'm new to git and never collaborated on a project like this, so I apologize if I missed any guidelines for open-source collaboration.

Marc Golub

@sameerd


sameerd commented Jul 30, 2018

Hi Marc @mvgolub, this is great stuff. It's great that you implemented this in dipy.

I made some errors in the initialization myself and haven't corrected my code online yet.

The exploding diffusion tensor coefficients are probably because the time steps are too large.
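As a toy illustration of why large time steps explode (this is not the Beltrami code itself, just a one-variable analogue): explicit Euler updates x -= dt * lam * x diverge as soon as dt exceeds 2/lam, and the tensor components in the PR behave the same way when dt is too large for the local gradient magnitude.

```python
# Toy sketch: explicit Euler / gradient-descent updates on dx/dt = -lam * x.
# Stable (decaying) only when dt < 2 / lam; beyond that, iterates blow up.

def evolve(x0, lam, dt, steps=50):
    x = x0
    for _ in range(steps):
        x = x - dt * lam * x  # one explicit update step
    return x

lam = 100.0                            # stands in for a stiff local gradient
stable = evolve(1.0, lam, dt=0.005)    # dt < 2/lam = 0.02 -> decays to ~0
unstable = evolve(1.0, lam, dt=0.03)   # dt > 2/lam -> explodes
```

Clipping the tensor components hides the symptom; shrinking dt (or rescaling the data, which rescales the effective gradient) removes the cause.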

If you are interested, you could add a switch that changes between the log-Euclidean metric and the affine metric used in the two Pasternak papers.

My email address is sameerdcosta@gmail.com if you would like to get in touch to discuss anything offline.

@mvgolub


mvgolub commented Jul 30, 2018

Hi Sameer @sameerd, thank you for your feedback.

I thought the same about the time-step issue, but when I tried to lower dt I did not see any change in the free water map. It was hard to find the "sweet spot" for dt; rescaling the data helped me find a working value.

Regarding the switch you propose: it seems interesting, and I will discuss the idea with my supervisors. I might contact you for advice in the future. If you decide to take a closer look at my code and have any tips, feel free to contact me at my e-mail: marc.golub93@gmail.com.

@sameerd


sameerd commented Jul 30, 2018

@mvgolub I will look at your code in a week or two but first here are a few suggestions.

  1. Did you try computing the loss functions (equation 5 of the Pasternak et al. 2009 paper)? I don't see them in your PR. If you run only the fidelity part of the code, you should see the loss function decrease for every voxel at every time step. Likewise, if you run only the Beltrami smoothing, you should see the pullback metric (g) decrease for every voxel at every time step. If it doesn't decrease, you know there is a bug, and you should be able to figure out exactly where the gradient is exploding. As an example, see cells 23 and 24 here: in cell 23 you can see loss_vol (the loss of the Beltrami smoothing) and loss_fid decreasing, and in cell 24 the total loss functions are plotted. By turning the alpha coefficients on and off (lower part of cell 22), we can make sure that all the gradients descend smoothly and have greater confidence that the algorithm is converging.

  2. Did you verify the computation of equation A8 in the Pasternak et al. 2009 paper? I'm not 100% convinced it is correct, as I could not re-derive it exactly. Perhaps this is just an error on my part.

Regarding the simpler log-Euclidean metric: it would be greatly appreciated if you could implement a way for the user to choose the metric. Would you have time to take a look at Pasternak et al. 2014, "The Estimation of Free Water Corrected Diffusion Tensors"? In it, they say:

Selecting a Euclidean metric with the canonical tensor representation for the feature metric (distances between tensors), simplifies the analysis considerably, and is also preferred over other types of global tensor metrics [21].

The effects of metric selection are also written out in a paper here.

I hope the suggestions are helpful. Thanks again for all your work!

@mvgolub


mvgolub commented Jul 30, 2018

@sameerd ,

  1. I have not computed the loss functions yet; I was so focused on tweaking the time step and other parts of the code that I forgot about that. Now that you mention it, I remember that in Pasternak et al. 2009 they turned off the fidelity term halfway through the iterations. At the time I did not understand why, but now it has become clearer. I will compute the loss functions just as you did and investigate this in the next few days.

  2. I did not derive the entire computation of A8, but I cross-checked it with other papers by Pasternak. In Gur et al. 2009 (https://link.springer.com/article/10.1007/s11263-008-0196-7), a similar approach is used to regularize tensor fields with the classic single-tensor model (only the fidelity expression changes). Looking at equation 17 of that paper, I believe there is an error in the Pasternak et al. 2009 version, more precisely in the last (Levi-Civita) term: the last index of X should be k, not i.

I will read the papers you linked. From my thesis's perspective, it would be interesting to compare the results of different metrics; it all comes down to how much time I will have in the end. In the later stages of my thesis I will have to prioritize writing the text, because unfortunately my code will not count towards my grade (I think...). However, I am interested in improving the quality of the code and will keep patching it even after I finish my thesis, so implementing an alternative metric for the user to choose is a possibility.

Thank you for your helpful suggestions!

@sameerd


sameerd commented Aug 22, 2018

Hi @mvgolub, I wanted to share a link to this GitHub repo, which has a C++ implementation of the original 2009 Pasternak paper. I haven't tested it, but it takes a similar approach to your code (i.e. the affine metric). Hopefully you find it useful.

@mvgolub


mvgolub commented Aug 23, 2018

Hi @sameerd, great find! I will take a look at it. Meanwhile, I have implemented the Euclidean metric and got reasonable results on my synthetic dataset; later I might integrate it into my original code so that the user can choose a metric, as you suggested.

I also tried to monitor the loss function for a single random voxel. When both the fidelity and Beltrami smoothing terms are turned on, the total loss function decreases. When only the Beltrami smoothing term is on, the FA map becomes smoother, but the loss function increases and the pullback metric g stays constant, contrary to what you suggested, so there might indeed be a bug in my code. I will keep looking into this.

@sameerd


sameerd commented Aug 23, 2018

That is great progress @mvgolub Awesome!

Regarding the decline of the loss functions for the Beltrami smoothing: are both the fidelity part of the code and the updating of the free water maps turned off? Only then will you see a decline in loss_vol at each step.

I have a feeling that you are close, because if g is staying constant then your loss function loss_vol cannot be changing (loss_vol is just the sum over all voxels of sqrt(det(g))). So if you fix a voxel and g isn't changing, loss_vol shouldn't be changing either. This probably means that other parts of your code aren't turned off yet. Hope this helps!
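A minimal numpy sketch of that quantity (function and variable names are illustrative, not the notebook's exact code):

```python
import numpy as np

# loss_vol as described: the sum over voxels of sqrt(det(g)), where g is the
# symmetric positive definite pullback metric at each voxel.

def loss_vol(g_field):
    """g_field: (..., d, d) array of per-voxel metric tensors."""
    # np.linalg.det works on stacked matrices over the last two axes.
    return np.sum(np.sqrt(np.linalg.det(g_field)))

# Identity metric on a 2x2x2 grid of 3x3 tensors: det = 1 in every voxel,
# so loss_vol equals the number of voxels (8 here).
g_id = np.broadcast_to(np.eye(3), (2, 2, 2, 3, 3)).copy()
```

If g is frozen, this sum is a constant, which is why a flat loss_vol with a changing FA map points at parts of the update loop that are still running.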

@mvgolub


mvgolub commented Aug 23, 2018

Oh! In fact, I did forget to turn off the free water increment when turning off the fidelity term; thanks @sameerd! Since in the original Pasternak article there was no alpha multiplying the water increment term, I thought there was no need to turn it off (maybe another error in the article?).

When reading your code, I was not certain of the meaning of loss_vol, so I did not sum the loss over all voxels. I will do that now and keep running tests.

@arokem


Member

arokem commented Aug 23, 2018

Hello @mvgolub and @sameerd! Sorry for the relative silence here: I am delighted to see this discussion, and would love to help integrate these methods into our code-base. Please do keep the discussion going here (rather than going to private email conversations). I think that it would be hugely beneficial to be able to refer back to this conversation in the future. I'll reopen this issue, so that we can keep track of things, and go take a look at your PR as well. Thanks!

@arokem arokem reopened this Aug 23, 2018
