
Thoughts about using the SyN algorithm #706

Open
arokem opened this issue Aug 29, 2015 · 6 comments

arokem commented Aug 29, 2015

@satra says (over at nipy/nireg#2): From a user perspective, our current ANTs implementation works together with FreeSurfer and uses a set of parameters we feel quite comfortable with, tested in hundreds of cases (https://github.com/nipy/nipype/blob/master/examples/fmri_ants_openfmri.py#L200). As far as I can tell, the SyN implementation in DIPY is just for SyN, which means I would have to do the rest of the pieces with something else and test that it delivers similar output (we will have resources for this in the near future, but not right now). That would also require work to ensure the transforms from these packages are compatible with each other (I have seen no convincing alternative to FreeSurfer's bbregister yet, especially for partial coverage of brains).

arokem commented Aug 29, 2015

@satra: What exactly do you mean by "rest of the pieces"?

satra commented Aug 29, 2015

@arokem - sorry for being unclear and hasty.

For the ANTs part of that particular code, the optimization runs over three transformation steps [Rigid, Affine, SyN], with the metrics being 'Mattes' and 'CC'. Clicking together some other pieces to do the rigid and affine steps will not be hard; the real question will be demonstrating that the replacement pieces work as well.
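
For concreteness, here is a minimal sketch of how such a three-stage pipeline (rigid and affine driven by a Mattes-style mutual information metric, SyN driven by CC) might be clicked together in DIPY, assuming the affine registration from #654 alongside the existing SyN implementation. File names and parameter values are placeholders, not the settings from the nipype example.

```python
import nibabel as nib
from dipy.align.imaffine import AffineRegistration, MutualInformationMetric
from dipy.align.transforms import RigidTransform3D, AffineTransform3D
from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
from dipy.align.metrics import CCMetric

# Hypothetical input volumes; replace with real data.
static_img = nib.load('static_T1.nii.gz')
moving_img = nib.load('moving_T1.nii.gz')
static, static_grid2world = static_img.get_fdata(), static_img.affine
moving, moving_grid2world = moving_img.get_fdata(), moving_img.affine

# Linear stages: rigid then affine, both driven by mutual information
# (the closest DIPY analogue to ANTs' 'Mattes' metric).
affreg = AffineRegistration(metric=MutualInformationMetric(nbins=32),
                            level_iters=[10000, 1000, 100],
                            sigmas=[3.0, 1.0, 0.0],
                            factors=[4, 2, 1])
rigid = affreg.optimize(static, moving, RigidTransform3D(), None,
                        static_grid2world, moving_grid2world)
affine = affreg.optimize(static, moving, AffineTransform3D(), None,
                         static_grid2world, moving_grid2world,
                         starting_affine=rigid.affine)

# Non-linear stage: SyN driven by cross-correlation, pre-aligned by the affine.
sdr = SymmetricDiffeomorphicRegistration(metric=CCMetric(3),
                                         level_iters=[10, 10, 5])
mapping = sdr.optimize(static, moving, static_grid2world, moving_grid2world,
                       prealign=affine.affine)
warped_moving = mapping.transform(moving)
```

Whether the output of such a pipeline matches the ANTs result on real data is exactly the validation question raised above.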

[Btw, even affine registration can have different levels of quality, and the ANTs folks did some nice validation: compute T(aff1): im1 -> im2 and T(aff2): im2 -> im1, then form im1_hat = inv(T(aff2)) * T(aff1) * im1 and look at the difference between im1 and im1_hat. It turned out that many existing affine registration algorithms did much more poorly than ANTs on this.]
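
That round-trip check is easy to express once both estimated affines are in hand as 4x4 homogeneous matrices; the sketch below (function and variable names are hypothetical) measures how far the composition in the formula above is from the identity by tracking a set of sample points.

```python
import numpy as np

def roundtrip_error(T_aff1, T_aff2, points):
    """Consistency of two affine estimates, following the formula above.

    T_aff1 : 4x4 matrix estimated for im1 -> im2
    T_aff2 : 4x4 matrix estimated for im2 -> im1
    points : (N, 3) array of sample coordinates in im1 space

    Returns the mean displacement of the points after applying
    inv(T_aff2) @ T_aff1, which should be close to the identity
    when the two registrations agree.
    """
    composed = np.linalg.inv(T_aff2) @ T_aff1
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = (composed @ pts_h.T).T[:, :3]
    return np.linalg.norm(mapped - points, axis=1).mean()
```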

In other areas of use, we have relied on ANTs' support for lesion masks. I don't know if that's available in DIPY.

arokem commented Aug 29, 2015

Have you seen the comparisons/benchmarks that @omarocegueda has run? See the top of this PR: #654

satra commented Aug 29, 2015

@arokem - thanks a lot! It was definitely useful to skim through that PR and code. And yes, based on that implementation, the particular use case I pointed to still requires one more feature (1 below).

It appears from the code that the registration:

  1. can use only one metric, not a joint metric yet (e.g., CC + Mattes as used in the nipype example; see the sketch after this list)
  2. supports only a single fixed and a single moving image (i.e., joint-information registration, e.g. T1 + T2 or T1 + T2 + labels, is not available)
  3. does not yet support masks (i.e., brains with lesions cannot be warped)

Our usage of ANTs ranks these 1 >> 2 >> 3 (3 happens in the rarest cases, but ANTs supports it).
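
As a sketch of what item 1 refers to: in the nipype ANTs interface, a stage's metric entry can itself be a list, which is how CC and Mattes are combined on a single stage. The parameter values below are illustrative placeholders, not the settings from fmri_ants_openfmri.py.

```python
from nipype.interfaces.ants import Registration

reg = Registration()
reg.inputs.fixed_image = 'fixed.nii.gz'      # hypothetical inputs
reg.inputs.moving_image = 'moving.nii.gz'
reg.inputs.output_warped_image = True
reg.inputs.transforms = ['Rigid', 'Affine', 'SyN']
reg.inputs.transform_parameters = [(0.1,), (0.1,), (0.2, 3.0, 0.0)]
reg.inputs.number_of_iterations = [[1000, 500, 250],
                                   [1000, 500, 250],
                                   [100, 50, 30]]
# One entry per stage; a nested list means a joint metric for that stage.
reg.inputs.metric = ['Mattes', 'Mattes', ['Mattes', 'CC']]
reg.inputs.metric_weight = [1, 1, [0.5, 0.5]]
reg.inputs.radius_or_number_of_bins = [32, 32, [32, 4]]
reg.inputs.shrink_factors = [[4, 2, 1]] * 3
reg.inputs.smoothing_sigmas = [[2, 1, 0]] * 3
# reg.run()
```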

@omarocegueda - a few questions about the evaluation:

  • Which dataset was used? Are the data available? Were these manual labels?
  • Is the Jaccard coefficient computed on round-trip overlap (im1 -> im2 -> im1) or pairwise (im1 -> im2)? If the latter, do you have the same numbers for the round trip?
  • Is there a script/notebook to rerun the evaluation?

omarocegueda commented Aug 29, 2015

Hi @satra! =)
You are right: our implementation can use only one metric, works with only one moving and one static image, and does not support masks.
Regarding the evaluation, we used the IBSR18 database, available on NITRC, which has manual annotations. The Jaccard index was pairwise, but the round-trip variation sounds very interesting; we definitely need to do that evaluation too! The evaluation scripts are on GitHub:

https://github.com/omarocegueda/experiments/tree/master/experiments/registration

It contains the validation scripts for both affine and SyN (the graph you saw is for the affine case, but we have similar results for SyN as well). Unfortunately, I haven't taken the time to write clean documentation to help people easily run the evaluation themselves. The main problem is that we send the 306 registrations to a cluster; otherwise it would take too much time to run the full experiment on a single computer. We have thought about making these tools more accessible so the validation can be run periodically (maybe on Azure), but we don't have a place to put the data and the scripts yet.
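
For reference, the pairwise, per-label Jaccard index described here reduces to a few lines of NumPy once the warped and static annotation volumes are loaded as integer label arrays (the function and variable names below are assumptions, not the actual evaluation code):

```python
import numpy as np

def jaccard_per_label(warped_labels, static_labels):
    """Pairwise Jaccard index |A ∩ B| / |A ∪ B| for each non-background label."""
    labels = np.union1d(np.unique(warped_labels), np.unique(static_labels))
    scores = {}
    for label in labels:
        if label == 0:  # skip background
            continue
        a = warped_labels == label
        b = static_labels == label
        union = np.logical_or(a, b).sum()
        scores[int(label)] = np.logical_and(a, b).sum() / union if union else np.nan
    return scores
```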

arokem commented Aug 29, 2015

Thanks so much for taking the time to provide all the feedback! We really appreciate it.

