
Lddmm registration #5323

Open · wants to merge 153 commits into main

Conversation

@DevinCrowley commented Apr 8, 2021

Description

Add Large Deformation Diffeomorphic Metric Mapping (LDDMM) to the registration module.

This implements the algorithm in:
Computing Large Deformation Metric Mappings via Geodesic Flows of Diffeomorphisms -
https://doi.org/10.1023/B:VISI.0000043755.93987.aa

Some features which are not enabled by default implement the work in:
3D Mapping of Serial Histology Sections with Anomalies Using a Novel Robust Deformable Registration Algorithm -
https://doi.org/10.1007/978-3-030-33226-6_18
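
Illustrative usage only (the function name lddmm_register and the call below are placeholders to show the intended style of use, not necessarily the final API of the new module):

import numpy as np
from skimage import data, transform
# placeholder import; the actual name and signature are defined by the module added in this PR
from skimage.registration import lddmm_register

reference_image = data.camera() / 255.0
moving_image = transform.swirl(reference_image, strength=2)

# recover a diffeomorphic map taking moving_image onto reference_image
result = lddmm_register(reference_image, moving_image)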

Checklist

For reviewers

  • Check that the PR title is short, concise, and will make sense 1 year
    later.
  • Check that new functions are imported in corresponding __init__.py.
  • Check that new features, API changes, and deprecations are mentioned in
    doc/release/release_dev.rst.

…corresponding tests in the tests folder, update the __init__.py file
…m.py, write a test for it, and import it in skimage/registration/__init__.py
Add newline at end of file
Remove unused import, skimage.transform.resize
…ly_varying_contrast_map feature and untested n-dimensional behavior
…and update the example plot_lddmm.py, code works, tests pass, this is not a drill
@DevinCrowley (Author)

@grlee77 @jni @emmanuelle
Per @grlee77's suggestion, I've recreated the Lddmm registration PR #4390 here. This seems to appease GitHub, as I can now allow edits by maintainers.

@grlee77 (Contributor) commented Apr 9, 2021

Okay, I pushed some minor fixes to verify that it is working now. I will revisit again soon with some other changes I had been unable to push previously.

Comment on lines 59 to 64
assert np.allclose(
    deformed_reference_image, moving_image, rtol=rtol, atol=atol
)
assert np.allclose(
    deformed_moving_image, reference_image, rtol=rtol, atol=atol
)

Contributor:

This np.allclose check with atol=1 - 1e-9 seems strange to me. The input images are in the range [0, 1], so the tolerance spans nearly the full intensity range?

Two other possibilities for comparison are skimage.metrics.normalized_mutual_information and skimage.metrics.normalized_root_mse. normalized_root_mse seems okay as a comparison for these datasets, where the intensities of the moving and reference images match.
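
For example, a minimal sketch of what such a check could look like (the 0.25 threshold below is only illustrative, not a vetted bound):

from skimage.metrics import normalized_root_mse

# lower is better; 0 means the deformed moving image matches the reference exactly
error = normalized_root_mse(reference_image, deformed_moving_image)
assert error < 0.25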

Contributor:

In c773086 I changed these to use normalized_root_mse. I printed the actual values and they are extremely close to 0 for the identity transforms and less than 0.25 or so for the affine/non-rigid cases. I'm not sure if that is "good" or what would be expected, though.

@grlee77 (Contributor) commented Apr 12, 2021

In 31d4bfc I changed the implementation to use scipy.ndimage.map_coordinates rather than scipy.interpolate.interpn (it seems to be a bit faster). As far as I can tell, the example image registration looks the same before and after the change. One motivation for this is a potential GPU implementation later (scipy.ndimage.map_coordinates is already available in CuPy).
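
For reference, a rough, self-contained illustration of the two equivalent calls (this is not the PR's actual warping code; the toy image and displacement below are made up):

import numpy as np
from scipy.ndimage import map_coordinates
from scipy.interpolate import interpn

rng = np.random.default_rng(0)
image = rng.random((64, 64))

# sampling grid plus a small toy displacement, clipped to stay inside the image bounds
grid = np.meshgrid(*(np.arange(s, dtype=float) for s in image.shape), indexing="ij")
coords = np.stack(grid) + rng.uniform(-0.5, 0.5, (2,) + image.shape)
coords = np.clip(coords, 0, np.array(image.shape).reshape(2, 1, 1) - 1)

# interpn expects the coordinate axis last ...
warped_a = interpn(
    tuple(np.arange(s, dtype=float) for s in image.shape),
    image,
    np.moveaxis(coords, 0, -1),
    method="linear",
)
# ... map_coordinates expects it first, and tends to be faster
warped_b = map_coordinates(image, coords, order=1, mode="nearest")

assert np.allclose(warped_a, warped_b)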

@grlee77 (Contributor) commented Apr 12, 2021

My primary concern is that, when playing with the plot_lddmm.py demo, the optimization can converge to unreasonable results if run for more iterations. I didn't spend much time trying to tune the settings, but this is a little concerning.

One thing I noticed visually is that there are some very bright isolated pixels in the moving image. I tried modifying the example to clip outliers using:

import numpy as np

def _clip_outliers(img, n_iqr=1.5, nonnegative=True):
    # clip outliers based on a multiple of the interquartile range
    p25, p75 = np.percentile(img, q=[25, 75])
    iqr = p75 - p25
    imin = p25 - n_iqr * iqr
    if nonnegative:
        imin = max(imin, 0)
    imax = p75 + n_iqr * iqr
    return np.clip(img, imin, imax)

reference_image = _clip_outliers(reference_image)
moving_image = _clip_outliers(moving_image)

This has not been committed here, but running the example with and without this clipping gave substantially different results.

@anorak94

Hey, I think this is a great initiative. Would it be possible to participate in it in some way? There are very few tested, pythonic implementations of LDDMM out there. It would be really cool to add SyN, and also a way to do landmark- and intensity-based registration together in LDDMM. I know there are ways to do LDDMM using intensity only and using landmarks only, but not something that can do both together. Avants et al. propose a method in their paper "Lagrangian frame diffeomorphic image registration: Morphometric comparison of human and chimpanzee cortex". Or something even simpler, like including loss terms from both intensity and landmarks in the LDDMM objective. Please let me know what you think.
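
For concreteness, such a combined objective would look roughly like the standard LDDMM energy with an extra landmark term (just an illustrative sketch, not the exact formulation from the Avants et al. paper):

$$
E(v) = \int_0^1 \lVert v_t \rVert_V^2 \, dt
\;+\; \lambda_I \, \lVert I_0 \circ \varphi_1^{-1} - I_1 \rVert_{L^2}^2
\;+\; \lambda_L \sum_k \lVert \varphi_1(x_k) - y_k \rVert^2
$$

where $\varphi_1$ is the endpoint of the flow generated by the velocity field $v_t$, $I_0$ and $I_1$ are the moving and reference images, $(x_k, y_k)$ are corresponding landmark pairs, and $\lambda_I$, $\lambda_L$ weight the two data terms.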

@grlee77 (Contributor) commented Apr 30, 2021

Hey, I think this is a great initiative. Would it be possible to participate in it in some way?

Welcome @anorak94. I think there is general interest in having non-linear warping algorithms in scikit-image, but we are a bit short on maintainers with expertise in this area. If you are able to play around with the demo and/or help review the code and API from a user's perspective, that would be useful. I am not personally an expert in LDDMM, so any additional feedback from others with expertise in this area is appreciated. Let me know if you need help building scikit-image from this branch so that you can play with the demo locally.

As you may be aware, there is a SyN implementation in DIPY. While reviewing this PR, I had adapted that example to run using the same data as proposed here. For this example, the overlap looks much closer with SyN than with the LDDMM code in the example, but that might just be that the LDDMM parameters are not tuned as well as they could be?
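
For anyone who wants to try that comparison, a minimal DIPY SyN invocation looks roughly like the following (the parameter values here are illustrative, not necessarily the ones I used):

import numpy as np
from dipy.align.imwarp import SymmetricDiffeomorphicRegistration
from dipy.align.metrics import CCMetric

# cross-correlation metric for 2-D images; level_iters sets the multi-resolution schedule
metric = CCMetric(2)
sdr = SymmetricDiffeomorphicRegistration(metric, level_iters=[100, 50, 25])

# optimize the diffeomorphic map from moving_image to reference_image and apply it
mapping = sdr.optimize(reference_image.astype(np.float64),
                       moving_image.astype(np.float64))
warped_moving = mapping.transform(moving_image)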

For the SyN algorithm, last year I implemented a CUDA-based version for the GPU that gives an order of magnitude acceleration over the version in DIPY.

@vikramc1

@grlee77 Thank you for your time in reviewing this PR! I've been working with @DevinCrowley and am happy to provide some additional example code registering some example datasets, if you'd like. Please let me know what else is needed to proceed on this PR.

Thank you!

@grlee77 (Contributor) commented Jul 23, 2021

Hi @vikramc1, my main concerns are:

  • the default settings in the current demo do not produce particularly close overlap (see prior comment: Lddmm registration #5323 (comment)). I am not suggesting that it needs to be as good as the SyN result mentioned there, but would hope it can be improved over where it is currently with some tuning.

  • There are a ton of parameters, which is nice for power users, but likely confusing/intimidating to the average user. It would help to have a bit more guidance, either in the docstrings, the example, or some new documentation page, on how to go about adapting these in real-world use cases. Specifically, if registration is not working as expected, which parameters should I start with when tuning?

@DevinCrowley (Author)

Hi @vikramc1, my main concerns are:

  • the default settings in the current demo do not produce particularly close overlap (see prior comment: Lddmm registration #5323 (comment)). I am not suggesting that it needs to be as good as the SyN result mentioned there, but would hope it can be improved over where it is currently with some tuning.
  • There are a ton of parameters, which is nice for power users, but likely confusing/intimidating to the average user. It would help to have a bit more guidance, either in the docstrings, the example, or some new documentation page, on how to go about adapting these in real-world use cases. Specifically, if registration is not working as expected, which parameters should I start with when tuning?

Hey @grlee77, sorry I've been out of touch with this PR (closed, I see) for quite some time. I did want to mention, though, that I wrote up a somewhat more extended demo on using this code from scratch, with the smallest number of parameters that a non-power-user might reasonably expect to touch without any deep knowledge. I don't know if this is of interest anymore, but I wanted to at least attach the tutorial I made.

If this PR might be revived, I'd be glad to hear what that would take. I know there are a few people besides myself with an interest in seeing it completed who may be able to contribute to that end.

@grlee77 (Contributor) commented Feb 4, 2022

Hey @grlee77, sorry I've been out of touch with this PR (closed I see) for quite some time.

It is still open! I am happy to continue reviewing if you or your colleagues can resume here.

@anorak94 commented Feb 4, 2022

Just a small thing, and maybe this is already known, but the SyN version implemented in ANTs is not the time-varying velocity field parameterized version; they optimize a displacement field transform. I'd be happy to share a notebook with a toy implementation of SyN in Python.

The version implemented in ANTs is the displacement-field-parameterized greedy version, and that is the one which has been successful in benchmark studies; it differs from the one described in the original SyN paper.

ANTsX/ANTsPy#313

Please take a look at this issue.
