
Affine registration of similar images #1833

Closed
chrisfoulon opened this issue May 14, 2019 · 7 comments

@chrisfoulon

commented May 14, 2019

Description

Hi dipy team and users,

I am having some trouble with the results of the dipy.align.imaffine functions when I try to register images with very little to no difference between them.
First, when I register an image to itself, the translation, rigid, and affine transformations give results quite different from the identity. I tested this by registering the MNI template to itself with different combinations of transformations; when I compute the norm of the translation, or the distance of the other transformations from the identity, it varies between roughly 0.0001 and 0.01.

I know it is not very common to register an image to itself. However, I am trying to write a function that builds a template from longitudinal MRI data, with an algorithm similar to what FreeSurfer does, but entirely in Python.
I basically average a dataset of images of the same subject and then register the images to this average. Then I average the newly registered images, register them to the new average, and so on until convergence. Therefore, at some point, the images are almost identical. When I used dipy's registration functions, the similarity between the images never converged: it oscillated, and sometimes the distance even increased for several rounds. I could choose a much bigger threshold, but then it would be too big to trust.
I tried the same algorithm using FSL's FLIRT, and I reached convergence with a threshold of 1e-15, so this is a huge difference.

Do you have any idea how I could obtain a transformation closer to the identity when the images are very similar?

Maybe it's because I used the default parameters?

```python
nbins=32,
sampling_prop=None,
level_iters=[10000, 1000, 100],
sigmas=[3.0, 1.0, 0.0],
factors=[4, 2, 1]
```

Thank you in advance and have a great day,
Chris Foulon

Way to reproduce

Tested with both Python 2 and 3, dipy version 0.16.0.

```python
import nibabel as nib
import numpy as np
from dipy.align.imaffine import (MutualInformationMetric, AffineRegistration)
from dipy.align.transforms import (TranslationTransform3D,
                                   RigidTransform3D,
                                   AffineTransform3D)

mni = '/Users/cf27246/test/MNI152_T1_3mm_brain.nii.gz'

img = nib.load(mni)
moving = img.get_data()
static_grid2world = img.affine
metric = MutualInformationMetric(32, None)

affreg = AffineRegistration(metric=metric,
                            level_iters=[10000, 1000, 100],
                            sigmas=[3.0, 1.0, 1.0],
                            factors=[4, 2, 1])

transform = TranslationTransform3D()
params0 = None
translation = affreg.optimize(moving, moving, transform, params0,
                              static_grid2world, static_grid2world)
transformation_matrix = translation.affine
# transformed = translation.transform(moving, interp='linear')

transform = RigidTransform3D()
params0 = None
rigid = affreg.optimize(moving, moving, transform, params0,
                        static_grid2world, static_grid2world,
                        starting_affine=transformation_matrix)
transformation_matrix = rigid.affine
# transformed = rigid.transform(moving, interp='linear')

transform = AffineTransform3D()
params0 = None
affine = affreg.optimize(moving, moving, transform, params0,
                         static_grid2world, static_grid2world,
                         starting_affine=transformation_matrix)
transformation_matrix = affine.affine
# transformed = affine.transform(moving, interp='linear')

# Translation vector
translation = transformation_matrix[0:3, 3]
# 3x3 matrix of rotation and the other rigid and affine components
oth_affine_transform = transformation_matrix[0:3, 0:3]
tr_norm = np.linalg.norm(translation)
affine_norm = np.linalg.norm(oth_affine_transform - np.identity(3), 'fro')
# The criterion used to test convergence in FreeSurfer
# (if I understood correctly, of course)
print(pow(tr_norm, 2) + pow(affine_norm, 2))
```

In this example, the final norm is around 0.009294 with the translation + rigid + affine transformations.
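The criterion computed at the end of the script can be wrapped in a small reusable helper (the same math restated; `convergence_criterion` is a hypothetical name, not a dipy or FreeSurfer function):

```python
import numpy as np

def convergence_criterion(affine_4x4):
    """FreeSurfer-style convergence criterion: squared norm of the
    translation plus the squared Frobenius distance of the 3x3 linear
    part from the identity. 0.0 means the transform is exactly identity."""
    translation = affine_4x4[0:3, 3]
    linear_part = affine_4x4[0:3, 0:3]
    return (np.linalg.norm(translation) ** 2
            + np.linalg.norm(linear_part - np.identity(3), 'fro') ** 2)

# Sanity check: the identity transform scores exactly zero.
print(convergence_criterion(np.eye(4)))  # 0.0
```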

I followed the examples, but maybe the default parameters are not adapted to this particular case; I don't know.

@skoudoro

Member

commented May 15, 2019

Hi @chrisfoulon,

Thank you for this complete report. For this particular case, I suppose you do not really need the multiscale registration, so I would use these parameters:

```python
level_iters=[100],
sigmas=[0.0],
factors=[1]
```

It should give you a better result in this case.
I need to play a bit with what you are pointing out, but like most of the DIPY team I am currently at ISMRM and quite busy...
any opinion @omarocegueda? @Borda?

@Borda

Contributor

commented May 16, 2019

@skoudoro could you please reformat the initial question and format the code so it is easier to read?
I do not think that using multi-level would cause any problem; at least in theory it should not.
@chrisfoulon could you please share the sample image (probably some minimal version), or could it also be shown on a synthetic image generated with simple geometric shapes?
https://github.com/Borda/BIRL/blob/master/data_images/images/artificial_reference.jpg

@skoudoro

Member

commented May 16, 2019

> could you please reformat the initial question

done

> I do not think that using multi-level would cause any problem, at least in theory it should not

In theory, it should not, I agree.

@chrisfoulon, after a bit more thinking: the default stopping tolerance of the optimizer is 1e-4 (opt_tol). I suppose you need to tighten the convergence for this specific case; you can see below how to do it. As @Borda suggested, it would be good if you could share sample data.

```python
options = {'gtol': 1e-15, 'disp': False}
affreg = AffineRegistration(metric=metric,
                            level_iters=[10000, 1000, 100],
                            sigmas=[3.0, 1.0, 1.0],
                            factors=[4, 2, 1],
                            options=options)
```
@chrisfoulon

Author

commented May 16, 2019

Thank you for your answers. I attached some sample data I used for this example: the first is the MNI152 at 3mm and the second the MNI152 at 1mm, both from FSL.
MNI152_T1_3mm_brain.nii.gz
MNI152.nii.gz
I had the same kind of results with both images, and with several other T1 and mean EPI images.

I tried your suggestion with gtol, but the results are still the same :S

@skoudoro

Member

commented May 18, 2019

@chrisfoulon, in your comparison with FSL, did you use mutual information or correlation ratio?

@chrisfoulon

Author

commented May 20, 2019

Oh, good call! I tried FSL using mutual information instead of the correlation ratio, and I also get a big distance between the same images! (I'm sorry, I'm not yet familiar with all the parameters and what they imply.)
Is it possible to use the correlation ratio with dipy's affine registration?

Thank you!

@skoudoro

Member

commented May 20, 2019

Not yet, we need to implement it (we only have it for SyN registration). All new contributions are welcome! Feel free to create a new PR if you want to add it! 😄
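For reference, the correlation ratio under discussion can be sketched in plain numpy (an illustration of the metric only, not dipy's or FSL's implementation; `correlation_ratio` is a hypothetical helper):

```python
import numpy as np

def correlation_ratio(static, moving, nbins=32):
    """Correlation ratio of `moving` given `static`:
    1 - (weighted within-bin variance of moving, binned by static)
        / (total variance of moving).
    1.0 means moving is perfectly predictable from static."""
    static = np.asarray(static, dtype=float).ravel()
    moving = np.asarray(moving, dtype=float).ravel()
    total_var = moving.var()
    if total_var == 0:
        return 1.0
    # Bin the static image's intensity range into nbins equal-width bins.
    edges = np.linspace(static.min(), static.max(), nbins + 1)
    labels = np.clip(np.digitize(static, edges) - 1, 0, nbins - 1)
    within = 0.0
    for k in range(nbins):
        vals = moving[labels == k]
        if vals.size > 0:
            within += vals.size * vals.var()
    return 1.0 - within / (moving.size * total_var)

rng = np.random.default_rng(0)
img = rng.random((16, 16, 16))
print(correlation_ratio(img, img))                        # close to 1
print(correlation_ratio(img, rng.random((16, 16, 16))))   # close to 0
```

For identical images the score approaches 1, while for unrelated images it approaches 0, which is consistent with the difference Chris saw between FLIRT's default cost and mutual information.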

@skoudoro closed this May 20, 2019
