[REVIEW] Fallback to initial registration, if BBR fails #694
This is ready for review. It's working locally on @jdkent's dataset. Both the FreeSurfer and FSL workflows are falling back to the initial registration when BBR fails.
The only thing left to do is fix cost parsing for non-BBR FLIRT so that we can compare registration performance to determine phase-encoding direction for SyN-SDC.
The translation differences are large, and I believe it's a function of the .mat file format.
It may be that we just need to drop the translation check altogether.
On Sep 20, 2017 07:43, "Chris Markiewicz" ***@***.*** wrote:

> The translation differences are large, and I believe it's a function of the [.mat file format](https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FLIRT/FAQ#What_are_FLIRT_schedule_files.3F):
>
> > The FSL convention for transformation matrices uses an implicit centre of transformation - that is, a point that will be unmoved by that transformation, which is an arbitrary choice in general. This arbitrary centre of the transformation for FSL is at the mm origin (0,0,0) which is at the centre of the corner voxel of the image. When using the transformation parameters from FLIRT, there is an additional complication in that the parameters are calculated in a way that uses a different centre convention: the centre of mass of the volume. The effect of this is that each of the three matrices above end up with an adjustment in the fourth column (top three elements only) that represents a shift between the corner origin and the centre of mass, while the rest of the matrix (first three columns) is unaffected. Once that is done the matrices are multiplied together, as indicated above, and you get your final matrix.
>
> It may be that we just need to drop the translation check altogether.
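To make that centre-of-mass adjustment concrete, here is a minimal numpy sketch (my own illustration, not FLIRT's actual code; the centre-of-mass value is made up): re-expressing an affine about a different centre `c` changes only the translation column, by `(I - A) @ c`, while the 3x3 rotation/scaling block is untouched.

```python
import numpy as np

def recenter_translation(affine, centre):
    """Re-express a 4x4 affine about `centre` instead of the (0,0,0) origin.

    Only the translation (fourth column, top three rows) changes; the 3x3
    rotation/scaling block is unaffected, matching the FSL FAQ text above.
    """
    A = affine[:3, :3]
    shifted = affine.copy()
    shifted[:3, 3] = affine[:3, 3] + (np.eye(3) - A) @ centre
    return shifted

# A pure 90-degree rotation about z, expressed about the corner origin
rot = np.array([[0., -1., 0., 0.],
                [1.,  0., 0., 0.],
                [0.,  0., 1., 0.],
                [0.,  0., 0., 1.]])
com = np.array([10., 20., 30.])  # hypothetical centre of mass (mm)
print(recenter_translation(rot, com))
```

The same geometric transform ends up with a very different fourth column depending on the centre convention, which is why raw translation comparisons between FSL matrices are hard to interpret.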
Looking at a lot of these, I'm not sure that there's a clear scaling threshold. Some are mostly rotations/translations, and a small rotation with a very large translation can cause as much distortion as a small(ish) translation with a large rotation. One possibility could be to have a threshold on some function of rotation and translation...
I'm going to try converting to LTA, which I believe uses the RAS=(0,0,0) origin, which should at least be no worse than FSL's corner origin, and may result in more obvious thresholds.
I think there's a possibility we'll need override options, no matter how we decide this, allowing users to force BBR or fallback when our heuristics fail.
Just as an update on this, I'm definitely not getting the level of distortion @jdkent was in his original post. For the most part, the bbregister results look reasonable, while the FLIRT+BBR results are being rejected.
@jdkent Do you think you could re-run that subject with the latest release? I'm a little concerned that I'm no longer replicating the issue and getting bad metrics to find cut-offs for.
I think I'm converging on using the "norm" from ArtifactDetect, which is a kind of compromise between @chrisfilo's idea of using framewise displacement and my earlier approach of using a differential affine matrix. The norm approach takes a series of motion parameters, constructs affines, and finds the norm of the displacement, i.e. the maximum displacement of the centers of a bounding cube. It thus takes into account rotations, translations, and scales (and shears, but we're only using 9-DOF transforms).
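The idea can be sketched as follows (a minimal reimplementation of the concept, not ArtifactDetect's code; the cube half-width and the use of corner points are my assumptions):

```python
import itertools
import numpy as np

def displacement_norm(affine, half_width=70.0):
    """Maximum displacement (mm) of the points of a bounding cube under a
    4x4 affine, relative to the identity transform.

    Folds rotations, translations, scales, and shears into one scalar;
    half_width roughly approximates a head-sized field of view.
    """
    corners = np.array(list(itertools.product([-half_width, half_width],
                                              repeat=3)))      # (8, 3) points
    moved = corners @ affine[:3, :3].T + affine[:3, 3]         # apply affine
    return np.linalg.norm(moved - corners, axis=1).max()

# Pure translation: every point moves by exactly the same amount
shift = np.eye(4)
shift[:3, 3] = [3., 4., 0.]
print(displacement_norm(shift))  # 5.0
```

A pure translation moves every point equally, while a rotation moves points far from the centre the most, so the metric penalizes both in a geometrically meaningful way.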
With nipy/nipype#2198, I separate the norm calculation from constructing affines from the motion parameters, allowing us to directly pass the affine transform matrices.
The affines in LTA format (RAS2RAS) seem to produce differences of norms that correspond intelligibly with distortion: norm > 20 is a pretty reliable indicator of a bad BBR, and < 10 indicates a good one. The exact threshold isn't obvious, with some registrations that I marked "accept" and "reject" falling on either side as I moved the cutoff between 15 and 20. For now I've put 15, and I'll see what you all think.
I'll package up the spreadsheet and visualizations first thing in the morning.
Given that I still haven't found a clear threshold, I'm inclined to add override flags so users can force BBR or the fallback when the heuristic gets it wrong.
Alright, I ran the subject with --force-bbr and without the flag, and the results are much improved, just not identical to @effigies's. I ran it with a clear PYTHONPATH and in different working directories. One thing that isn't clean is that I'm using an existing FreeSurfer directory rather than re-running recon-all. However, the fallback did work and improved my results, so I don't think I should be holding up this pull request any longer.
@jdkent Your "without the flag" run looks very similar to mine, so I'm glad we're in a similar place. How do you feel about that threshold, by the way? We could tighten it so that first run is more likely to be rejected. The more I look at these results, the more I think we might want to make the threshold a little more stringent...
Thanks for testing this out for us, by the way.
Oh, I thought the bbregister result was rejected for all the registrations. Then yes, a more stringent threshold could catch that case and improve the registration. I just don't know how much optimizing for my case would sacrifice the threshold's generalization to other datasets. Also, thank you for making this great tool; the work you are doing is awesome.