MRIQC Anat pipeline: differences with MRIQC v22.06 #16
Comments
Please, can you add the following lines to your .bashrc:
Then open a new shell or re-source your .bashrc.
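A minimal sketch of what such .bashrc additions could look like. The variable names are the ones discussed in this thread; the seed value 1 is an arbitrary illustrative choice, not the project's official setting:

```shell
# Hedged sketch: pin ANTs/ITK randomness and threading so repeated runs
# of the same pipeline become comparable. The seed value is arbitrary;
# any fixed integer should do.
export ANTS_RANDOM_SEED=1
export ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS=1
export OMP_NUM_THREADS=1
```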
Without ANTS_RANDOM_SEED, ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS, OMP_NUM_THREADS
=> Func, reproducibility
Conclusion: it is difficult to identify the variation induced by the pipelines themselves, because each pipeline shows its own variance across repeated runs. We must first remove the intra-pipeline variance: the ANTS_RANDOM_SEED, ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS and OMP_NUM_THREADS=1 environment variables should fix this first issue (I'm not sure it's mandatory to use all these variables, but this way we should be sure to handle the problem...).

With ANTS_RANDOM_SEED, ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS, OMP_NUM_THREADS
=> Func, reproducibility
Conclusion: the differences observed between mia and host are not due to variance; there is a real difference between the two pipelines. I am investigating this.
withoutANTS_RANDOM_SEED-ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS-OMP_NUM_THREADS.zip
withANTS_RANDOM_SEED-ITK_GLOBAL_DEFAULT_NUMBER_OF_THREADS-OMP_NUM_THREADS.zip
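One way to make the "fix the intra-pipeline variance first" argument concrete is to compute, for each index, the spread across repeated runs of the same pipeline. A minimal sketch with made-up numbers (the IQM values below are illustrative, not taken from the attached archives):

```python
from statistics import mean, pstdev

def relative_spread(values):
    """Percent coefficient of variation of one IQM across repeated runs."""
    m = mean(values)
    return 100.0 * pstdev(values) / abs(m) if m else float("inf")

# Illustrative values only: three runs of the same pipeline for one index.
runs_without_seed = [11.02, 11.07, 10.98]  # seed/threads not fixed
runs_with_seed = [11.02, 11.02, 11.02]     # ANTS_RANDOM_SEED etc. fixed

print(relative_spread(runs_without_seed))  # non-zero intra-pipeline spread
print(relative_spread(runs_with_seed))     # 0.0: runs are now comparable
```

Only once the "with seed" spread is zero does a remaining MIA-vs-host difference point at the pipelines themselves rather than at run-to-run noise.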
Since the last commit, we observe:
Finally, there are 4 extremely small differences (<1E-05 %):
and one big difference: for the report generation, I had worked in September 2022 on the curve plot of q2 and had observed that it was very strange. I think that q2 is miscalculated, and it would be worthwhile to check whether there is an error in the script for q2. If it's ok for you @manuegrx, I'll let you close this ticket.
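For reference, comparisons like "<1E-05 %" can be reproduced with a simple percent relative difference. A hedged sketch; the values and threshold below are illustrative, not the actual indices from the comparison:

```python
def percent_rel_diff(a, b):
    """Percent relative difference between two index values."""
    denom = max(abs(a), abs(b))
    return 0.0 if denom == 0 else 100.0 * abs(a - b) / denom

# Illustrative: a difference well below the 1e-05 % level reported above.
mia, host = 1.2345678901, 1.2345678902
print(percent_rel_diff(mia, host) < 1e-05)  # True
```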
@servoz I get exactly the same result as you in MIA using the environment variables!
Thanks for not closing this ticket yet!
Careful investigation shows that:
So, I am trying to fix it.
There were many differences between MRIQC as implemented in MIA and MRIQC v22.06 (for both the anat and func pipelines).
These differences were due to differences in the workflows and in the metrics computation.
The MIA pipeline has been modified to be up to date with MRIQC v22.06 (see commit 2f540b1).
For functional data, the MRIQC v22.06 results and the results from MRIQC as implemented in MIA are the same (see the first sheet of the attached file compa_indices_alej.xlsx).
For anat data, there are still some differences for almost all indices (but slight ones, see the second sheet of the attached file).
It seems that for some of them (cnr, qi_1, qi_2, snrd, summary_bg, tpm_overlap), if MRIQC v22.06 is launched a second time these indices are also slightly different.
This seems to be explained by differences in the "inverse composite transform" obtained in the Registration bricks (ANTs registration) of the spatial normalization pipeline.
As this transform is slightly different each time, the airmask obtained and the registered template are slightly different, and so are the indices computed from these images.
For the other indices, the differences come from a difference between the result obtained with N4BiasFieldCorrection (ANTs) in MIA and in MRIQC v22.06. This command is used twice in the skull stripping pipeline.
This issue has been investigated without success for now, but it would be great to understand why the N4BiasFieldCorrection command gives different results in MIA.
The file "MRIQC_anat_issues.docx" summarizes the tests already done.
If we use the output images from the MRIQC v22.06 skull stripping pipeline as inputs in MIA (and so skip the skull stripping pipeline in MIA), we obtain results similar to those observed when MRIQC v22.06 is launched a second time.
TO DO: understand why the N4BiasFieldCorrection command gives different results in MIA.
MRIQC_anat_issues.docx
compa_indices_alej.xlsx