
[ENH] Template-based masking of EPI boldrefs #1321

Merged · 21 commits · Oct 25, 2018

Conversation

@oesteban (Member)

Changes proposed in this pull request

This PR explores template-based masking of boldrefs to initialize the skull-stripping workflow (see the related discussion in #1000).

This PR is also highly related to #1050.

Documentation that should be reviewed

This is a WIP; the documentation will be updated once the approach is proven to work.

@oesteban (Member, Author) commented Oct 15, 2018

So these are the results:

I still need to investigate what happened here: https://2488-53608443-gh.circle-artifacts.com/0/tmp/data/reports/ds000216/sub-03_task-rest_echo-1_bold_masks.svg

@oesteban (Member, Author) commented Oct 15, 2018

The problem with echo 1 of ds216 seems to have been introduced by this PR (comparing this PR vs. latest master).

@oesteban (Member, Author)

The weird result in https://2488-53608443-gh.circle-artifacts.com/0/tmp/data/reports/ds000216/sub-03_task-rest_echo-1_bold_masks.svg does not happen when I test locally (???)

@oesteban (Member, Author)

It seems the weird problem on ds216 is now fixed. At this point I would ask: do you think it would be worth changing the whole algorithm to the following (a rough sketch follows the list)?

  1. Initialization: ANTs' AI (antsAI), using the template as target and the boldref as moving.
  2. Gross mask: applyTransforms with the inverted transform from 1), bringing the template mask into boldref space.
  3. N4 on the boldref with the gross mask.
  4. First registration: a very fast (Rigid) refinement of the gross mask, using the template as target and the boldref as moving, initialized with 1).
  5. N4 on the boldref with the less-gross mask.
  6. Second registration (Affine), initialized with 4).
  7. Remaining boldref refinement with the final mask.
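
A rough, illustrative sketch of how these steps could be wired with Nipype's ANTs interfaces. Everything below is assumed for illustration (the placeholder inputnode fields boldref, tpl_boldref, tpl_mask and init_xfm, and all registration parameters); it is not the code this PR implements.

from nipype.pipeline import engine as pe
from nipype.interfaces import ants, utility as niu

wf = pe.Workflow(name='tpl_masking_sketch')

# Placeholder inputs: the boldref, a template boldref-like target, the
# template brain mask, and the affine produced by step 1 (antsAI run with
# the template as fixed and the boldref as moving).
inputnode = pe.Node(niu.IdentityInterface(
    fields=['boldref', 'tpl_boldref', 'tpl_mask', 'init_xfm']),
    name='inputnode')

# 2) Gross mask: project the template mask onto the boldref through the
#    inverse of the initialization transform.
map_gross = pe.Node(ants.ApplyTransforms(
    dimension=3, interpolation='MultiLabel', invert_transform_flags=[True]),
    name='map_gross')

# 3) N4 on the boldref, restricted to the gross mask.
n4_gross = pe.Node(ants.N4BiasFieldCorrection(dimension=3, copy_header=True),
                   name='n4_gross')

# 4) First registration: fast Rigid refinement, initialized with 1).
rigid = pe.Node(ants.Registration(
    dimension=3, transforms=['Rigid'], transform_parameters=[(0.1,)],
    metric=['Mattes'], metric_weight=[1.0], radius_or_number_of_bins=[32],
    number_of_iterations=[[100]], convergence_threshold=[1e-6],
    convergence_window_size=[10], shrink_factors=[[2]],
    smoothing_sigmas=[[2.0]], sigma_units=['vox'],
    output_transform_prefix='rigid_'), name='rigid')

# 5) N4 with the refined ("less gross") mask.
map_refined = pe.Node(ants.ApplyTransforms(
    dimension=3, interpolation='MultiLabel'), name='map_refined')
n4_refined = pe.Node(ants.N4BiasFieldCorrection(dimension=3, copy_header=True),
                     name='n4_refined')

# 6) Second registration (Affine), initialized with 4). Step 7) would reuse
#    the same mapping pattern to produce the final mask and refined boldref.
affine = rigid.clone('affine')
affine.inputs.transforms = ['Affine']
affine.inputs.output_transform_prefix = 'affine_'

wf.connect([
    (inputnode, map_gross, [('tpl_mask', 'input_image'),
                            ('boldref', 'reference_image'),
                            ('init_xfm', 'transforms')]),
    (inputnode, n4_gross, [('boldref', 'input_image')]),
    (map_gross, n4_gross, [('output_image', 'mask_image')]),
    (inputnode, rigid, [('tpl_boldref', 'fixed_image'),
                        ('init_xfm', 'initial_moving_transform')]),
    (n4_gross, rigid, [('output_image', 'moving_image')]),
    (inputnode, map_refined, [('tpl_mask', 'input_image'),
                              ('boldref', 'reference_image')]),
    (rigid, map_refined, [('reverse_transforms', 'transforms'),
                          ('reverse_invert_flags', 'invert_transform_flags')]),
    (inputnode, n4_refined, [('boldref', 'input_image')]),
    (map_refined, n4_refined, [('output_image', 'mask_image')]),
    (inputnode, affine, [('tpl_boldref', 'fixed_image')]),
    (rigid, affine, [('forward_transforms', 'initial_moving_transform')]),
    (n4_refined, affine, [('output_image', 'moving_image')]),
])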

@effigies (Member)

Let's test on the subject from #1181, as well.

@oesteban changed the title from "[ENH,WIP] Template-based masking of EPI boldrefs" to "[ENH] Template-based masking of EPI boldrefs" on Oct 16, 2018
@oesteban (Member, Author) commented Oct 16, 2018

Still testing on additional datasets, but this seems ready for review.
Artifacts in the latest commit without [skip ci] -> https://circleci.com/gh/oesteban/fmriprep/2525#artifacts/containers/0

oesteban added a commit that referenced this pull request on Oct 17, 2018:
Refs. #1000, #1050, #1321.

Also includes a new filter of branches so builds other than tests
are skipped if the name of the branch starts with ``tests?/``.
@zhifangy (Contributor)

Hi, @oesteban. I did a quick test using this PR on my previously failing data. Generally it gives pretty good results, but I think there are several issues we need to address.

  1. The pre_mask's header should be fixed before N4 correction. I encountered an Inputs do not occupy the same physical space! error on one of my test images.

  2. The N4 correction has no effect on some of the unwarped images (e.g., sub-26 rest) in bold_bold_trans_wf. I tried to run N4 correction manually on the ref_image_valid.nii.gz and it yielded the same incorrect result.

  3. What if the BOLD image only covers part of the brain volume? I think the initial affine transformation would generate quite a large mask for N4 correction. We should include some partial-FoV scans (especially scans covering only 30-40 slices parallel to the long axis of the hippocampus) in the test battery to see how well the method works.

@oesteban (Member, Author)

Thanks a lot for your valuable feedback, @ZhifangYe!

  1. The pre_mask's header should be fixed before N4 correction. I encountered an Inputs do not occupy the same physical space! error on one of my test images.

Could you provide more information about the error (the full trace), or even the dataset (privately is fine) where this is happening? This is a new caching procedure we are testing, and it does not surprise me that you hit this kind of problem.

  2. The N4 correction has no effect on some of the unwarped images (e.g., sub-26 rest) in bold_bold_trans_wf. I tried to run N4 correction manually on the ref_image_valid.nii.gz and it yielded the same incorrect result.

Cool, sub-26_task-rest is now part of the regression tests, so I'll work on this one in particular.

  3. What if the BOLD image only covers part of the brain volume? I think the initial affine transformation would generate quite a large mask for N4 correction. We should include some partial-FoV scans (especially scans covering only 30-40 slices parallel to the long axis of the hippocampus) in the test battery to see how well the method works.

This question is also very pertinent. I'll dig up some partial FoV datasets from OpenNeuro and add them to the test battery.

Thanks a lot for your input, very much appreciated!

@oesteban (Member, Author)

Fixes #1000
Fixes #1181

@oesteban (Member, Author)

Ref #900

@zhifangy (Contributor)

Here are the fMRIPrep error log and the verbose log of the N4 correction command:
[screenshots: fMRIPrep error log and N4 verbose output]

I think it's due to a very small discrepancy between the qform matrices of pre_mask and ref_image.

The raw data to replicate this error: https://www.dropbox.com/s/akfyadrih5lgcbd/sub-30.zip?dl=0

@oesteban (Member, Author)

  I think it's due to a very small discrepancy between the qform matrices of pre_mask and ref_image.

Correct, there is a minimal difference in the qform.
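
For reference, one way to confirm this kind of mismatch with nibabel (the file names below are placeholders for the two images involved):

import numpy as np
import nibabel as nb

# Placeholder paths: the boldref fed to N4 and the template-derived pre-mask
ref = nb.load('ref_image_valid.nii.gz')
mask = nb.load('pre_mask.nii.gz')

ref_q, ref_code = ref.header.get_qform(coded=True)
mask_q, mask_code = mask.header.get_qform(coded=True)

# ITK-based tools (such as N4) require inputs to share the same physical
# space within a tight tolerance, so even tiny differences trigger the error.
if ref_q is None or mask_q is None:
    print('qform unset in one of the images (codes: %s, %s)' % (ref_code, mask_code))
else:
    print('max abs qform difference: %g' % np.abs(ref_q - mask_q).max())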

@oesteban (Member, Author)

Alright, this is ready for review (@ZhifangYe, @effigies). I've just fixed the issue of mismatched headers before N4.

Since we don't make any promises about very limited FoV (https://github.com/poldracklab/fmriprep/labels/partial%20FOV), I propose we merge this in and come back to the FoV issue later.

Regarding sub-26_task-rest, please check out the regression tests under the test_pytest build of Circle.

@effigies (Member) left a review:

I'm not sure about the new interface.

imghdr['dim_info'] = refhdr['dim_info'] # dim_info is lost sometimes

# Set qform
qform, qcode = refhdr.get_qform(coded=True)

Are all images guaranteed to have non-zero codes? If qcode == 0, qform == None.

https://github.com/nipy/nibabel/blob/d5494f3350967b65bd6de1abefd95ab2899044e0/nibabel/nifti1.py#L916-L917

imghdr.set_qform(qform, int(qcode))

# Set sform
sform, scode = refhdr.get_sform(coded=True)
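
One possible way to guard against that case (a sketch only; falling back to the reference header's best affine is an assumption, not necessarily the right policy here):

# Guarded variant: get_qform(coded=True) returns (None, 0) when the qform
# code is zero, so there would be nothing to copy into imghdr.
qform, qcode = refhdr.get_qform(coded=True)
if qform is None:
    # Fall back to the reference header's best affine; code 1 ('scanner')
    # is an arbitrary choice for this sketch.
    qform, qcode = refhdr.get_best_affine(), 1
imghdr.set_qform(qform, int(qcode))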

newpath=runtime.cwd)

imgnii.__class__(imgnii.get_data(), imgnii.affine, imghdr).to_filename(
    out_file)

I don't think this will work how you want. You may want to add tests.

If imgnii.affine does not match the result of imgnii.header.get_best_affine(), it will be pushed into the header, probably as the sform. If you've just put refhdr.get_sform() into the imghdr sform matrix, then these are unlikely to match, and you'll be pushing the original affine right back in.

A better approach would be:

imgnii.__class__(imgnii.get_data(), imghdr.get_best_affine(), imghdr).to_filename(
    out_file)
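
A small, self-contained check of that suggestion (synthetic data and the output file name are made up for illustration):

import numpy as np
import nibabel as nb

data = np.zeros((4, 4, 4), dtype=np.float32)

# A "reference" header carrying the target geometry (2 mm isotropic)
refhdr = nb.Nifti1Header()
refhdr.set_data_shape(data.shape)
refhdr.set_data_dtype(np.float32)
refhdr.set_qform(np.diag([2.0, 2.0, 2.0, 1.0]), code=1)
refhdr.set_sform(np.diag([2.0, 2.0, 2.0, 1.0]), code=1)

# Serialize using the header's own best affine so affine and header agree
imghdr = refhdr.copy()
nb.Nifti1Image(data, imghdr.get_best_affine(), imghdr).to_filename('fixed.nii.gz')

# On reload, the affine reflects the reference geometry
assert np.allclose(nb.load('fixed.nii.gz').affine, refhdr.get_best_affine())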

@oesteban (Member, Author)

This is sub-26_task-rest: https://2591-53608443-gh.circle-artifacts.com/0/tmp/data/reports/ds001240/sub-26_task-rest_bold_masks.svg which looks okay.

We are seeing ds005 fail due to a problem checking filenames that also happens in master. I'm guessing that will be a trivial fix in master.

Thumbs up?

@oesteban (Member, Author)

ds005 problems are addressed in #1344.

If there are no further comments or objections, I'll merge later today and close #1050, as this PR supersedes it.

Thanks to @ZhifangYe: this PR has only been possible because of all the new data we now have for regression tests.

@effigies (Member) left a review:

This looks good. I took another read through, and only saw one thing, which should have no impact one way or another, but would reduce the number of branches.

if pre_mask:
    workflow.connect([
        (inputnode, enhance_and_skullstrip_bold_wf, [('bold_mask', 'inputnode.pre_mask')]),
    ])

I think this should be okay to set unconditionally.
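
That is, something like:

# The same connection, made unconditionally; the downstream workflow can
# simply ignore pre_mask when it is not used.
workflow.connect([
    (inputnode, enhance_and_skullstrip_bold_wf, [('bold_mask', 'inputnode.pre_mask')]),
])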

@effigies added this to the 1.2.0 milestone on Oct 25, 2018
@zhifangy (Contributor) left a review:

The test results look good to me. I'm really happy to see the issue is resolved. 👍

I read through the code and only found that the docstring of init_enhance_and_skullstrip_bold_wf (line 185) doesn't match. I think it should be '7. Calculate a final mask as the intersection of 4) and 6).'
