
Enable antsApplyTransforms to select one volume in a 4D file to resample #1555

Open
effigies opened this issue Jun 20, 2023 · 5 comments

Comments

@effigies

Is your feature request related to a problem?

To apply a concatenated sequence of transforms to each volume in a 4D series (for example, head motion + susceptibility warp + coregistration), either the transforms must be the same for every volume or you must split the series and apply the desired transforms to each volume separately.

Proposed new features
I suggest adding either array-style syntax or a flag for selecting a specific volume of a 4D input, for example array notation or an --index flag:

antsApplyTransforms -i raw_bold.nii[0] -r mni.nii -o mni_vol0.nii \
    --transform vol0_motion.txt \
    --transform susceptibility_warp.h5 \
    --transform coreg.txt

antsApplyTransforms -i raw_bold.nii --index 0 -r mni.nii -o mni_vol0.nii \
    --transform vol0_motion.txt \
    --transform susceptibility_warp.h5 \
    --transform coreg.txt

Alternatives you've considered
The current approach is to split the series into individual volumes, apply the transformations, and then concatenate the resulting series. It works, but it generates 2N volumes for an N-time-point series.
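
For concreteness, a rough nibabel sketch of that split/apply/merge workaround (the filenames and the resampling step in the loop are placeholders, not a fixed interface):

import nibabel as nb
from nibabel.funcs import four_to_three, concat_images

bold = nb.load('raw_bold.nii')

# N 3D input volumes written to disk
for idx, vol in enumerate(four_to_three(bold)):
    vol.to_filename(f'vol{idx:04d}.nii')
    # ... run antsApplyTransforms on vol{idx:04d}.nii -> mni_vol{idx:04d}.nii ...

# ... plus another N resampled volumes, re-stacked into a 4D series
resampled = [nb.load(f'mni_vol{idx:04d}.nii') for idx in range(bold.shape[3])]
concat_images(resampled).to_filename('mni_bold.nii')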

Ideally, one could specify a per-volume transform as one element in a sequence of transforms, such as

antsApplyTransforms -i raw_bold.nii -r mni.nii -o mni_bold.nii \
    --transform [vol0_motion.txt, vol1_motion.txt, ...] \
    --transform susceptibility_warp.h5 \
    --transform coreg.txt

However, that seems like a more significant lift. Reading a single volume from a series would at least cut the number of files generated in half, which would be helpful in itself.

Additional context

As mentioned above, splitting files before applying transforms creates 2N output files. In the context of a nipype pipeline, add an additional factor of 5-10 inodes for working directories and their contents. This can become problematic when running a pipeline on many subjects on shared network filesystems.

@cookpa cookpa added the feature request label Jun 21, 2023
@cookpa
Member

cookpa commented Jun 21, 2023

The --index option could work, but would involve reading the 4D input file N times. Will that be better than doing the split?

The second option, using a vector of transforms, is more difficult. The ITK class does not support swapping out elements of the transform stack, so we'd have to keep track of the component transforms and rebuild the composite transform for each moving volume.
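
A hedged sketch of that bookkeeping in ITK's Python wrapping (assuming double-precision 3D transforms on disk, and using the hypothetical per-volume filenames from the proposal above): the shared transforms are read once, and a fresh composite is rebuilt for each moving volume rather than swapping one element of the stack.

import itk

shared = [itk.transformread('coreg.txt')[0]]  # same for every volume

def composite_for_volume(idx):
    # Rebuild the whole stack per volume; ordering shown only for illustration,
    # not necessarily the order antsApplyTransforms uses internally.
    composite = itk.CompositeTransform[itk.D, 3].New()
    for t in shared:
        composite.AddTransform(t)
    composite.AddTransform(itk.transformread(f'vol{idx}_motion.txt')[0])  # per-volume piece
    return composite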

@effigies
Author

effigies commented Jul 5, 2023

The --index option could work, but would involve reading the 4D input file N times. Will that be better than doing the split?

I suppose it's a question of whether the entire image is loaded into memory each time. In general, it is not difficult to load a single volume by calculating the offset in the byte stream, and this is not made much worse by compression, since NIfTIs are column-major and one volume is a contiguous segment. That said, I don't know how flexible ITK is on this.
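
In nibabel terms, the offset arithmetic for an uncompressed NIfTI looks roughly like this (a sketch; it ignores scl_slope/scl_inter scaling, and for .nii.gz nibabel's img.dataobj[..., idx] does the decompress-and-slice instead):

import numpy as np
import nibabel as nb

img = nb.load('raw_bold.nii')   # uncompressed NIfTI
vol_idx = 0

dtype = img.header.get_data_dtype()
vol_items = int(np.prod(img.shape[:3]))
# Volumes are contiguous, column-major blocks after the data offset
offset = int(img.dataobj.offset) + vol_idx * vol_items * dtype.itemsize

with open('raw_bold.nii', 'rb') as f:
    f.seek(offset)
    vol = np.fromfile(f, dtype=dtype, count=vol_items).reshape(img.shape[:3], order='F')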

@cookpa
Member

cookpa commented Jul 5, 2023

The default is to read the entire thing; I asked on Discourse whether it's possible to do it more efficiently:

https://discourse.itk.org/t/random-access-to-image-data/6034

@cookpa cookpa added the enhancement label Jul 5, 2023
@cookpa
Member

cookpa commented Jul 5, 2023

Looks like we can do it!

https://itk.org/Doxygen/html/classitk_1_1ExtractImageFilter.html
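
Roughly, in the Python wrapping, extracting one volume would look like this (a sketch; it assumes the build wraps 4D float images and the 4D-to-3D ExtractImageFilter instantiation):

import itk

bold_4d = itk.imread('raw_bold.nii', itk.F)   # (x, y, z, t)
vol_idx = 0

region = bold_4d.GetLargestPossibleRegion()
size = region.GetSize()
index = region.GetIndex()
size[3] = 0          # size 0 along time tells ExtractImageFilter to drop that axis
index[3] = vol_idx   # which volume to pull out
region.SetSize(size)
region.SetIndex(index)

extractor = itk.ExtractImageFilter.New(bold_4d, ExtractionRegion=region)
extractor.SetDirectionCollapseToSubmatrix()   # keep the spatial 3x3 direction matrix
extractor.Update()
vol_3d = extractor.GetOutput()                # 3D image for volume vol_idx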

@effigies
Author

As a follow-up: It would be very cool if I could write to a specific volume in an existing file. I think compression would cause problems here, but if I had an uncompressed NIfTI, the problem of writing just becomes checking the header and writing to an offset with memory mapping or in append+read mode.

In case code is clearer, I'd pre-create the image:

import nibabel as nb
import numpy as np

moving_image = nb.load('moving.nii.gz')
target_image = nb.load('target.nii.gz')

# Target spatial grid, moving image's time dimension
empty_output_image = nb.Nifti1Image(
    np.zeros(target_image.shape[:3] + moving_image.shape[3:], dtype='float32'),
    target_image.affine,
    target_image.header,
)
# No scaling
empty_output_image.header.set_data_dtype('float32')
empty_output_image.header.set_slope_inter(1, 0)

empty_output_image.to_filename('output.nii')

Then, I would have ANTs target corresponding offsets in the input and output images:

for IDX in {0..240}; do
    antsApplyTransforms -i moving.nii.gz -r target.nii.gz -o output.nii -u float \
        --index $IDX --out-index $IDX \
        --transform vol${IDX}_motion.txt \
        --transform susceptibility_warp.h5 \
        --transform coreg.txt
done

It looks like ITK 5.3 added features for working with Dask to take advantage of parallel processing, and ImageFileWriter::SetIORegion is probably the piece that would be needed.
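
On the nibabel/numpy side, writing one resampled volume into the pre-created, uncompressed output.nii would look roughly like this (a sketch of the memory-mapping idea, not tied to any particular ANTs flag):

import numpy as np
import nibabel as nb

out = nb.load('output.nii')                 # pre-created, uncompressed, unscaled
vol_idx = 0
resampled_vol = np.zeros(out.shape[:3], dtype='float32')  # stand-in for one resampled volume

dtype = out.header.get_data_dtype()
vol_bytes = int(np.prod(out.shape[:3])) * dtype.itemsize
offset = int(out.dataobj.offset) + vol_idx * vol_bytes    # volumes are contiguous blocks

mapped = np.memmap('output.nii', dtype=dtype, mode='r+',
                   offset=offset, shape=out.shape[:3], order='F')
mapped[:] = resampled_vol
mapped.flush()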
