
Different transforms applied to CT and label #1071

Closed

simonebonato opened this issue Apr 17, 2023 · 11 comments

simonebonato commented Apr 17, 2023

Is there an existing issue for this?

  • I have searched the existing issues

Problem summary

I have a CT scan and the corresponding label mask, and I want to perform some augmentations on them.

Initially they have different spacings, but after using ToCanonical and Resample(1.0) they become the same. However, when I apply some other augmentations and plot the label overlaid on the image (the same slice, of course), they no longer overlap and the label partially falls on the background.

Code for reproduction

from utils.plotting import plot_all
import matplotlib.pyplot as plt
import numpy as np
import torchio as tio

# CT_path and segmentation_path point to the CT volume and its label mask on disk
subject = tio.Subject(
    CT=tio.ScalarImage(CT_path),
    segmentation=tio.LabelMap(segmentation_path),
)

# Rescale the Intensity of the image
rescaler = tio.transforms.RescaleIntensity((0, 1), percentiles=(0.5, 99.5))
scaled_subject = rescaler(subject)

# Change voxels spacing and bring to canonical Orientation
canonical_spacing_transform = tio.Compose(
    [
        tio.transforms.Resample(1.0),
        tio.transforms.ToCanonical(),
    ]
)
can_spac_subsject = canonical_spacing_transform(scaled_subject)

# Apply data augmentation
augmentation_transform = tio.Compose(
    [
        tio.transforms.RandomAnisotropy(
            p=0.2, include=("CT", "segmentation"), label_keys=("segmentation",)
        ),
        tio.transforms.RandomFlip(
            include=("CT", "segmentation"), label_keys=("segmentation",)
        ),
        tio.transforms.RandomAffine(
            include=("CT", "segmentation"), label_keys=("segmentation",)
        ),
        tio.transforms.RandomNoise(include=("CT",), label_keys=("segmentation",)),
        tio.transforms.RandomBlur(include=("CT",), label_keys=("segmentation",)),
        tio.transforms.RandomBiasField(
            p=0.2, include=("CT",), label_keys=("segmentation",)
        ),
        tio.transforms.RandomElasticDeformation(
            p=0.2, include=("CT", "segmentation"), label_keys=("segmentation",)
        ),
        tio.transforms.RandomGamma(p=0.5, include=("CT",), label_keys=("segmentation",)),
    ]
)

augmented_subject = augmentation_transform(scaled_subject)

Actual outcome

When I plot the label on the CT, this is the result:

[screenshot: bad_augmentation, label misaligned with the CT after augmentation]

While before the transforms it looks like this:

[screenshot: pre_augmentation, label correctly overlaid on the CT]

The image and the segmentation are the following: [link to the data]

The slice I plotted is CT[500, :, :], and the same slice for the label.

Error messages

No response

Expected outcome

What I would like to obtain is the label being on top of the CT, and not shifted.

System info

No response

@romainVala (Contributor)

Note that in the code you apply the other augmentations to scaled_subject but not to can_spac_subsject.
Not sure if it makes any difference.
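
If the resampled subject was meant to be augmented, the last line would presumably look like this (a minimal sketch reusing the variable names from the snippet above):

# apply the augmentations to the resampled/canonical subject, not the merely rescaled one
augmented_subject = augmentation_transform(can_spac_subsject)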

simonebonato (Author) commented Apr 17, 2023

Thank you for the suggestion.

I tried applying the transforms RandomFlip and RandomAffine individually, since they affect the geometry of the CT and the label mask rather than the pixel intensities.

I found out that the issue comes from RandomAffine. I tried changing the hyperparameters "center" (image or origin) and "isotropic" (True or False) in all their combinations, but the misalignment still persists.

I then checked the spacing of the CT and the segmentation mask and got the following:

[screenshot: the CT and the segmentation report different spacings]

So could the reason be that they have different spacings? And if I try to give them both the same spacing using Resample(1.0), it is also going to change the resolution of the image, and at that point I lose the alignment anyway...

Any idea how to solve this? Otherwise I will just check which spatial transforms cause the issue and which don't.
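
For reference, a quick way to inspect that metadata directly (a minimal sketch, reusing the subject built in the reproduction code; spacing, shape and affine are standard TorchIO Image properties):

# compare the spatial metadata of the two images before any transform
print(subject['CT'].spacing, subject['CT'].shape)
print(subject['segmentation'].spacing, subject['segmentation'].shape)
print(subject['CT'].affine)
print(subject['segmentation'].affine)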

@romainVala (Contributor)

OK, difficult to figure out without an example (if you can share one volume).

It is weird to have the exact same matrix size but a very different voxel size: it means that your CT volume has a FOV twice as big (in the z direction) compared to the segmentation...

Testing the superposition with different voxel sizes may be handy, and the result may depend on the viewer you use.

In Python I am not sure, but it may display the matrix (without taking the voxel size into account, for instance). With mrview (the viewer from MRtrix) or Slicer you will get a more correct comparison.

If the two are well aligned in Slicer (or mrview), then I would test using
tresample = tio.Resample(target='CT', include='segmentation')
or the opposite, depending on the desired final resolution:
tresample = tio.Resample(target='segmentation', include='CT')

But I suspect that the affine (and thus the voxel size) of the segmentation is not correct and that it should be the same as for the CT volume... (Was the segmentation not defined from the CT scan? If yes, then it should have the exact same voxel size (and affine).)

In that case you should use the tio.CopyAffine('CT') transform to correct the segmentation, as sketched below.
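
A rough sketch of how that would slot into the original pipeline, assuming the CT metadata is the trustworthy one:

# copy the CT affine onto the segmentation before any spatial transform
fix_affine = tio.CopyAffine('CT')
fixed_subject = fix_affine(subject)
# then rescale / resample / augment fixed_subject as before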

fepegar (Owner) commented Apr 17, 2023

Hi, @simonebonato. As @romainVala suggested, you can use CopyAffine as the first transform, as it seems that your segmentation is simply lacking the spatial metadata:

copy_metadata = tio.CopyAffine('CT')

However, I just tried that and the transform did nothing in my example! I'll debug this and come back to you.

fepegar (Owner) commented Apr 17, 2023

For visibility, this is somehow not working!

import torch
import torchio as tio
ct = tio.ScalarImage('/tmp/upload_folder/case_01_IMG_CT.nrrd')
seg = tio.LabelMap('/tmp/upload_folder/case_01_segmentation.nrrd')
seg.affine = ct.affine
print(seg.affine)  # still prints the identity below, not the copied CT affine
array([[-1.,  0.,  0.,  0.],
       [ 0., -1.,  0.,  0.],
       [ 0.,  0.,  1.,  0.],
       [ 0.,  0.,  0.,  1.]])

fepegar (Owner) commented Apr 17, 2023

Update: seg[tio.AFFINE] gives me the expected output. Maybe Image shouldn't inherit from dict.
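
For clarity, a minimal sketch of the discrepancy (same files as in the previous snippet; behavior as described above):

import torchio as tio

ct = tio.ScalarImage('/tmp/upload_folder/case_01_IMG_CT.nrrd')
seg = tio.LabelMap('/tmp/upload_folder/case_01_segmentation.nrrd')
seg.affine = ct.affine
print(seg.affine)        # the property re-reads the affine from disk: identity
print(seg[tio.AFFINE])   # the dict entry holds the CT affine that was just assigned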

fepegar (Owner) commented Apr 17, 2023

Ok, this happens because the affine is not being read correctly, since the segmentation hasn't been loaded yet. That will be addressed. This is the relevant branch:

if self._loaded or self._is_dir() or self._is_multipath():
    affine = self[AFFINE]
else:
    assert self.path is not None
    assert isinstance(self.path, (str, Path))
    affine = read_affine(self.path)
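
Until that is fixed, forcing the image to load before assigning the affine should take the first branch above; this is essentially what the seg.data trick in the next comment does (a sketch, not the official workaround):

seg = tio.LabelMap('/tmp/upload_folder/case_01_segmentation.nrrd')
seg.load()              # marks the image as loaded, so the stored affine is returned
seg.affine = ct.affine  # now seg.affine and seg[tio.AFFINE] should agree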

Back to this issue.

fepegar (Owner) commented Apr 17, 2023

import torch
import torchio as tio
ct = tio.ScalarImage('/tmp/upload_folder/case_01_IMG_CT.nrrd')
seg = tio.LabelMap('/tmp/upload_folder/case_01_segmentation.nrrd')

seg.data  # accessing the data loads the image; this temporarily overcomes the bug mentioned above

subject = tio.Subject(ct=ct, seg=seg)

copy_metadata = tio.CopyAffine('ct')
fixed = copy_metadata(subject)
fixed.seg.save('/tmp/upload_folder/case_01_segmentation_fixed.nrrd')

The image seems aligned after copying the affine.

[screenshot: segmentation correctly overlaid on the CT after CopyAffine]

All looking good after applying RandomAffine:

transform = tio.RandomAffine()

torch.manual_seed(0)
transformed = transform(fixed)
transformed.ct.save('/tmp/upload_folder/case_01_IMG_CT_transformed.nrrd')
transformed.seg.save('/tmp/upload_folder/case_01_segmentation_fixed_transformed.nrrd')

[screenshot: CT and segmentation still aligned after RandomAffine]

fepegar (Owner) commented Apr 17, 2023

(Quoting the earlier comment above, where setting seg.affine = ct.affine still printed the identity affine.)
This has been fixed in v0.18.91.

@romainVala (Contributor)

Oh, I missed the link to the data.
Thanks @fepegar for the fix.

I think this is a typical error which is not easy to understand when one is not used to 3D affines.
@simonebonato, to understand it better, try the torchio plotting utility:

compare
subject.plot()
and
fixed.plot()

Taking the affine (and thus the voxel size) into account is important.

@simonebonato (Author)

Thank you both @fepegar and @romainVala for solving my problem.

And thanks @romainVala, I did what you told me and saw the difference:

compare subject.plot() and fixed.plot()

Next time I will check the spacing directly before doing any operation.
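
For reference, a small sanity check along those lines (a sketch; Subject.check_consistent_space exists in recent TorchIO versions, worth verifying against the installed release):

import numpy as np

# print the spacing of every image in the subject
for name, image in subject.get_images_dict(intensity_only=False).items():
    print(name, image.spacing)

# compare the affines directly, or let TorchIO raise if the images disagree
print(np.allclose(subject['CT'].affine, subject['segmentation'].affine))
subject.check_consistent_space()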
