
Patch based augmentations/transformations #954

Closed
Paddy-Xu opened this issue Sep 4, 2022 · 8 comments
Labels
enhancement (New feature or request)

Comments

Paddy-Xu commented Sep 4, 2022

🚀 Feature
Patch-based augmentations

Motivation

I think those augmentations are very useful, but as far as I understand, they operate on the whole image.
Is it possible to do patch-based augmentations?

In theory it is equivalent whether you apply the transformation first or extract the patch first. However, I just noticed that image-wise transforms like elastic deformation and rotation are extremely memory-consuming, so it makes more sense to extract the patch first and then apply the transformation.

Paddy-Xu added the enhancement label Sep 4, 2022
Paddy-Xu changed the title from "Patch based augmentations" to "Patch based augmentations/transformations" Sep 4, 2022
romainVala (Contributor) commented Sep 7, 2022

No, this is not equivalent:

The transformations are much more realistic if you perform them on the whole volume. Rotation, for instance, will add padding values at the border of the FOV, so performing a rotation on small patches may not be a good idea (definitely not the same as doing it on the whole volume before taking a patch).

I never tested it, but I think you can already do it: if you take the torchio.Queue (without a transform), you can apply a transform to each item ... no?

Maybe another alternative would be a torchio transform RandomCrop (with a fixed patch size), similar to what torchio.Queue is doing, so that you can reduce your input size before applying other augmentations, as proposed by #847. A rough sketch of that idea follows.
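
For illustration only, here is a minimal sketch of such a random-crop step. This is not the implementation in #847; the helper name and the idea of reusing the existing sampler are assumptions:

import torchio as tio

# Hypothetical helper: crop a subject to a fixed target shape at a
# uniformly random location by drawing a single patch with the
# existing UniformSampler.
def random_crop(subject, target_shape):
    sampler = tio.data.UniformSampler(target_shape)
    return next(sampler(subject))  # sampler(subject) is a patch generator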

Paddy-Xu (Author) commented Sep 7, 2022

> No, this is not equivalent:
>
> The transformations are much more realistic if you perform them on the whole volume. Rotation, for instance, will add padding values at the border of the FOV, so performing a rotation on small patches may not be a good idea (definitely not the same as doing it on the whole volume before taking a patch).
>
> I never tested it, but I think you can already do it: if you take the torchio.Queue (without a transform), you can apply a transform to each item ... no?
>
> Maybe another alternative would be a torchio transform RandomCrop (with a fixed patch size), similar to what torchio.Queue is doing, so that you can reduce your input size before applying other augmentations, as proposed by #847.

Hi,

Thanks a lot for your reply! OK, I understand the borders will not be equivalent, but apart from that they should be pretty much the same, especially if I only do rotations of around k * 90 degrees. My input volume is really huge and I simply cannot do these operations on the fly; I would have to store the transformed volumes to disk beforehand if doing a volume-wise transformation.

However, the input of a transform can be one of torchio.Subject, torchio.Image, numpy.ndarray, torch.Tensor, or SimpleITK.Image; it cannot take the tio.Queue type. Maybe I will have to manually modify the __getitem__ function defined in tio.Queue, or I might need to define another custom class that wraps tio.Queue and applies the transformations inside its own __getitem__ function before passing it to the PyTorch DataLoader?

romainVala (Contributor) commented

Yes, you are right, it is not that easy. I forgot that the torch DataLoader, when instantiated with a tio.Queue dataset, returns a dict and not a torchio.Subject. But all the information necessary to create a Subject is there ...

Another argument is speed: with the Queue you can take several patches per volume, but you perform the transform only once.

If the input image is really too big, I would go for a compromise: say you want a patch of 128^3, then I would still do a volume transformation, but start with a random crop to a target shape of, for instance, 212^3. (That way you have a smaller volume and can proceed as we do with 3D MRI.) A sketch of this pipeline follows.

Out of curiosity, which modality are you working with?
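
A minimal sketch of this compromise, reusing the hypothetical random_crop helper from earlier in this thread; the sizes, the subjects list, and the queue settings are placeholders, and it assumes SubjectsDataset accepts any callable as its transform:

import torchio as tio

# Expensive spatial augmentations, run on the 212^3 crop instead of the
# full volume.
augment = tio.Compose([
    tio.RandomAffine(degrees=10),
    tio.RandomElasticDeformation(),
])

def transform(subject):
    # Random 212^3 crop first, then augment the much smaller block.
    return augment(random_crop(subject, (212, 212, 212)))

subjects_dataset = tio.SubjectsDataset(subjects, transform=transform)

# The queue then draws several 128^3 patches from each cropped, augmented
# volume before moving on to the next subject.
queue = tio.Queue(
    subjects_dataset,
    max_length=16,
    samples_per_volume=4,
    sampler=tio.data.UniformSampler(128),
)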

romainVala (Contributor) commented

I did not test PR #847, but I think it should solve your issue.
Maybe this transform needs to be improved so that it takes a patch sampler as an input argument; that would be necessary if you need to weight the locations of the chosen patches. (I guess the current implementation uses a uniform random patch distribution.) The existing WeightedSampler, shown below, already does this at the sampler level.
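
For reference, a sketch of how patch locations can already be weighted with torchio's WeightedSampler; the file paths below are placeholders:

import torchio as tio

# A probability map, stored as an image in the subject, steers where
# patches are sampled from.
subject = tio.Subject(
    ct=tio.ScalarImage('image.nii.gz'),  # placeholder path
    sampling_map=tio.Image('weights.nii.gz', type=tio.SAMPLING_MAP),  # placeholder path
)
sampler = tio.data.WeightedSampler(patch_size=128, probability_map='sampling_map')
patch = next(sampler(subject))  # location drawn in proportion to the map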

Paddy-Xu (Author) commented Sep 8, 2022

> Yes, you are right, it is not that easy. I forgot that the torch DataLoader, when instantiated with a tio.Queue dataset, returns a dict and not a torchio.Subject. But all the information necessary to create a Subject is there ...
>
> Another argument is speed: with the Queue you can take several patches per volume, but you perform the transform only once.
>
> If the input image is really too big, I would go for a compromise: say you want a patch of 128^3, then I would still do a volume transformation, but start with a random crop to a target shape of, for instance, 212^3. (That way you have a smaller volume and can proceed as we do with 3D MRI.)
>
> Out of curiosity, which modality are you working with?

Thanks! That is a good idea. Just to be sure: the patch is generated after the volume transformation, so the cropping size does need to be larger than the patch size, otherwise the patches would all come from the same location. But after each batch, the volume will be cropped from a different center?

I am working with a special kind of CT scan of roughly 800 × 800 × 800 voxels.

romainVala (Contributor) commented

Yes, the target_shape of the RandomCropOrPad needs to be larger than the patch size if you use the Queue with several patches per volume (which is a good idea to gain speed), and the margin will also help with the "border effect" you may get with affine and elastic deformations.

But yes, after the queue has drawn samples_per_volume patches, a new volume will be loaded and a new RandomCropOrPad will pick a different center.

Paddy-Xu (Author) commented Sep 8, 2022

Thanks!

Actually this small trick seems to work 😂. I will check whether it is better to crop first when I have non-90-degree rotations.


class CustomQueue(tio.Queue):
    # Applies the augmentations to each patch as it is drawn from the queue.
    def __getitem__(self, _):
        sample_patch = super().__getitem__(_)

        # Each RandomAffine rotates by exactly 90 degrees around one axis,
        # applied independently with probability 0.6.
        augment = tio.Compose([
            tio.RandomAffine(scales=(1, 1), degrees=(90, 90, 0, 0, 0, 0), translation=0, p=0.6),
            tio.RandomAffine(scales=(1, 1), degrees=(0, 0, 90, 90, 0, 0), translation=0, p=0.6),
            tio.RandomAffine(scales=(1, 1), degrees=(0, 0, 0, 0, 90, 90), translation=0, p=0.6),
        ])

        return augment(sample_patch)
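
For completeness, such a queue would plug into a standard DataLoader like any tio.Queue; the dataset, sizes, and batch size below are placeholders, and note that the DataLoader itself must use num_workers=0, since the queue manages its own workers:

from torch.utils.data import DataLoader

queue = CustomQueue(
    subjects_dataset,
    max_length=16,
    samples_per_volume=4,
    sampler=tio.data.UniformSampler(128),
)
loader = DataLoader(queue, batch_size=2, num_workers=0)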

fepegar (Owner) commented Oct 9, 2022

Thank you both for sharing!

fepegar closed this as completed Oct 9, 2022