Patch-based augmentations/transformations #954
No, this is not equivalent: the transformations are much more realistic if you perform them on the whole volume. Rotation, for instance, will add padding values at the border of the FOV, so performing a rotation on small patches may not be a good idea (definitely not the same as rotating the whole volume before taking a patch). I never tested it, but I think you can already do it. Maybe another alternative would be a torchio transform RandomCrop (with a fixed patch size), similar to what torchio.Queue is doing, so that you can reduce your input size before applying other augmentations, as proposed by #847.
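The border effect described above can be illustrated with a minimal sketch (using `scipy.ndimage` rather than torchio, purely as an illustration): rotating by an angle that is not a multiple of 90° pulls padding values into the corners of the FOV.

```python
import numpy as np
from scipy.ndimage import rotate

# A constant "volume" slice: every voxel is 1.0.
vol = np.ones((8, 8))

# Rotate by 45 degrees while keeping the original shape.
# Voxels whose source location falls outside the FOV receive cval.
rot = rotate(vol, angle=45, reshape=False, order=0, cval=-1.0)

print(rot[0, 0])  # corner now holds the padding value: -1.0
print(rot[4, 4])  # the center is untouched: 1.0
```

A patch taken near the border of `rot` would contain these artificial padding values, which is exactly why rotating small patches directly is discouraged.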
Hi, thanks a lot for your reply! OK, I understand the borders will not be equivalent, but besides that they should be quite the same, especially if, say, I only do k * 90 degree rotations. My input volume is really huge and I simply cannot do these operations on the fly; I would have to store them to disk beforehand if doing a volume-wise transformation. However, the input of transforms can be one of torchio.Subject, torchio.Image, numpy.ndarray, torch.Tensor, SimpleITK.Image; it cannot take the tio.Queue type. Maybe I will have to manually modify the
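The k * 90° intuition can be checked quickly (a numpy sketch, independent of torchio): for a rotation about the volume center, the central patch of the rotated volume equals the rotated central patch, so no border padding is involved.

```python
import numpy as np

rng = np.random.default_rng(0)
vol = rng.random((9, 9))  # odd size so the center is a single voxel

# Rotate the whole volume by 90 degrees, then take the central 3x3 patch...
patch_after = np.rot90(vol)[3:6, 3:6]

# ...versus taking the central 3x3 patch first, then rotating it.
patch_before = np.rot90(vol[3:6, 3:6])

assert np.array_equal(patch_after, patch_before)
```

Note that this only holds for the center patch as written: an off-center patch is moved to a different location by the rotation, so the equivalence would require remapping the patch coordinates accordingly.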
Yes, you are right, it is not that easy. I forgot that a torch DataLoader, when instantiated with a tio.Queue dataset, returns a dict and not a torchio.Subject. But all the necessary information to create a Subject is there... Another argument is speed: with the Queue you can take several patches per volume but perform the transform only once. If the input image is really too big, I would go for a compromise. Out of curiosity, which modality are you working with?
I did not test PR #847, but I think it should solve your issue.
Thanks! That is a good idea. Just to be sure: the patch is generated after the volume transformation, so the cropping size does need to be larger than the patch size; otherwise the patches will all come from the same location. But after each batch, will the volume be cropped around a different center? I am working with a special kind of CT scan with dimensions around 800 × 800 × 800.
Yes, the target_shape of RandomCropOrPad needs to be larger if you use the Queue with several patches per volume (which is a good idea to gain speed), and it will also help with the "border effect" you may get with affine and elastic deformations. And yes, after the Queue has selected samples_per_volume samples, a new volume will be taken and a new RandomCropOrPad will pick a different center.
Thanks! Actually, this small trick seems to work 😂. I will try to see whether it is better to crop first when I have non-90-degree rotations.
Thank you both for sharing!
🚀 Feature
patch based augmentations
Motivation
I think these augmentations are very useful, but as far as I understand, they are applied to the whole image.
Is it possible to do patch-based augmentations?
In theory it is equivalent whether you do the transformation first or extract the patch first. However, I just noticed that image-wise transforms like elastic deformation and rotation are extremely memory-consuming, so it makes more sense to first extract a patch and then do the transformation.
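To put rough numbers on the memory concern (a back-of-the-envelope estimate, assuming float32 voxels and an 800³ volume like the one mentioned later in this thread): a single copy of such a volume is already close to 2 GiB, and transforms that materialize intermediate volumes or displacement fields multiply that.

```python
# Rough memory estimate for one 800^3 float32 volume.
voxels = 800 ** 3
bytes_per_voxel = 4  # float32
gib = voxels * bytes_per_voxel / 2**30
print(f"{gib:.2f} GiB")  # ~1.91 GiB per copy of the volume
```

An elastic deformation additionally needs a dense displacement field (three components per voxel), so the working set grows to several copies of this figure, which is why cropping first helps so much.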