
Transforms and samplers order in queue / transforms after sampling #445

Closed
PeterMcGor opened this issue Feb 4, 2021 · 4 comments
Labels
enhancement New feature or request

Comments

@PeterMcGor

🚀 Feature

Hey again!

In the current version, the order of transforms and sampling in the queue is fixed, as is clearly explained here.
Sometimes transformations need to be applied to the patches instead of the original image (or they obviously produce different results when applied to patches). Although there are some workarounds, it would be nice to have a built-in option in the Queue object to specify transformations that run after patch extraction.
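Such an option might look like a thin wrapper that applies a transform to each patch after it is popped. This is only a sketch of the requested behaviour, not existing torchio API; `TransformedPatches` is a hypothetical name:

```python
class TransformedPatches:
    """Wrap any indexable patch source (e.g. a torchio Queue) so that a
    callable runs on each patch after extraction.

    Hypothetical helper for illustration, not part of torchio."""

    def __init__(self, patches, transform):
        self.patches = patches      # e.g. a tio.Queue instance
        self.transform = transform  # e.g. a tio.Compose of patch-level transforms

    def __len__(self):
        return len(self.patches)

    def __getitem__(self, index):
        # Apply the transform lazily, once per extracted patch
        return self.transform(self.patches[index])
```

Note that, as discussed below, the transform would run in the main process here, not in the Queue's workers.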

Motivation
Allow defining patch-based transformations

Alternatives

I have looked through old issues but could not find anything related; I may well have missed it, so please let me know if that is the case.

Best!

@PeterMcGor PeterMcGor added the enhancement New feature or request label Feb 4, 2021
@fepegar
Owner

fepegar commented Feb 4, 2021

Hi, @PeterMcGor.

Which transforms would you like to use? I suppose a patches_transform could be passed to the Queue. However, it wouldn't be applied using multiprocessing (I think).

@PeterMcGor
Author

Hi!

For example, intensity transformations based on the patch statistics, or spatial ones. Perhaps the best example is a 2D architecture that uses 2D patches. Depending on the setup, some works standardise/normalise the values using the whole dataset, the volume, or the slice.
Since torchio currently pops patches from a list, multiprocessing is obviously not possible (by just adding a set of transformations after patch extraction). There are some workarounds, closely related to the hot topic of Queue and GPU utilisation (#393). It looks like we are all using dirty workarounds, but none of us has the time to solve this properly... I will try to keep tabs on this.
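Patch-level standardisation of the kind described here can be sketched with plain torch (a hypothetical helper applied after patch extraction, not part of torchio):

```python
import torch

def znormalize_patch(patch: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Standardise a single patch using its own statistics
    (zero mean, unit variance), rather than volume- or dataset-level stats."""
    mean = patch.mean()
    std = patch.std()
    return (patch - mean) / (std + eps)
```

A function like this could be called on each patch tensor inside the training loop, right after it comes out of the Queue.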

Cheers!

@fepegar
Owner

fepegar commented Feb 5, 2021

If it's for 2D patches and speed is not super critical, you might get away with applying the transform with a single process:

```python
In [1]: import torchio as tio

In [2]: transform = tio.Compose([tio.RescaleIntensity(), tio.RandomFlip((0, 1))])

In [3]: import torch

In [4]: image = tio.ScalarImage(tensor=torch.rand(1, 256, 256, 1))

In [5]: %timeit transform(image)
1.32 ms ± 54.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [6]: transform = tio.Compose([tio.RescaleIntensity(), tio.RandomFlip((0, 1)), tio.RandomElasticDeformation()])

In [7]: %timeit transform(image)
54.5 ms ± 3.03 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

In [8]: transform = tio.Compose([tio.RescaleIntensity(), tio.RandomFlip((0, 1)), tio.RandomAffine()])

In [9]: %timeit transform(image)
4.54 ms ± 426 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
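For readers without the full pipeline at hand, the cheap intensity transform in the first Compose can be approximated on a raw patch tensor with plain torch. This is an illustrative re-implementation of the default (0, 1) rescaling behaviour, not torchio's actual code:

```python
import torch

def rescale_intensity(patch: torch.Tensor,
                      out_min: float = 0.0, out_max: float = 1.0) -> torch.Tensor:
    # Map the patch intensities linearly to [out_min, out_max],
    # mimicking the default behaviour of tio.RescaleIntensity.
    lo, hi = patch.min(), patch.max()
    scale = (hi - lo).clamp_min(1e-8)  # guard against constant patches
    return (patch - lo) / scale * (out_max - out_min) + out_min
```

Applied per patch in a single process, this kind of operation sits in the same millisecond range as the first timing above.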

@fepegar
Owner

fepegar commented Feb 10, 2021

Closing this unless there is a specific request. Feel free to reopen.
