Conversation

Contributor

@rijobro rijobro commented Apr 14, 2022

Add collation and decollation for MetaTensor. Also add support for out= used as a kwarg, e.g., torch.add(a, b, out=c).
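
A minimal sketch of the `out=` pattern described above (illustrative only; it assumes the MetaTensor constructor wraps a plain tensor, as in monai.data.meta_tensor):

import torch
from monai.data.meta_tensor import MetaTensor

a = MetaTensor(torch.rand((1, 20, 20)))
b = MetaTensor(torch.rand((1, 20, 20)))
c = MetaTensor(torch.empty((1, 20, 20)))
torch.add(a, b, out=c)  # result is written into c, which stays a MetaTensor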

Non-breaking, so going into dev.

Status

Ready

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • New tests added to cover the changes.

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
@rijobro rijobro requested review from Nic-Ma and wyli April 14, 2022 14:38
rijobro added 4 commits April 14, 2022 15:39
Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
Contributor

@Nic-Ma Nic-Ma left a comment

Hi @rijobro ,

Thanks for the quick update.
I am wondering whether it's possible to give a warning if the user doesn't use our list_data_collate? Otherwise, I feel it is easily ignored and could lead to unknown errors.

Thanks.

Contributor

Nic-Ma commented Apr 19, 2022

/build

rijobro added 2 commits April 20, 2022 14:20
Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
Contributor Author

rijobro commented Apr 20, 2022

@ericspod @wyli @Nic-Ma I upgraded the logic so that a MetaTensor knows whether or not it is a batch of data. This way, indexing into and iterating over a batch return the corresponding subset of the metadata, as expected:

dl = DataLoader(ds)
batch = next(iter(dl))
batch[0]  # should only return the 0th image and 0th metadata
batch[:, -1]  # should return all metadata
batch[..., -1]  # should return all metadata
next(iter(batch))  # should return 0th image and 0th metadata.
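
A minimal sketch of how this behaviour can be checked, assuming MetaTensor exposes its metadata via a `.meta` dictionary:

import torch
from monai.data.meta_tensor import MetaTensor
from monai.data.dataset import Dataset
from monai.data.dataloader import DataLoader

data = [MetaTensor(torch.rand((1, 20, 20))) for _ in range(4)]
dl = DataLoader(Dataset(data), batch_size=4)
batch = next(iter(dl))

img0 = batch[0]      # a single image carrying only the 0th metadata
sub = batch[:, -1]   # slicing non-batch dimensions keeps all metadata
print(type(img0), list(img0.meta.keys()))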

rijobro added 5 commits April 20, 2022 14:30
Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>
Contributor Author

rijobro commented Apr 20, 2022

@Nic-Ma could you run /build?

@rijobro rijobro enabled auto-merge (squash) April 20, 2022 14:38
@rijobro rijobro changed the title from "MetaTensor: collate, decollate, dataset, dataloader, out=" to "MetaTensor: collate; decollate; dataset; dataloader; out=; indexing and iterating across batches" Apr 20, 2022
Contributor

Nic-Ma commented Apr 20, 2022

Hi @rijobro ,

Thanks for your update.
I am wondering whether it's possible to give a warning if the user doesn't use our list_data_collate?
If not, let's merge this PR for now.

Thanks.

@Nic-Ma Nic-Ma disabled auto-merge April 20, 2022 14:46
Contributor

Nic-Ma commented Apr 20, 2022

/build

Contributor Author

rijobro commented Apr 20, 2022

From the following code snippet, the first case is good (it uses our collation fn). The middle case could report a warning, but how would we handle the last case? Or are you just hoping for a warning in the middle case? (One possible warning helper for the middle case is sketched after the snippet.)

Edit: lambda x: x would also be OK as a collation function, as the data will be a list and each element is kept independent.

import torch
from monai.data.meta_tensor import MetaTensor
from monai.data.dataset import Dataset
from monai.data.dataloader import DataLoader
from torch.utils.data import DataLoader as TorchDataLoader

data = [MetaTensor(torch.rand((1, 20, 20))) for _ in range(5)]
ds = Dataset(data)

# good
dl = DataLoader(ds)
next(iter(dl))

# can report warning
dl_no_collate = DataLoader(ds, collate_fn=lambda x: x)
next(iter(dl_no_collate))

# can't report warning
torch_dl = TorchDataLoader(ds)
next(iter(torch_dl))
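
One hypothetical way to surface a warning for the middle case (not part of MONAI, just a sketch): check whether a loaded batch is still an un-collated list of MetaTensors before it is used.

import warnings
from monai.data.meta_tensor import MetaTensor

def warn_if_not_collated(batch):
    # hypothetical helper: warn when MetaTensors were loaded without list_data_collate
    if isinstance(batch, list) and batch and all(isinstance(b, MetaTensor) for b in batch):
        warnings.warn(
            "batch is a list of MetaTensors; metadata was not collated. "
            "Consider monai.data.DataLoader or monai.data.list_data_collate."
        )
    return batch

warn_if_not_collated(next(iter(dl_no_collate)))  # warns
warn_if_not_collated(next(iter(dl)))             # silent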

Contributor

Nic-Ma commented Apr 20, 2022

Hi @rijobro ,

Yeah, I was asking about the last case; I understand it's hard to give a warning there. I'm just worried about the user experience.
If you don't have any ideas, I am OK to merge this PR first.

Thanks.

Contributor Author

rijobro commented Apr 20, 2022

I don't have any better ideas at the moment, but will certainly reflect on it. The MetaTensor docstring contains this info:

When creating a batch with this class, use monai.data.DataLoader as opposed to torch.utils.data.DataLoader, as this will take care of collating the metadata properly.

@rijobro rijobro merged commit 6dfb6a8 into Project-MONAI:dev Apr 20, 2022
@rijobro rijobro deleted the MetaTensor_collate_decollate branch April 20, 2022 16:12
Can-Zhao added a commit to Can-Zhao/MONAI that referenced this pull request May 10, 2022
Add padding to filter to ensure same size after anti-aliasing

Use replicate padding instead of zero padding to avoid artifacts at non-zero boundaries

Reuse GaussianSmooth

4073 Enhance DynUNet doc-strings (Project-MONAI#4102)

* Fix doc-string errors

Signed-off-by: Yiheng Wang <vennw@nvidia.com>

* remove duplicate places

Signed-off-by: Yiheng Wang <vennw@nvidia.com>

4105 drops pt16 support (Project-MONAI#4106)

* update sys req

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* temp test

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* update code for torch>=1.7

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* temp tests

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* fixes tests

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* autofix

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* fixes import

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* clear cache

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* update based on comments

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* remove temp cmd

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

Make `pixelshuffle` scriptable (Project-MONAI#4109)

* Update the existing functionality to comply with the `torch.jit.script` function.

Signed-off-by: Ramon Emiliani <ramon@afxmedical.com>

meta tensor (Project-MONAI#4077)

* meta tensor

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

4084 Add kwargs for `Tensor.to()` in engines (Project-MONAI#4112)

* [DLMED] add kwargs for to() API

Signed-off-by: Nic Ma <nma@nvidia.com>

* [MONAI] python code formatting

Signed-off-by: monai-bot <monai.miccai2019@gmail.com>

* [DLMED] fix typo

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] fix flake8

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] update according to comments

Signed-off-by: Nic Ma <nma@nvidia.com>

Co-authored-by: monai-bot <monai.miccai2019@gmail.com>

fixes pytorch version tests (Project-MONAI#4127)

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

update meta tensor api (Project-MONAI#4131)

* update meta tensor api

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* update based on comments

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

runtests.sh isort (Project-MONAI#4134)

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

update citation (Project-MONAI#4133)

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

`ToMetaTensor` and `FromMetaTensor` transforms (Project-MONAI#4115)

to and from meta

no skip if before pytorch 1.7 (Project-MONAI#4139)

* no skip if before pytorch 1.7

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

* fix

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

* fix

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

[DLMED] fix file name in meta (Project-MONAI#4145)

Signed-off-by: Nic Ma <nma@nvidia.com>

4116 Add support for advanced args of AMP (Project-MONAI#4132)

* [DLMED] fix typo in bundle scripts

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] add support for AMP args

Signed-off-by: Nic Ma <nma@nvidia.com>

* [MONAI] python code formatting

Signed-off-by: monai-bot <monai.miccai2019@gmail.com>

* [DLMED] fix flake8

Signed-off-by: Nic Ma <nma@nvidia.com>

Co-authored-by: monai-bot <monai.miccai2019@gmail.com>

New wsireader (Project-MONAI#4147)

`MetaTensor`: collate; decollate; dataset; dataloader; out=; indexing and iterating across batches (Project-MONAI#4137)
