909 Drop deprecated APIs in IO transforms (#1448)
* [DLMED] remove deprecated APIs in IO transforms

Signed-off-by: Nic Ma <nma@nvidia.com>
Nic-Ma authored Jan 14, 2021
1 parent 6003c36 commit 8084394
Showing 17 changed files with 54 additions and 830 deletions.
2 changes: 1 addition & 1 deletion docs/source/highlights.md
@@ -39,7 +39,7 @@ There is a rich set of transforms in six categories: Crop & Pad, Intensity, IO,
### 2. Medical specific transforms
MONAI aims to provide comprehensive medical image specific
transformations. These currently include, for example:
- `LoadNifti`: Load Nifti format file from provided path
- `LoadImage`: Load medical specific formats file from provided path
- `Spacing`: Resample input image into the specified `pixdim`
- `Orientation`: Change the image's orientation into the specified `axcodes`
- `RandGaussianNoise`: Perturb image intensities by adding statistical noises
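The change above folds the format specific `LoadNifti` loader into a single format agnostic `LoadImage` transform. The core idea, picking a reader from the file suffix, can be sketched in plain Python; the function and mapping below are purely illustrative, not MONAI's internals, though the reader names mirror MONAI's `NibabelReader`, `PILReader`, `NumpyReader`, and `ITKReader`:

```python
# Hypothetical sketch of suffix-based reader dispatch -- the idea behind
# the unified LoadImage transform. Not MONAI's actual implementation.
def pick_reader(path: str) -> str:
    suffix_map = {
        ".nii.gz": "NibabelReader",
        ".nii": "NibabelReader",
        ".png": "PILReader",
        ".npy": "NumpyReader",
        ".npz": "NumpyReader",
    }
    # Try longer suffixes first so ".nii.gz" wins over plain ".nii" style clashes.
    for suffix in sorted(suffix_map, key=len, reverse=True):
        if path.endswith(suffix):
            return suffix_map[suffix]
    return "ITKReader"  # fallback for all other formats
```

One loader entry point then covers NIfTI, PNG, and NumPy inputs, which is why the three separate transforms could be dropped.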
42 changes: 0 additions & 42 deletions docs/source/transforms.rst
@@ -231,24 +231,6 @@ IO
:members:
:special-members: __call__

`LoadNifti`
"""""""""""
.. autoclass:: LoadNifti
:members:
:special-members: __call__

`LoadPNG`
"""""""""
.. autoclass:: LoadPNG
:members:
:special-members: __call__

`LoadNumpy`
"""""""""""
.. autoclass:: LoadNumpy
:members:
:special-members: __call__

Post-processing
^^^^^^^^^^^^^^^

@@ -708,36 +690,12 @@ Intensity (Dict)
IO (Dict)
^^^^^^^^^

`LoadDatad`
"""""""""""
.. autoclass:: LoadDatad
:members:
:special-members: __call__

`LoadImaged`
""""""""""""
.. autoclass:: LoadImaged
:members:
:special-members: __call__

`LoadNiftid`
""""""""""""
.. autoclass:: LoadNiftid
:members:
:special-members: __call__

`LoadPNGd`
""""""""""
.. autoclass:: LoadPNGd
:members:
:special-members: __call__

`LoadNumpyd`
""""""""""""
.. autoclass:: LoadNumpyd
:members:
:special-members: __call__

Post-processing (Dict)
^^^^^^^^^^^^^^^^^^^^^^

9 changes: 3 additions & 6 deletions monai/apps/datasets.py
@@ -37,9 +37,7 @@ class MedNISTDataset(Randomizable, CacheDataset):
Args:
root_dir: target directory to download and load MedNIST dataset.
section: expected data section, can be: `training`, `validation` or `test`.
transform: transforms to execute operations on input data. the default transform is `LoadPNGd`,
which can load data into numpy array with [H, W] shape. for further usage, use `AddChanneld`
to convert the shape to [C, H, W, D].
transform: transforms to execute operations on input data.
download: whether to download and extract the MedNIST from resource link, default is False.
if the expected file already exists, downloading is skipped even when this is True.
users can manually copy the `MedNIST.tar.gz` file or `MedNIST` folder to the root directory.
@@ -158,8 +156,7 @@ class DecathlonDataset(Randomizable, CacheDataset):
"Task03_Liver", "Task04_Hippocampus", "Task05_Prostate", "Task06_Lung", "Task07_Pancreas",
"Task08_HepaticVessel", "Task09_Spleen", "Task10_Colon").
section: expected data section, can be: `training`, `validation` or `test`.
transform: transforms to execute operations on input data. the default transform is `LoadNiftid`,
which can load Nifti format data into numpy array with [H, W, D] or [H, W, D, C] shape.
transform: transforms to execute operations on input data.
for further usage, use `AddChanneld` or `AsChannelFirstd` to convert the shape to [C, H, W, D].
download: whether to download and extract the Decathlon from resource link, default is False.
if expected file already exists, skip downloading even set it to True.
@@ -185,7 +182,7 @@ class DecathlonDataset(Randomizable, CacheDataset):
transform = Compose(
[
LoadNiftid(keys=["image", "label"]),
LoadImaged(keys=["image", "label"]),
AddChanneld(keys=["image", "label"]),
ScaleIntensityd(keys="image"),
ToTensord(keys=["image", "label"]),
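The `Compose` shown in the docstring example chains transforms in order, so swapping `LoadNiftid` for `LoadImaged` changes only the first element. Stripped of MONAI specifics, the pattern is just sequential function application; this is a minimal stand-in sketch, not MONAI's `Compose`:

```python
# Minimal stand-in for the Compose pattern: apply transforms left to right.
def compose(transforms):
    def run(data):
        for t in transforms:
            data = t(data)
        return data
    return run

# Numeric stand-ins for a load -> add-channel -> scale style chain.
pipeline = compose([lambda x: x + 1, lambda x: x * 3])
```

Because each element only needs to be callable, the loader can be replaced without touching the rest of the chain.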
14 changes: 7 additions & 7 deletions monai/data/dataset.py
@@ -89,15 +89,15 @@ class PersistentDataset(Dataset):
.. code-block:: python
[ LoadNiftid(keys=['image', 'label']),
[ LoadImaged(keys=['image', 'label']),
Orientationd(keys=['image', 'label'], axcodes='RAS'),
ScaleIntensityRanged(keys=['image'], a_min=-57, a_max=164, b_min=0.0, b_max=1.0, clip=True),
RandCropByPosNegLabeld(keys=['image', 'label'], label_key='label', spatial_size=(96, 96, 96),
pos=1, neg=1, num_samples=4, image_key='image', image_threshold=0),
ToTensord(keys=['image', 'label'])]
Upon first use a filename based dataset will be processed by the transform for the
[LoadNiftid, Orientationd, ScaleIntensityRanged] and the resulting tensor written to
[LoadImaged, Orientationd, ScaleIntensityRanged] and the resulting tensor written to
the `cache_dir` before applying the remaining random dependent transforms
[RandCropByPosNegLabeld, ToTensord] elements for use in the analysis.
@@ -446,7 +446,7 @@ class CacheDataset(Dataset):
For example, if the transform is a `Compose` of::
transforms = Compose([
LoadNiftid(),
LoadImaged(),
AddChanneld(),
Spacingd(),
Orientationd(),
@@ -457,7 +457,7 @@
when `transforms` is used in a multi-epoch training pipeline, before the first training epoch,
this dataset will cache the results up to ``ScaleIntensityRanged``, as
all non-random transforms `LoadNiftid`, `AddChanneld`, `Spacingd`, `Orientationd`, `ScaleIntensityRanged`
all non-random transforms `LoadImaged`, `AddChanneld`, `Spacingd`, `Orientationd`, `ScaleIntensityRanged`
can be cached. During training, the dataset will load the cached results and run
``RandCropByPosNegLabeld`` and ``ToTensord``, as ``RandCropByPosNegLabeld`` is a randomized transform
and the outcome not cached.
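The caching behaviour described above, i.e., precompute the leading deterministic transforms once and re-run only the random tail on each access, can be sketched in plain Python. This is an illustration of the idea only; MONAI actually identifies the random tail via its `Randomizable` base class rather than via separate lists:

```python
import random

# Deterministic prefix: computed once and cached (stand-ins for
# LoadImaged / AddChanneld / Spacingd / ... in the docstring example).
deterministic = [lambda x: x + 1, lambda x: x * 2]
# Random tail: re-applied on every access (stand-in for RandCropByPosNegLabeld).
randomized = [lambda x: x + random.random()]

def build_cache(data, transforms):
    """Run the deterministic transforms over all items once, up front."""
    out = []
    for item in data:
        for t in transforms:
            item = t(item)
        out.append(item)
    return out

cache = build_cache([0, 1, 2], deterministic)

def fetch(index):
    """Per-access path: start from the cached value, apply only the random tail."""
    item = cache[index]
    for t in randomized:
        item = t(item)
    return item
```

This is why moving the deterministic transforms to the front of a `Compose` maximizes how much work the cache can absorb.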
@@ -825,7 +825,7 @@ class ArrayDataset(Randomizable, _TorchDataset):
img_transform = Compose(
[
LoadNifti(image_only=True),
LoadImage(image_only=True),
AddChannel(),
RandAdjustContrast()
]
@@ -834,7 +834,7 @@ class ArrayDataset(Randomizable, _TorchDataset):
If training is based on images and the metadata, the array transforms can not be composed,
because several transforms receive multiple parameters or return multiple values. Users then need
to define their own callable method to parse metadata from `LoadNifti` or set `affine` matrix
to define their own callable method to parse metadata from `LoadImage` or set `affine` matrix
to `Spacing` transform::
class TestCompose(Compose):
@@ -845,7 +845,7 @@ def __call__(self, input_):
return self.transforms[3](img), metadata
img_transform = TestCompose(
[
LoadNifti(image_only=False),
LoadImage(image_only=False),
AddChannel(),
Spacing(pixdim=(1.5, 1.5, 3.0)),
RandAdjustContrast()
11 changes: 7 additions & 4 deletions monai/data/nifti_reader.py
@@ -14,7 +14,7 @@
import numpy as np
from torch.utils.data import Dataset

from monai.transforms import LoadNifti, Randomizable, apply_transform
from monai.transforms import LoadImage, Randomizable, apply_transform
from monai.utils import MAX_SEED, get_seed


@@ -81,16 +81,19 @@ def randomize(self, data: Optional[Any] = None) -> None:
def __getitem__(self, index: int):
self.randomize()
meta_data = None
img_loader = LoadNifti(
as_closest_canonical=self.as_closest_canonical, image_only=self.image_only, dtype=self.dtype
img_loader = LoadImage(
reader="NibabelReader",
image_only=self.image_only,
dtype=self.dtype,
as_closest_canonical=self.as_closest_canonical,
)
if self.image_only:
img = img_loader(self.image_files[index])
else:
img, meta_data = img_loader(self.image_files[index])
seg = None
if self.seg_files is not None:
seg_loader = LoadNifti(image_only=True)
seg_loader = LoadImage(image_only=True)
seg = seg_loader(self.seg_files[index])
label = None
if self.labels is not None:
18 changes: 2 additions & 16 deletions monai/transforms/__init__.py
@@ -137,22 +137,8 @@
ThresholdIntensityD,
ThresholdIntensityDict,
)
from .io.array import LoadImage, LoadNifti, LoadNumpy, LoadPNG
from .io.dictionary import (
LoadDatad,
LoadImaged,
LoadImageD,
LoadImageDict,
LoadNiftid,
LoadNiftiD,
LoadNiftiDict,
LoadNumpyd,
LoadNumpyD,
LoadNumpyDict,
LoadPNGd,
LoadPNGD,
LoadPNGDict,
)
from .io.array import LoadImage
from .io.dictionary import LoadImaged, LoadImageD, LoadImageDict
from .post.array import (
Activations,
AsDiscrete,
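For downstream code, the transforms removed from `monai/transforms/__init__.py` map onto the consolidated API roughly as follows. This is an assumed migration table inferred from this commit, not an official guide; note the removed `LoadDatad` base class has no direct drop-in alias, and `LoadImaged` covers the dictionary case:

```python
# Assumed one-to-one replacements for the IO transforms dropped here.
MIGRATION = {
    "LoadNifti": "LoadImage",
    "LoadPNG": "LoadImage",
    "LoadNumpy": "LoadImage",
    "LoadNiftid": "LoadImaged",
    "LoadPNGd": "LoadImaged",
    "LoadNumpyd": "LoadImaged",
}
```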
4 changes: 2 additions & 2 deletions monai/transforms/compose.py
@@ -55,7 +55,7 @@ def __call__(self, data: Any):
- ``data`` is a Numpy ndarray, PyTorch Tensor or string
- the data shape can be:
#. string data without shape, `LoadNifti` and `LoadPNG` transforms expect file paths
#. string data without shape, `LoadImage` transform expects file paths
#. most of the pre-processing transforms expect: ``(num_channels, spatial_dim_1[, spatial_dim_2, ...])``,
except that `AddChannel` expects (spatial_dim_1[, spatial_dim_2, ...]) and
`AsChannelFirst` expects (spatial_dim_1[, spatial_dim_2, ...], num_channels)
@@ -282,7 +282,7 @@ def __call__(self, data):
- ``data[key]`` is a Numpy ndarray, PyTorch Tensor or string, where ``key`` is an element
of ``self.keys``, the data shape can be:
#. string data without shape, `LoadNiftid` and `LoadPNGd` transforms expect file paths
#. string data without shape, `LoadImaged` transform expects file paths
#. most of the pre-processing transforms expect: ``(num_channels, spatial_dim_1[, spatial_dim_2, ...])``,
except that `AddChanneld` expects (spatial_dim_1[, spatial_dim_2, ...]) and
`AsChannelFirstd` expects (spatial_dim_1[, spatial_dim_2, ...], num_channels)
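The dictionary variant described above, `LoadImaged`, applies the loader only to the entries of `data` named in `self.keys`. A minimal MONAI-free sketch of that dictionary-transform (`MapTransform`) pattern, with illustrative names only:

```python
# Apply a callable to selected keys of a data dict, leaving the rest intact --
# a plain-Python stand-in for MONAI's dictionary (MapTransform) pattern.
def apply_to_keys(data: dict, keys, fn):
    out = dict(data)
    for k in keys:
        out[k] = fn(out[k])
    return out

sample = {"image": "img.nii.gz", "label": "seg.nii.gz", "id": 7}
# str.upper stands in here for an actual file-loading function.
loaded = apply_to_keys(sample, ["image", "label"], str.upper)
```

Non-selected entries such as `"id"` pass through untouched, which is what lets mixed dictionaries of paths, labels, and metadata flow through one pipeline.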