From 0155ac6b6f9be75a98e1d7400c6da912c1d694ea Mon Sep 17 00:00:00 2001
From: Nic Ma
Date: Fri, 24 Sep 2021 12:55:30 +0800
Subject: [PATCH 1/5] [DLMED] add more doc

Signed-off-by: Nic Ma
---
 docs/source/whatsnew_0_7.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/source/whatsnew_0_7.md b/docs/source/whatsnew_0_7.md
index 5a0a82130d..8ed2881565 100644
--- a/docs/source/whatsnew_0_7.md
+++ b/docs/source/whatsnew_0_7.md
@@ -60,3 +60,5 @@ users have the flexibility to alter the architecture by varying the number of
 vision, language and mixed-modality layers and customizing the classification
 head. In addition, the model can be initialized from pre-trained BERT language
 models for fine-tuning.
+
+This release also adds support for the Vision Transformer (ViT) model for both 2D and 3D classification and segmentation (as an encoder), and adds 2D support to the existing 3D UNETR network.

From 97041d0882146da30940110d24c5eebb1acbe20d Mon Sep 17 00:00:00 2001
From: Nic Ma
Date: Fri, 24 Sep 2021 14:14:50 +0800
Subject: [PATCH 2/5] [DLMED] edit change log

Signed-off-by: Nic Ma
---
 CHANGELOG.md                | 2 +-
 docs/source/whatsnew_0_7.md | 2 --
 2 files changed, 1 insertion(+), 3 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index db2a29aeed..7dea15cd0a 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -32,7 +32,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
 * Deprecated input argument `dimensions` and `ndims`, in favor of `spatial_dims`
 * Updated the Sphinx-based documentation theme for better readability
 * `NdarrayTensor` type is replaced by `NdarrayOrTensor` for simpler annotations
-* Attention-based network blocks now support both 2D and 3D inputs
+* Self-attention-based network blocks now support both 2D and 3D inputs
 
 ### Removed
 * The deprecated `TransformInverter`, in favor of `monai.transforms.InvertD`

diff --git a/docs/source/whatsnew_0_7.md b/docs/source/whatsnew_0_7.md
index 8ed2881565..5a0a82130d 100644
--- a/docs/source/whatsnew_0_7.md
+++ b/docs/source/whatsnew_0_7.md
@@ -60,5 +60,3 @@ users have the flexibility to alter the architecture by varying the number of
 vision, language and mixed-modality layers and customizing the classification
 head. In addition, the model can be initialized from pre-trained BERT language
 models for fine-tuning.
-
-This release also adds support for the Vision Transformer (ViT) model for both 2D and 3D classification and segmentation (as an encoder), and adds 2D support to the existing 3D UNETR network.

From 2367ed67b97ad98481625684fca0810ccb52c1c6 Mon Sep 17 00:00:00 2001
From: Nic Ma
Date: Fri, 24 Sep 2021 14:21:21 +0800
Subject: [PATCH 3/5] [DLMED] add command to get transform backend

Signed-off-by: Nic Ma
---
 docs/source/highlights.md   | 2 +-
 docs/source/whatsnew_0_7.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/highlights.md b/docs/source/highlights.md
index b84e93ff2d..a430113886 100644
--- a/docs/source/highlights.md
+++ b/docs/source/highlights.md
@@ -58,7 +58,7 @@ transformations. These currently include, for example:
 
 ### 3. Transforms support both NumPy array and PyTorch Tensor (CPU or GPU accelerated)
 
-From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms, many transforms already support both `numpy array` and `Tensor` data.
+From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms; many transforms already support both `numpy array` and `Tensor` data. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.
 
 To accelerate the transforms, a common approach is to leverage GPU parallel-computation. Users can first convert input data into GPU Tensor by `ToTensor` or `EnsureType` transform, then the following transforms can execute on GPU based on PyTorch `Tensor` APIs.
 GPU transform tutorial is available at [Spleen fast training tutorial](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_training_tutorial.ipynb).

diff --git a/docs/source/whatsnew_0_7.md b/docs/source/whatsnew_0_7.md
index 5a0a82130d..010a62824b 100644
--- a/docs/source/whatsnew_0_7.md
+++ b/docs/source/whatsnew_0_7.md
@@ -29,7 +29,7 @@ more](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_t
 
 MONAI starts to roll out major usability enhancements for the
 `monai.transforms` module. Many transforms are now supporting both NumPy and
-PyTorch, as input types and computational backends.
+PyTorch, as input types and computational backends. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.
 
 One benefit of these enhancements is that the users can now better leverage
 the GPUs for preprocessing. By transferring the input data onto GPU using

From f3a8c1f94fde760eed15912afcd796018707a6b7 Mon Sep 17 00:00:00 2001
From: Nic Ma
Date: Fri, 24 Sep 2021 14:32:06 +0800
Subject: [PATCH 4/5] [DLMED] enhance the doc

Signed-off-by: Nic Ma
---
 docs/source/highlights.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/highlights.md b/docs/source/highlights.md
index a430113886..e1443169a7 100644
--- a/docs/source/highlights.md
+++ b/docs/source/highlights.md
@@ -58,7 +58,7 @@ transformations. These currently include, for example:
 
 ### 3. Transforms support both NumPy array and PyTorch Tensor (CPU or GPU accelerated)
 
-From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms; many transforms already support both `numpy array` and `Tensor` data. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.
+From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms; many transforms already support both `numpy array` and `Tensor` as input types and computational backends. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.
 
 To accelerate the transforms, a common approach is to leverage GPU parallel-computation. Users can first convert input data into GPU Tensor by `ToTensor` or `EnsureType` transform, then the following transforms can execute on GPU based on PyTorch `Tensor` APIs.
 GPU transform tutorial is available at [Spleen fast training tutorial](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_training_tutorial.ipynb).

From aa00ed8486a6e4ddb0e412c81553e792bdb85208 Mon Sep 17 00:00:00 2001
From: Nic Ma
Date: Fri, 24 Sep 2021 14:43:10 +0800
Subject: [PATCH 5/5] [DLMED] fix typo

Signed-off-by: Nic Ma
---
 docs/source/highlights.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/highlights.md b/docs/source/highlights.md
index e1443169a7..44ab949a95 100644
--- a/docs/source/highlights.md
+++ b/docs/source/highlights.md
@@ -58,7 +58,7 @@ transformations. These currently include, for example:
 
 ### 3. Transforms support both NumPy array and PyTorch Tensor (CPU or GPU accelerated)
 
-From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms; many transforms already support both `numpy array` and `Tensor` as input types and computational backends. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.
+From MONAI v0.7 we introduced PyTorch `Tensor` based computation in transforms; many transforms already support both `NumPy array` and `Tensor` as input types and computational backends. To get the supported backends of every transform, please execute: `python monai/transforms/utils.py`.
 
 To accelerate the transforms, a common approach is to leverage GPU parallel-computation. Users can first convert input data into GPU Tensor by `ToTensor` or `EnsureType` transform, then the following transforms can execute on GPU based on PyTorch `Tensor` APIs.
 GPU transform tutorial is available at [Spleen fast training tutorial](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_training_tutorial.ipynb).
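The `python monai/transforms/utils.py` command referenced in patches 3-5 prints which computational backends (NumPy, PyTorch) each transform supports. The gist of such a report can be sketched standalone; the transform names and backend sets below are illustrative placeholders, not MONAI's actual transform registry:

```python
# Standalone sketch of a transform-backend report, assuming a simple mapping
# from transform name to the set of backends it supports.
# NOTE: SAMPLE_BACKENDS is hypothetical sample data, not MONAI's real registry.
SAMPLE_BACKENDS = {
    "ToTensor": {"numpy", "torch"},
    "EnsureType": {"numpy", "torch"},
    "LoadImage": {"numpy"},
}


def summarize_backends(backends):
    """Group transform names by whether they support both backends or only one."""
    both = sorted(n for n, libs in backends.items() if {"numpy", "torch"} <= libs)
    numpy_only = sorted(n for n, libs in backends.items() if libs == {"numpy"})
    torch_only = sorted(n for n, libs in backends.items() if libs == {"torch"})
    return both, numpy_only, torch_only


both, np_only, t_only = summarize_backends(SAMPLE_BACKENDS)
print(f"support both NumPy and PyTorch: {both}")
print(f"NumPy-only: {np_only}")
print(f"PyTorch-only: {t_only}")
```

Transforms that land in the "both" group are the ones that can run directly on GPU tensors, which is what makes the `ToTensor`/`EnsureType`-then-GPU pipeline described above possible.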