diff --git a/3d_segmentation/challenge_baseline/README.md b/3d_segmentation/challenge_baseline/README.md
index fbef7c4a6b..3822197724 100644
--- a/3d_segmentation/challenge_baseline/README.md
+++ b/3d_segmentation/challenge_baseline/README.md
@@ -64,7 +64,7 @@ During training, the top three models will be selected based on the per-epoch va
 
 The training uses convenient file loading modules and a few intensity and spatial random augmentations using [MONAI](https://github.com/Project-MONAI/MONAI):
 
-- `LoadImaged`, `AddChanneld`, `Orientationd`, `Spacingd`, `ScaleIntensityRanged`
+- `LoadImaged`, `Orientationd`, `Spacingd`, `ScaleIntensityRanged`
 
 Load the image data into the LPS orientation (Left to right, Posterior to anterior, Superior to inferior), with a resolution of 1.25mm x 1.25mm x 5.00mm, and intensity between [-1000.0, 500.0] scaled to [0.0, 1.0].
 
diff --git a/3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb b/3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb
index c3963ef59c..ae2bb39fb0 100644
--- a/3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb
+++ b/3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb
@@ -479,7 +479,7 @@
 "torch.backends.cudnn.benchmark = True\n",
 "dice_loss = DiceLoss(to_onehot_y=False, sigmoid=True)\n",
 "post_sigmoid = Activations(sigmoid=True)\n",
-"post_pred = AsDiscrete(argmax=False, logit_thresh=0.5)\n",
+"post_pred = AsDiscrete(argmax=False, threshold=0.5)\n",
 "dice_acc = DiceMetric(\n",
 "    include_background=True, reduction=MetricReduction.MEAN_BATCH, get_not_nans=True\n",
 ")\n",
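For context, here is a minimal sketch (not the repository's exact code) of how the updated transform list and the renamed `AsDiscrete` argument fit together on a recent MONAI release. The LPS orientation, 1.25mm x 1.25mm x 5.00mm spacing, and [-1000.0, 500.0] -> [0.0, 1.0] intensity scaling come from the README text above; the dictionary keys, `clip=True`, and the use of `EnsureChannelFirstd` in place of the removed `AddChanneld` are assumptions for illustration.

```python
# Sketch only: assumes MONAI >= 1.0, where AddChanneld is removed and
# AsDiscrete takes `threshold` instead of `logit_thresh`.
from monai.transforms import (
    Activations,
    AsDiscrete,
    Compose,
    EnsureChannelFirstd,  # assumed stand-in for the removed AddChanneld
    LoadImaged,
    Orientationd,
    ScaleIntensityRanged,
    Spacingd,
)

keys = ["image", "label"]  # hypothetical dictionary keys

preprocess = Compose(
    [
        LoadImaged(keys=keys),
        EnsureChannelFirstd(keys=keys),
        Orientationd(keys=keys, axcodes="LPS"),  # LPS orientation per the README
        Spacingd(keys=keys, pixdim=(1.25, 1.25, 5.0), mode=("bilinear", "nearest")),
        ScaleIntensityRanged(
            keys="image", a_min=-1000.0, a_max=500.0, b_min=0.0, b_max=1.0, clip=True
        ),
    ]
)

# Post-processing matching the notebook change: `threshold` replaces the
# deprecated `logit_thresh` keyword.
post_sigmoid = Activations(sigmoid=True)
post_pred = AsDiscrete(argmax=False, threshold=0.5)
```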