From 81ba9cf9edfbcd99debae29b03087a2114b6a32d Mon Sep 17 00:00:00 2001
From: Prabhat Roy
Date: Thu, 7 Oct 2021 23:54:16 +0100
Subject: [PATCH] Updated classification README to refer to torch.cuda.amp

---
 references/classification/README.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/references/classification/README.md b/references/classification/README.md
index cc328f0f259..bae563c31c5 100644
--- a/references/classification/README.md
+++ b/references/classification/README.md
@@ -110,13 +110,13 @@ torchrun --nproc_per_node=8 train.py\
 Here `$MODEL` is one of `regnet_x_32gf`, `regnet_y_16gf` and `regnet_y_32gf`.
 
 ## Mixed precision training
-Automatic Mixed Precision (AMP) training on GPU for Pytorch can be enabled with the [NVIDIA Apex extension](https://github.com/NVIDIA/apex).
+Automatic Mixed Precision (AMP) training on GPU for Pytorch can be enabled with the [torch.cuda.amp](https://pytorch.org/docs/stable/amp.html?highlight=amp#module-torch.cuda.amp).
 
-Mixed precision training makes use of both FP32 and FP16 precisions where appropriate. FP16 operations can leverage the Tensor cores on NVIDIA GPUs (Volta, Turing or newer architectures) for improved throughput, generally without loss in model accuracy. Mixed precision training also often allows larger batch sizes. GPU automatic mixed precision training for Pytorch Vision can be enabled via the flag value `--apex=True`.
+Mixed precision training makes use of both FP32 and FP16 precisions where appropriate. FP16 operations can leverage the Tensor cores on NVIDIA GPUs (Volta, Turing or newer architectures) for improved throughput, generally without loss in model accuracy. Mixed precision training also often allows larger batch sizes. GPU automatic mixed precision training for Pytorch Vision can be enabled via the flag value `--amp=True`.
 
 ```
 torchrun --nproc_per_node=8 train.py\
-    --model resnext50_32x4d --epochs 100 --apex
+    --model resnext50_32x4d --epochs 100 --amp
 ```
 
 ## Quantized
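
For context, the `--amp` flag that this patch documents corresponds to the standard `torch.cuda.amp` recipe (autocast for the forward pass plus a `GradScaler` for the backward pass). The snippet below is a minimal sketch of that recipe using a toy model and toy data; it is not taken from `references/classification/train.py`, and all object names (`model`, `loader`, etc.) are placeholders.

```python
# Minimal sketch of mixed precision training with torch.cuda.amp (PyTorch >= 1.6).
# The model, optimizer, and data below are toy placeholders, not the objects
# built by references/classification/train.py.
import torch
from torch.cuda.amp import GradScaler, autocast

model = torch.nn.Linear(512, 10).cuda()                     # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.CrossEntropyLoss()
scaler = GradScaler()                                        # scales the loss to avoid FP16 gradient underflow

# Toy data standing in for a real DataLoader.
loader = [(torch.randn(8, 512), torch.randint(0, 10, (8,))) for _ in range(4)]

for images, targets in loader:
    images, targets = images.cuda(), targets.cuda()
    optimizer.zero_grad()
    with autocast():                                         # run the forward pass in mixed precision
        output = model(images)
        loss = criterion(output, targets)
    scaler.scale(loss).backward()                            # backward on the scaled loss
    scaler.step(optimizer)                                   # unscales gradients, then steps the optimizer
    scaler.update()                                          # adjusts the scale factor for the next iteration
```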