NativeMixedPrecision, which uses torch.cuda.amp to handle mixed precision training, has been available in fastai since v2's release. MixedPrecision, on the other hand, previously used a mix of fastai's own code and code from NVIDIA and PyTorch.
We've now improved and carefully tested NativeMixedPrecision, and believe it is now generally faster and more reliable than the fastai version. We've also checked that it works well when combined with other callbacks (including gradient clipping, gradient accumulation, and distributed training).
Therefore, now when you use the MixedPrecision callback, you're actually getting the native version. To get the fastai version, you should instead use NonNativeMixedPrecision.
Similarly, if you use the Learner.to_fp16 method, you'll get the native version of the callback. Use Learner.to_non_native_fp16 for the fastai version.
If you find any issues where the old callback works better than the new one, please tell us (with a reproducible example) so we can fix it!