As it turns out, PyTorch is actually slower at FP16 inference than at FP32 (I suspect it converts everything back to FP32 internally before processing).
Using TorchScript (JIT) gives a slight performance improvement, and I believe it is the recommended way to run inference in PyTorch. It would be useful to include JIT results as a comparison for both FP16 and FP32.
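A minimal sketch of the comparison being suggested, using a hypothetical toy model (not the benchmark from this repo): time an FP32 eager forward pass, the same model traced with TorchScript, and an FP16 copy. FP16 support varies by backend and PyTorch release, so the half-precision run is guarded rather than assumed to work.

```python
import copy
import time

import torch

# Hypothetical toy model standing in for the actual model under test.
model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

x = torch.randn(32, 256)


def bench(m, inp, iters=50, warmup=5):
    """Average forward-pass latency in seconds, after a warm-up."""
    with torch.no_grad():
        for _ in range(warmup):
            m(inp)
        start = time.perf_counter()
        for _ in range(iters):
            m(inp)
        return (time.perf_counter() - start) / iters


fp32_t = bench(model, x)

# TorchScript via tracing; torch.jit.script would also work for this model.
jit_model = torch.jit.trace(model, x)
jit_t = bench(jit_model, x)

print(f"FP32 eager: {fp32_t * 1e6:8.1f} us/batch")
print(f"FP32 JIT:   {jit_t * 1e6:8.1f} us/batch")

# FP16 is not supported for every op on every backend (notably older
# CPU builds), so guard the half-precision run instead of assuming it.
try:
    half_model = copy.deepcopy(model).half()
    half_t = bench(half_model, x.half())
    print(f"FP16 eager: {half_t * 1e6:8.1f} us/batch")
except RuntimeError as err:
    print(f"FP16 unsupported on this backend: {err}")
```

On CPU this typically reproduces the observation above: JIT shaves a little off FP32 eager latency, while FP16 is no faster (or fails outright on builds without half-precision CPU kernels).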