
Show PyTorch JIT in Performance Benchmarks #99

Closed
xsacha opened this issue Oct 5, 2019 · 0 comments

Comments

xsacha commented Oct 5, 2019

As it turns out, PyTorch is actually slower when running FP16 compared to FP32 (I suspect it converts everything back to FP32 internally to do the processing).
Using JIT gives a slight performance improvement, and I believe it is the recommended way to run inference in PyTorch. It would be useful to show the JIT as a comparison point for both FP16 and FP32.
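A minimal sketch of the comparison being requested, with an assumed toy model and input shapes (not taken from the project's actual benchmark suite): time eager-mode inference against a TorchScript (JIT-traced) version of the same model. On a GPU you would additionally benchmark `.half()` variants of both to reproduce the FP16-vs-FP32 comparison.

```python
import time
import torch

def benchmark(fn, x, iters=50):
    """Average latency per forward pass, after a short warm-up."""
    for _ in range(5):
        fn(x)  # warm-up (also triggers JIT optimization passes)
    start = time.perf_counter()
    for _ in range(iters):
        fn(x)
    return (time.perf_counter() - start) / iters

# Hypothetical small model; real benchmarks would use the project's models.
model = torch.nn.Sequential(
    torch.nn.Linear(256, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
).eval()

x = torch.randn(32, 256)

with torch.no_grad():
    traced = torch.jit.trace(model, x)  # JIT via tracing
    t_eager = benchmark(model, x)
    t_jit = benchmark(traced, x)

print(f"eager: {t_eager * 1e3:.3f} ms/iter, jit: {t_jit * 1e3:.3f} ms/iter")
```

Tracing records the operations executed for the example input, so the traced module must be fed inputs of a compatible shape; `torch.jit.script` is the alternative when the model has data-dependent control flow.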

@jaybdub jaybdub closed this as completed Jul 18, 2022