torch.autograd.grad slow on tpu #2702
Comments
Are you using nightly?
@taylanbil no, I'm not. I'm installing using the stable release. I'm going to test now using nightly.
Using the nightly, things are slower, about 62 seconds. Looking at the metrics report, I see several `aten::` fallback counters. To request those calls be lowered to XLA, do I need to file a separate issue?
Urgh, you can ignore the conj and view_as_real ops; those are tracked in #2688. But yeah, it looks like the upsample ops need to be lowered. You can close this issue and open a separate one about those lowerings.
I am using `grad` for calculating PL lengths for StyleGAN, but besides checking tensor shapes before calling `grad`, I'm not sure if I am doing something wrong.

On a GPU, my `grad` function takes about 0.02 seconds on calls after the first.
Colab (calc_pl_lengths section): https://colab.research.google.com/drive/1Pg-kKt6qhXz39PjiHDHTRhq5ViCcttra?usp=sharing

Whereas on a TPU, it takes about 9 seconds on calls after the first.
Colab (calc_pl_lengths section): https://colab.research.google.com/drive/1MEyQ2KMDn1IjxJ2FLHcEgmySBlWNuR5q?usp=sharing
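For context, a minimal sketch of what a `calc_pl_lengths`-style call looks like. The function body and noise scaling here are assumptions based on the StyleGAN2 path-length regularizer, not code taken from the linked colabs:

```python
import torch

def calc_pl_lengths(styles, images):
    # Assumed StyleGAN2-style path-length computation: perturb the
    # generator output with noise scaled by image size, then take the
    # gradient of that scalar w.r.t. the style codes.
    num_pixels = images.shape[2] * images.shape[3]
    noise = torch.randn_like(images) / (num_pixels ** 0.5)
    (grad,) = torch.autograd.grad(
        outputs=(images * noise).sum(),
        inputs=styles,
        create_graph=True,  # so the PL penalty itself stays differentiable
    )
    return (grad ** 2).sum(dim=1).mean().sqrt()

# Toy usage: images must depend on styles for a gradient path to exist.
styles = torch.randn(4, 16, requires_grad=True)
images = styles.view(4, 1, 4, 4).repeat(1, 3, 1, 1)  # fake 3-channel output
pl = calc_pl_lengths(styles, images)
```

On XLA devices, a call like this is traced lazily and compiled on first execution for each new combination of tensor shapes, so keeping the shapes passed to `grad` constant across steps avoids repeated recompilation.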