remove unnecessary import introduced in PR 106535 #107440
Conversation
[ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/107440
Note: Links to docs will display an error until the docs builds have been completed.
✅ No Failures
As of commit 88af451 with merge base 884c03d.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Wondering why this was not caught by a lint rule (mypy, for example).
Can you add the `randn` too (either in this PR or in follow-up PRs)?
Will do in another PR.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
…ytorch#105021)

I found that the upsample bicubic lowering was generating this line

```python
ops.index_expr(0.244094488188976*x0, torch.float32)
```

which is not good, because triton's `ops.index_expr` expects integer expressions and dtypes.

Pull Request resolved: pytorch#105021
Approved by: https://github.com/lezcano

[Compiled Autograd] Improve nyi error messages (pytorch#106176)

Pull Request resolved: pytorch#106176
Approved by: https://github.com/eellison

benchmark: convert output of fp64 to torch.float64 (pytorch#107375)

This PR converts the fp64 reference output to torch.float64 before checking for accuracy (a hedged sketch of this kind of cast follows after the commit list).

Why do we need this change? For llama in torchbench, the model converts its output to float before returning it:
https://github.com/pytorch/benchmark/blob/bad4e9ac19852f320c0d21e97f526e0c2838633e/torchbenchmark/models/llama/model.py#L241

However, the correctness checker will not compare res against fp64_ref unless fp64_ref.dtype is torch.float64, so llama fails the accuracy check in the low-precision case even though res is closer to fp64_ref than ref is:
https://github.com/pytorch/pytorch/blob/e108f33299e4ea8fd39a1a81cf5ba6f3b509b6cb/torch/_dynamo/utils.py#L1025

Pull Request resolved: pytorch#107375
Approved by: https://github.com/jgong5, https://github.com/XiaobingSuper, https://github.com/jansel

benchmark: higher tolerance for RobertaForQuestionAnswering (pytorch#107376)

Pull Request resolved: pytorch#107376
Approved by: https://github.com/kit1980, https://github.com/XiaobingSuper, https://github.com/jansel
ghstack dependencies: pytorch#107375

remove unnecessary import introduced in PR 106535 (pytorch#107440)

Pull Request resolved: pytorch#107440
Approved by: https://github.com/fduwjj
ghstack dependencies: pytorch#106535

Don't use thrust::log(complex) in CUDA as it takes FOREVER to compile (pytorch#107559)

As per title (an illustration of the underlying identity follows after the commit list).

Pull Request resolved: pytorch#107559
Approved by: https://github.com/peterbell10
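A minimal sketch of the kind of pre-comparison cast described in pytorch#107375. The helper name `cast_to_fp64` and its recursive traversal are assumptions made for illustration; only the idea of casting floating-point outputs to torch.float64 so the checker sees a true double-precision reference comes from the commit message above.

```python
import torch

def cast_to_fp64(out):
    """Hypothetical helper: recursively cast floating-point tensors in a model
    output (tensor, list/tuple, or dict) to torch.float64 so that an accuracy
    checker treats it as a genuine double-precision reference."""
    if isinstance(out, torch.Tensor):
        return out.to(torch.float64) if out.is_floating_point() else out
    if isinstance(out, (list, tuple)):
        return type(out)(cast_to_fp64(x) for x in out)
    if isinstance(out, dict):
        return {k: cast_to_fp64(v) for k, v in out.items()}
    return out

# Example: a model (like llama in torchbench) that downcasts to float32 before
# returning, even when it is being run as the float64 reference.
ref_out = torch.randn(2, 3, dtype=torch.float64).float()
assert cast_to_fp64(ref_out).dtype == torch.float64
```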
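For the thrust::log(complex) change (pytorch#107559), the identity a replacement can rely on is log(z) = log(|z|) + i·arg(z). The snippet below only verifies that identity numerically from Python; it is not the CUDA kernel change itself.

```python
import torch

# Check log(z) == log(|z|) + i * arg(z) on a few complex values.
z = torch.randn(8, dtype=torch.complex64)
manual = torch.complex(torch.log(torch.abs(z)), torch.angle(z))
torch.testing.assert_close(manual, torch.log(z))
```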
Stack from ghstack (oldest at bottom):