[numpy] torch.exp: promote integer inputs to float #50093
Conversation
💊 CI failures summary and remediations — as of commit 9134f8c (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚
(This comment was automatically generated by Dr. CI.)
Reason for skipping BFloat16: in bfloat16, torch.exp produces a result that differs noticeably from the float32 result for the same input, so the comparison against the float32 reference fails:
>>> torch.tensor(8.8750, device='cuda:0', dtype=torch.bfloat16)
tensor(8.8750, device='cuda:0', dtype=torch.bfloat16)
>>> t = torch.tensor(8.8750, device='cuda:0', dtype=torch.bfloat16)
>>> torch.exp(t)
tensor(7136., device='cuda:0', dtype=torch.bfloat16)
>>> torch.exp(t.to(torch.float32))
tensor(7150.9463, device='cuda:0')
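To quantify the gap, a minimal sketch (bfloat16 support for exp can vary by device and build; float64 is used here as an assumed "exact" reference, mirroring the numbers above):

import torch

# Same input as above; float64 serves as the reference value.
t = torch.tensor(8.8750, dtype=torch.float64)
exact = torch.exp(t).item()

bf16 = torch.exp(t.to(torch.bfloat16)).to(torch.float64).item()
f32 = torch.exp(t.to(torch.float32)).to(torch.float64).item()

print(f"bfloat16: {bf16:.4f}  rel. error {abs(bf16 - exact) / exact:.1e}")
print(f"float32:  {f32:.4f}  rel. error {abs(f32 - exact) / exact:.1e}")
# bfloat16 keeps only an 8-bit significand, so a relative error on the
# order of 1e-3 is expected here -- large enough to fail a reference
# comparison at float32-level tolerances, hence the skip.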
Codecov Report
@@ Coverage Diff @@
## master #50093 +/- ##
==========================================
- Coverage 80.67% 80.66% -0.02%
==========================================
Files 1899 1899
Lines 206066 206066
==========================================
- Hits 166241 166219 -22
- Misses 39825 39847 +22
Awesome! Thanks @kshitij12345!
I don't want to interrupt your flow because the progress on these PRs has been great, but would you take a look at creating an OpInfo for torch.tile?
We need to be very careful about its performance. See #49962. A recent PR reduced the size of the tensors generated by its method_test entries to reduce the time those tests took. We should probably respect those sizes.
While creating the OpInfo for tile, would you also review the following:
- Was torch.tile implemented correctly? That is, is it really like np.tile? Note that torch.tile is not a unary ufunc, so while we can create an OpInfo for jit and autograd testing, it will still need its own forward tests to validate its behavior. These tests exist, but are they complete?
- Can torch.repeat be implemented as a call to torch.tile? I understand that torch.tile is actually implemented as a call to repeat currently, but from a UX standpoint, could we alias torch.repeat to torch.tile? It's true that torch.tile can accept more inputs than torch.repeat, but will every valid input to torch.repeat produce the same output when given to torch.tile? (A quick equivalence check is sketched after this list.)
I'm especially interested in this because of #50013. I'm not suggesting that we proceed with the deprecation in that issue, but I'd like more data about torch.tile and torch.repeat to make an informed decision.
Bonus points if the PR also creates an OpInfo for torch.repeat and/or merges their "forward" tests together.
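As a starting point for that comparison, here is a minimal sketch of an equivalence check between Tensor.repeat and torch.tile; the sample shapes and repeat counts are arbitrary assumptions, not the method_test sizes mentioned above:

import itertools
import math
import torch

# Compare Tensor.repeat and torch.tile on a few small inputs.
# torch.tile also accepts dims shorter than t.dim(), which Tensor.repeat
# rejects, so only repeat-valid dims (len(dims) >= t.dim()) are tried here.
for shape in [(3,), (2, 3), (2, 1, 3)]:
    t = torch.arange(math.prod(shape)).reshape(shape)
    for dims in itertools.product((1, 2), repeat=len(shape)):
        assert torch.equal(t.repeat(*dims), torch.tile(t, dims)), (shape, dims)
print("Tensor.repeat and torch.tile agree on all sampled inputs")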
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Sure. Thanks for the pointers!
Summary:
Reference: pytorch#42515
Pull Request resolved: pytorch#50093
Reviewed By: H-Huang
Differential Revision: D25803549
Pulled By: mruberry
fbshipit-source-id: e6f245b5e728f2dca6072f8c359f03dff63aa14d
Reference: #42515
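For illustration, a sketch of the behavior the title describes (assuming torch's default dtype is float32; the "previously unsupported" comment reflects the integer-promotion work tracked in #42515):

import torch

# With integer-to-float promotion, torch.exp accepts integer inputs
# (previously unsupported) and computes in the default float dtype.
i = torch.arange(4)                  # int64 input
out = torch.exp(i)
print(out.dtype)                     # torch.float32 (the default dtype)
assert torch.allclose(out, torch.exp(i.to(torch.get_default_dtype())))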