Implements cpu_kernel_multiple_outputs and torch.frexp #51097
Conversation
💊 CI failures summary (Dr. CI): as of commit c8198e1, there are no failures yet. Looks good so far. (This comment was automatically generated by Dr. CI.)
Force-pushed from 91dbe3f to c9a493a.
Codecov Report
@@            Coverage Diff             @@
##           master   #51097      +/-   ##
==========================================
+ Coverage   76.36%   77.35%   +0.98%
==========================================
  Files        1886     1887       +1
  Lines      184699   184804     +105
==========================================
+ Hits       141048   142946    +1898
+ Misses      43651    41858    -1793
Force-pushed from c9a493a to a76fe8d.
We should add a function that exercises this new functionality to test it, too. What about NumPy's frexp? (https://numpy.org/doc/stable/reference/generated/numpy.frexp.html)
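For context, frexp decomposes a floating-point value x into a mantissa in [0.5, 1) and an integer exponent such that x == mantissa * 2^exponent. A minimal scalar sketch using the C standard library's std::frexp (illustrative only, not part of this PR) shows the semantics a tensor-level frexp would apply element-wise:

```cpp
#include <cmath>
#include <cstdio>

int main() {
  int exponent = 0;
  // std::frexp splits 8.0 into mantissa * 2^exponent with mantissa in [0.5, 1).
  double mantissa = std::frexp(8.0, &exponent);
  std::printf("mantissa=%g exponent=%d\n", mantissa, exponent);  // mantissa=0.5 exponent=4
  return 0;
}
```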
@mruberry Thanks for the kind suggestion; I will update this PR by adding torch.frexp.
Thanks @RockingJavaBean! I'm still catching up from being out on vacation, but I'll take a look ASAP!
Hi @RockingJavaBean, @mruberry and I took another look at your updates together. Overall, this PR is looking pretty good and close to ready. We really appreciate all the work you put into implementing frexp, and also your taking extra time to fix bugs in our test suite, such as separating the float values.
There are still a few suggestions we have for your review:
- We think we can reuse test_reference_numerics with a tweak, and while we appreciate you demonstrating that the other test_unary_ufunc.py tests can be adapted to work with frexp(), we're a little worried that change is too big. For the final PR, we'd like to suggest skipping those tests and leaving them unmodified.
- For test_frexp_out, add cases for incorrectly sized and noncontiguous inputs.
- Add supports_tensor_out=False to the UnaryUfuncInfo and fix the test_out_arg... to correctly query for the metadata instead of skipping the test.
We look forward to your next updates!
Force-pushed from 123a787 to 95f662e.
Force-pushed from 95f662e to 52abf28.
Force-pushed from de5c23e to 742a99f.
I'm really grateful for the thorough review and invaluable suggestions throughout this PR. It has been updated to address the points above.
Please kindly take a look. @heitorschueroff @mruberry
@RockingJavaBean This last version looks ready, great work! Just one last change before I land this: it looks like the PR picked up some changes from another PR, which I commented on. Could you please confirm and fix this? I'll land it then.
@heitorschueroff I'm truly thankful for your kind review.
You're correct. I'm landing it now; thank you for this great PR.
@heitorschueroff has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Nice job, @RockingJavaBean! This is a very technically complicated PR. I appreciate your thoughtfulness on both the technical challenges and the test architecture, too.
And thanks @heitorschueroff for reviewing this!
It is my honor to contribute to the PyTorch project; it could not have been done without your generous help and guidance.
@heitorschueroff has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@heitorschueroff merged this pull request in da10ccd.
Closes #51108
Related: #38349
This PR implements cpu_kernel_multiple_outputs to support returning multiple values from a CPU kernel. In the example sketched below, out1 equals torch.add(in1, in2), while out2 equals torch.mul(in1, in2). This helps developers implement new torch functions that return two tensors more conveniently, such as the NumPy-like functions divmod and frexp.
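A minimal sketch of a kernel built on the new helper; the TensorIterator setup, dispatch macro, and the function name add_mul_kernel are illustrative assumptions rather than code copied from the patch:

```cpp
#include <ATen/ATen.h>
#include <ATen/native/TensorIterator.h>
#include <ATen/native/cpu/Loops.h>
#include <tuple>

// Computes out1 = in1 + in2 and out2 = in1 * in2 in a single pass over the
// data, using the multiple-output CPU loop helper added by this PR.
void add_mul_kernel(at::Tensor& out1, at::Tensor& out2,
                    const at::Tensor& in1, const at::Tensor& in2) {
  auto iter = at::TensorIteratorConfig()
                  .add_output(out1)
                  .add_output(out2)
                  .add_input(in1)
                  .add_input(in2)
                  .build();
  AT_DISPATCH_FLOATING_TYPES(iter.dtype(), "add_mul_cpu", [&] {
    at::native::cpu_kernel_multiple_outputs(
        iter,
        [](scalar_t a, scalar_t b) -> std::tuple<scalar_t, scalar_t> {
          // One returned tuple element per output tensor.
          return std::make_tuple(a + b, a * b);
        });
  });
}
```

The helper unpacks the tuple returned by the lambda and writes each element to the corresponding output operand of the iterator.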
This PR also adds the torch.frexp function to exercise the new functionality provided by cpu_kernel_multiple_outputs.
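A quick usage sketch of the new operator from C++; this is a hypothetical snippet that assumes at::frexp is the generated ATen binding for the Python-level torch.frexp, and the sample values are only for illustration:

```cpp
#include <torch/torch.h>
#include <tuple>
#include <iostream>

int main() {
  torch::Tensor x = torch::tensor({0.5, 1.0, 8.0});
  // frexp returns (mantissa, exponent) with x == mantissa * 2^exponent
  // element-wise and |mantissa| in [0.5, 1) for nonzero elements.
  torch::Tensor mantissa, exponent;
  std::tie(mantissa, exponent) = at::frexp(x);
  std::cout << mantissa << std::endl;  // 0.5, 0.5, 0.5
  std::cout << exponent << std::endl;  // 0, 1, 4
  return 0;
}
```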