Support __rmod__
#58476
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Support __rmod__
#58476
Conversation
💊 CI failures summary: as of commit b669975, ci.pytorch.org reported 1 failed job (details on the Dr. CI page).
Codecov Report

@@            Coverage Diff             @@
##           master   #58476      +/-   ##
==========================================
+ Coverage   76.44%   76.47%   +0.03%
==========================================
  Files        1990     1992       +2
  Lines      199690   199895     +205
==========================================
+ Hits       152651   152868     +217
+ Misses      47039    47027      -12
Cool! This is great Python Array API compatibility, @asi1024
This is going to need a "forward" test to validate that the operation computes what it should when given a scalar x tensor input.
We have some tests for remainder here:
pytorch/test/test_binary_ufuncs.py, line 1638 (at 470cd64):
def test_fmod_remainder(self, device, dtype):
That can probably be extended.
cc @rgommers, looks like the Python Array API doesn't elaborate on rmod, but it might want to be more explicit about whether it should call fmod or remainder. NumPy's rmod looks like it calls remainder, so I assume that's what's intended?
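For reference, the two conventions differ on negative operands; a quick sketch with NumPy (assuming the usual sign semantics of the two functions):

```python
import numpy as np

# remainder follows the sign of the divisor (Python's % convention),
# while fmod follows the sign of the dividend (C's fmod convention).
print(np.remainder(-7, 3))  # 2
print(np.fmod(-7, 3))       # -1

# Python's % on an array dispatches to np.remainder, so a reflected
# __rmod__ that matches NumPy should use remainder's convention:
print((-7) % np.array(3))   # 2
```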
sample_inputs_func=sample_inputs_rbinops,
supports_out=False,
skips=(SkipInfo('TestCommon', 'test_variant_consistency_jit',),),
supports_autograd=False,
Interesting... why doesn't this support autograd? torch.remainder supports autograd typically, right?
torch.remainder seems not to support autograd with respect to the second argument currently:

>>> x1 = torch.tensor([3.], requires_grad=True)
>>> x2 = torch.tensor([5.], requires_grad=True)
>>> y = x1 % x2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: the derivative for 'other' is not implemented
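By contrast, the derivative with respect to the first argument is implemented; a minimal sketch:

```python
import torch

# Gradient flows through the dividend: d(x mod c)/dx == 1 away from
# the discontinuities, so x1.grad should be all ones.
x1 = torch.tensor([3.0], requires_grad=True)
y = torch.remainder(x1, 5.0)
y.sum().backward()
print(x1.grad)  # tensor([1.])
```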
Interesting. Would you add a comment to that effect here, too?
@mruberry I added a torch.remainder(Scalar, Tensor) forward test now!
for fn, inplace_fn, ref_fn in fns_list:
-    np_x = x.cpu().numpy()
+    np_x = x.cpu().numpy() if torch.is_tensor(x) else x
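The updated line guards the NumPy conversion so plain Python scalars pass through unchanged; a standalone sketch of that pattern (the helper name is mine, not the test's):

```python
import numpy as np
import torch

def to_numpy_operand(x):
    # Tensors are moved to CPU and converted to ndarray; scalars pass
    # through, so the same reference computation can handle both
    # tensor % tensor and scalar % tensor inputs.
    return x.cpu().numpy() if torch.is_tensor(x) else x

print(to_numpy_operand(3.14))                 # 3.14
print(type(to_numpy_operand(torch.ones(2))))  # <class 'numpy.ndarray'>
```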
Nice update
((torch.fmod, torch.Tensor.fmod_, np.fmod),
 (torch.remainder, torch.Tensor.remainder_, np.remainder),))

for dividend, mod in product([5, 3.14], mods):
Add a comment here that this tests the scalar x tensor case
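A minimal sketch of what the scalar x tensor case checks (the values here are illustrative, not the test's actual data):

```python
import numpy as np
import torch

# With a scalar dividend, `dividend % mod` dispatches to Tensor.__rmod__,
# and the result should match NumPy's remainder convention.
for dividend in (5, 3.14):
    mod = torch.tensor([2.0, 4.0])
    res = dividend % mod
    ref = np.remainder(dividend, mod.numpy())
    assert np.allclose(res.numpy(), ref)
```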
This is looking good, @asi1024! I have some small suggestions to add comments.
Also, it looks like the tests didn't run, which is odd. Maybe pushing the comment updates will trigger them properly? Once we see the tests passing I think this is good to go!
@mruberry CI seems to have failed in
Before bothering to reproduce it, what about doing what it suggests and adding __rmod__ to the list here? pytorch/test/test_jit_fuser_te.py, line 1811 (at 6093161)
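For context, this is the reflected-operator mechanism such a test list covers: Python falls back to the right operand's __rmod__ when the left operand's __mod__ cannot handle the right operand's type. A toy, pure-Python sketch (the class name is mine, not PyTorch code):

```python
class ModBox:
    """Toy stand-in demonstrating reflected dispatch; not PyTorch code."""
    def __init__(self, value):
        self.value = value

    def __mod__(self, other):   # handles ModBox % scalar
        return ModBox(self.value % other)

    def __rmod__(self, other):  # handles scalar % ModBox, reached because
        # int.__mod__ returns NotImplemented for a ModBox operand
        return ModBox(other % self.value)

print((7 % ModBox(3)).value)  # 1  (via __rmod__)
print((ModBox(7) % 3).value)  # 1  (via __mod__)
```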
@mruberry Thanks! I fixed it as you suggested and confirmed the test passes!
Thank you, @asi1024. And don't worry about the remaining failure: it's unrelated. Also, if I don't respond to you within two business days, please ping me; sometimes notifications are lost. I'm sorry you waited so long for a response. I'll review the PR now, too.
Great work, @asi1024! And PyTorch is one step closer to supporting the Python Array API!
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
@mruberry Thanks! 😃
Interestingly this hit an internal error:
maybe change the namespace to
Fixed the namespace and rebased, but CI failed with a ROCm build failure. The message
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
No worries about the ROCm failure; it's in the base. We fixed it yesterday. Running this through internal tests now.
Internal tests look good; landing this now.
@mruberry Thanks for your review!
Summary:
Fixes pytorch#58035. This PR implements `torch.Tensor.__rmod__` and `torch.remainder(scalar, tensor)` for compatibility with NumPy's interface. (cc: mruberry, rgommers, emcastillo, kmaehashi)

TODO:
- [x] Update `tensor_binary_op` in test/test_binary_ufuncs.py after pytorch#58216 is merged.

Pull Request resolved: pytorch#58476
Reviewed By: ngimel
Differential Revision: D28776810
Pulled By: mruberry
fbshipit-source-id: 74f8aea80f439ef2cc370333524e39971eeb7bf4
Fixes #58035.

This PR implements `torch.Tensor.__rmod__` and `torch.remainder(scalar, tensor)` for compatibility with NumPy's interface. (cc: @mruberry, @rgommers, @emcastillo, @kmaehashi)

TODO:
- [x] Update `tensor_binary_op` in test/test_binary_ufuncs.py after "Fix some tensor operators to return `NotImplemented` for invalid inputs" #58216 is merged.