[numpy] Add torch.xlogy
#48777
Conversation
@@ -7277,6 +7277,54 @@ def test_atleast(self, device):
        self._test_atleast(device, torch.atleast_2d)
        self._test_atleast(device, torch.atleast_3d)

    def test_xlogy(self, device):
grad and gradgrad checks are performed for every OpInfo in test_ops.py. Are those tests not sufficient for some reason? Is it because of xlogy's unique behavior at (0, 0)?
Yes. This test is specifically to verify the behaviour at zero.
This is really interesting, cc @albanD. I wonder if we can better support this use case in the future by extending OpInfos with metadata like "special_autograd_values."
The OpInfo test may also generate zeros, although it's not very likely. If it did, would those cause the regular autograd check to fail? I'm trying to understand what the OpInfo-based autograd tests can cover vs. what this extra test needs to validate.
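For context, here is a minimal pure-Python sketch of the semantics under discussion (illustrative names only, not the PR's actual kernel; note `math.log` raises on nonpositive inputs, unlike `torch.log`):

```python
import math

def xlogy(x, y):
    """Reference semantics for xlogy: the result is 0 whenever
    x == 0, even if log(y) would otherwise be -inf or nan."""
    if math.isnan(x):
        return math.nan
    if x == 0:
        return 0.0
    return x * math.log(y)

# Plain x * log(y) at (0, 0) would be 0 * -inf, i.e. nan, but the
# special-cased definition returns 0, which is exactly the point
# this dedicated test checks.
print(xlogy(0.0, 0.0))  # 0.0
```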
@@ -647,6 +647,16 @@
    self: grad / (1 + pow(2, other - self))
    other: grad / (1 + pow(2, self - other))

- name: xlogy.Tensor(Tensor self, Tensor other) -> Tensor
    self: grad * at::xlogy((self != 0), other)
cc @albanD for this derivative
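A sketch of why `grad * at::xlogy((self != 0), other)` gives the right gradient (pure Python with illustrative names, not the actual autograd code):

```python
import math

def xlogy(x, y):
    # Reference semantics: 0 whenever x == 0, regardless of y.
    return 0.0 if x == 0 else x * math.log(y)

def dxlogy_dx(x, y):
    # Mirrors xlogy((self != 0), other) from the derivative formula:
    # log(y) where x != 0, and 0 at x == 0, reusing xlogy's own zero
    # convention so y == 0 does not produce -inf in the gradient.
    return xlogy(1.0 if x != 0 else 0.0, y)

# Finite-difference check away from the special point (0, 0):
x, y, eps = 2.0, 3.0, 1e-6
fd = (xlogy(x + eps, y) - xlogy(x - eps, y)) / (2 * eps)
assert abs(fd - dxlogy_dx(x, y)) < 1e-5

# At x == 0 the gradient is defined to be 0, even when y == 0:
assert dxlogy_dx(0.0, 0.0) == 0.0
```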
Hey @kshitij12345! Thanks for submitting this PR adding xlogy. The issue it fixes has been open for a long time.
I made some comments/suggestions, especially about extending this to include method, inplace, and out= variants and using the OpInfo pattern.
Eventually we'll want to add a BinaryUfuncInfo class to the OpInfo architecture. But I think we could add xlogy as a regular OpInfo for now.
@mruberry PTAL :)
torch/_torch_docs.py
Outdated
r"""
xlogy(self, other, *, out=None) -> Tensor

Computes ``self * log(other)`` except it treats ``0 * log(0)`` as ``0``.
I think we looked at this briefly before, but I don't think we can say "it treats..." because if `self` is zero then all values of `log`, including negative values, will return 0. And PyTorch treats the logarithm of negative numbers as nan, too:

```python
>>> t = torch.tensor((-2., -1, 0, 1, 2))
>>> torch.log(t)
tensor([    nan,    nan,   -inf, 0.0000, 0.6931])
```
The problem with the "it treats" clause here is that this implies it's the ONLY exception from the normal computation.
What about saying, "Computes ``self * log(other)`` but with the following special cases:"
Then putting the mathematical cases below that line, then the sentence about this being similar to SciPy's xlogy after that?
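To make the distinction concrete, here is a sketch of the behavior described above (pure Python stand-ins; `safe_log` mimics `torch.log`'s nan-for-negative convention, which `math.log` does not share):

```python
import math

def safe_log(v):
    # torch.log-style behavior: nan for negative inputs, -inf at zero.
    if v < 0:
        return math.nan
    if v == 0:
        return -math.inf
    return math.log(v)

def xlogy(x, y):
    # x == 0 short-circuits to 0 for EVERY y, including negative y,
    # so (0, -1) -> 0 even though log(-1) is nan. This is why the
    # docstring needs to spell out all the special cases rather than
    # only the 0 * log(0) one.
    return 0.0 if x == 0 else x * safe_log(y)

assert xlogy(0.0, -1.0) == 0.0          # not nan
assert math.isnan(xlogy(1.0, -1.0))     # nan propagates when x != 0
```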
torch/_torch_docs.py
Outdated
self (Number or Tensor)
other (Number or Tensor)

.. note:: Both attr:`self` and attr:`other` cannot be number.
"Both attr:`self` and attr:`other` cannot be number." -> "At least one of :attr:`self` or :attr:`other` must be a tensor."
dtypesIfCUDA=all_types_and(torch.bool, torch.half, torch.bfloat16),
test_inplace_grad=True,
supports_tensor_out=True,
skips=(
This skip should be able to be removed after a rebase.
Hey @kshitij12345!
Overall this looks very good and thorough. Just a few small comments inline.
The current coverage_test2 failure is unrelated to this PR, sorry about that. Just let me know when this is ready for another review!
@mruberry This is ready for review. I have addressed the comments.
torch/_torch_docs.py
Outdated
\text{out}_{i} = \begin{cases}
    \text{NaN} & \text{if } \text{other}_{i} = \text{NaN} \\
    0 & \text{if } \text{self}_{i} = 0.0 \\
    self_i * \log{(other_i)} & \text{otherwise}
`self_i` -> `\text{self}_i` and `other_i` -> `\text{other}_i` on this line.
And one last change is that "self" should be renamed to "input" in this doc, sorry I forgot to mention that earlier. Just like how the parameters are named in add:
https://pytorch.org/docs/master/generated/torch.add.html?highlight=add#torch.add
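Putting the two renames together, the cases block would read roughly like this (a sketch of the suggested edit, not the merged docstring):

```latex
\text{out}_{i} = \begin{cases}
    \text{NaN} & \text{if } \text{other}_{i} = \text{NaN} \\
    0 & \text{if } \text{input}_{i} = 0.0 \\
    \text{input}_{i} * \log{(\text{other}_{i})} & \text{otherwise}
\end{cases}
```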
torch/_torch_docs.py
Outdated
""" + r"""

Args:
    self (Number or Tensor)
"self" -> "input" here, too
torch/_torch_docs.py
Outdated
self (Number or Tensor)
other (Number or Tensor)

.. note:: At least one of :attr:`self` or :attr:`other` must be a tensor.
"self" -> "input" here as well
Looks great, thanks @kshitij12345!
One last nit is the naming of the params (should be 'input', not 'self') and a bit of LaTeX formatting. Just ping me when that's done and I'll merge this.
* self -> input
* update math formatting
@mruberry PTAL :)
Awesome! Thanks @kshitij12345!
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Reference #38349
Fixes #22656
TODO: