[special] Alias for special.expm1 and special.exp2 #54670
Conversation
💊 CI failures summary and remediations
As of commit 33d7264 (more details on the Dr. CI page):
💚 Looks good so far! There are no failures yet. 💚
Codecov Report
@@            Coverage Diff            @@
##           master   #54670   +/-   ##
=======================================
  Coverage   77.45%   77.45%
=======================================
  Files        1893     1893
  Lines      186023   186029     +6
=======================================
+ Hits       144086   144097    +11
+ Misses      41937    41932     -5
@@ -3177,12 +3177,14 @@ def reference_lgamma(x):
                    dtypesIfCUDA=floating_types_and(torch.float16),
                    assert_autodiffed=True),
     UnaryUfuncInfo('exp2',
                    aliases=('special.exp2', ),
                    ref=np_unary_ufunc_integer_promotion_wrapper(np.exp2),
I have kept np.exp2 as the reference since it matches PyTorch's promotion behavior for floats (it preserves the input float type); the same applies to expm1. scipy.special.exp2 promotes float16 to float32, so the results would mismatch:
>>> import torch
>>> import numpy as np
>>> import scipy.special
>>> import math
>>> t = torch.tensor([math.pi/2], dtype=torch.float16)
>>> torch.exp2(t)
tensor([2.9688], dtype=torch.float16)
>>> np.exp2(t.numpy())
array([2.969], dtype=float16)
>>> scipy.special.exp2(t.numpy())
array([2.9696903], dtype=float32)
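For completeness, here is a minimal sketch of how scipy's promotion could be worked around if one wanted a dtype-preserving scipy-based reference. The dtype_preserving_exp2 helper is hypothetical and not part of this PR, which simply uses np.exp2 instead:

import numpy as np
import scipy.special

def dtype_preserving_exp2(x):
    # Hypothetical helper: evaluate scipy.special.exp2, then cast the
    # result back to the input dtype so float16 inputs stay float16.
    return scipy.special.exp2(x).astype(x.dtype)

x = np.array([np.pi / 2], dtype=np.float16)
print(dtype_preserving_exp2(x))  # array([2.969], dtype=float16)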
@mruberry Gentle ping :)
@@ -3082,7 +3082,10 @@ def merge_dicts(*dicts):

.. math::
    y_{i} = 2^{x_{i}}

.. note:: Alias for :func:`torch.special.exp2`.
This is OK, but eventually we should just replace these docstrings with a pointer to what they alias, like the arccos docs: https://pytorch.org/docs/master/generated/torch.arccos.html?highlight=alias.
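For illustration, a minimal sketch of what such an alias-only docstring could look like in torch/_torch_docs.py, using the add_docstr helper that file already uses. The exact wording is an assumption, modeled on the arccos docs:

add_docstr(torch.exp2, r"""
exp2(input) -> Tensor

Alias for :func:`torch.special.exp2`.
""")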
/// auto t = torch::randn(128, dtype=kDouble);
/// torch::special::exp2(t);
/// ```
inline Tensor exp2(const Tensor& self) {
What about the out variants?
Cool!
Some follow-up tasks:
- support out variants in the C++ API (see the sketch below)
- aliasing should just have the docs of one function point to the other to avoid redundancy
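As a standalone sketch of the first follow-up task, an out variant added near the existing exp2 in torch/csrc/api/include/torch/special.h might look like the following. This assumes the code-generated torch::special_exp2_out binding exists (mirroring the Python out= keyword argument) and is not part of this PR:

#include <torch/torch.h>

namespace torch {
namespace special {

/// Hypothetical out variant of exp2; writes the result into `result`
/// and returns it. Assumes the generated torch::special_exp2_out
/// overload with the (out, self) argument order.
inline Tensor& exp2_out(Tensor& result, const Tensor& self) {
  return torch::special_exp2_out(result, self);
}

} // namespace special
} // namespace torch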
@mruberry has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Reference: #50345