Fix torch.pow when the scalar base is a complex number #45259
Conversation
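For context, this PR makes `torch.pow(scalar, tensor)` work when the scalar base is complex. A minimal sketch of the intended elementwise semantics, using plain Python complex arithmetic as a stand-in for torch (illustration only, not the PR's implementation):

```python
# Sketch: torch.pow(base, tensor) with a complex scalar base should match
# elementwise Python complex exponentiation. Plain Python stands in for torch.
base = 2.2 - 1.6j          # one of the bases used in the PR's test
exponents = [0.0, 1.0, 2.0]
results = [base ** e for e in exponents]
# base ** 0.0 is 1, and base ** 1.0 recovers the base (up to rounding)
print(abs(results[0] - 1) < 1e-12 and abs(results[1] - base) < 1e-12)
```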
@JackCaoG This PR adds tests to check complex scalar bases for `torch.pow`.
I noticed the tests for float bases are skipped in the xla CI job.
Please kindly confirm whether the complex base tests should be annotated to be skipped as well.
@RockingJavaBean Thanks for the heads up, I think you can mark the tests to be skipped for xla as well.
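A minimal sketch of what marking a test to be skipped can look like, using only the standard `unittest` decorator. The real suite uses PyTorch's device-generic test framework with its own skip decorators, so the class name, test name, and reason below are purely illustrative:

```python
import unittest

class TestPow(unittest.TestCase):
    # Illustrative only: unittest.skip stands in here for an xla-specific
    # skip decorator in PyTorch's test framework.
    @unittest.skip("complex scalar base not yet supported on xla")
    def test_complex_scalar_pow_tensor(self):
        self.fail("should never execute")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestPow)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(len(result.skipped))  # the one test was skipped, not run
```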
LGTM once the complex test is skipped for xla
@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Codecov Report
@@            Coverage Diff             @@
##           master   #45259      +/-   ##
==========================================
- Coverage   68.15%   68.15%    -0.01%
==========================================
  Files         396      396
  Lines       51133    51133
==========================================
- Hits        34851    34850        -1
- Misses      16282    16283        +1
Continue to review full report at Codecov.
@anjali411 merged this pull request in 0c8a600.
@dtypes(*(torch.testing.get_all_dtypes(include_bool=False, include_bfloat16=False)))
def test_complex_scalar_pow_tensor(self, device, dtype):
    complexes = [0.5j, 1. + 1.j, -1.5j, 2.2 - 1.6j]
    tensor = torch.rand(100).to(dtype=dtype, device=device)
For integer dtypes this resolves to a tensor full of zeros, which may not be the most interesting test case. We have a make_tensor function to generate a random tensor that would be nicer to use:
pytorch/torch/testing/_internal/common_utils.py
Lines 1441 to 1442 in ecdbea7
def make_tensor(size, device: torch.device, dtype: torch.dtype, *,
                low, high, requires_grad: bool = False) -> torch.Tensor:
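The reviewer's point can be checked with plain Python: uniform samples from [0, 1) truncate to zero under integer conversion, so the integer-dtype variant of the test above exercises only a zero exponent.

```python
import random

# torch.rand draws from [0, 1); casting such values to an integer dtype
# truncates toward zero, so every element becomes 0. Plain Python stand-in:
samples = [random.random() for _ in range(100)]
as_ints = [int(x) for x in samples]
print(all(v == 0 for v in as_ints))
```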
Thanks so much for pointing out this issue, and for the kind tip about the `make_tensor` function.
#47101 has been created to address this comment; special cases for zero exponents and the `1 + 0j` base are added as well. Please kindly help review.
Summary: Related #45259. This PR addresses the #45259 (comment):
- leverage the `make_tensor` function to generate a random tensor as the exponent, preventing an all-zero tensor for integer exponent dtypes.
- add special cases for zero exponents and the `1 + 0j` base.
Pull Request resolved: #47101
Reviewed By: mruberry
Differential Revision: D24682430
Pulled By: zou3519
fbshipit-source-id: f559dc0ba08f37ae070036fb25a52ede17a24149
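The two special cases mentioned above can be stated with plain Python complex arithmetic (a stand-in for the torch behavior): any base raised to a zero exponent is 1, and the base `1 + 0j` yields 1 for any exponent.

```python
# Zero exponent: b ** 0 == 1 for every base, including complex ones.
bases = [0.5j, 1. + 1.j, -1.5j, 2.2 - 1.6j]
print(all(b ** 0 == 1 for b in bases))

# Base 1 + 0j: the result is 1 regardless of the exponent (checked with a
# tolerance, since float exponents go through the polar pow path).
exponents = [0.0, 1.0, 2.5, -3.0]
print(all(abs((1 + 0j) ** e - 1) < 1e-12 for e in exponents))
```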
Fixes #43829