
Fix torch.pow when the scalar base is a complex number #45259

Closed

Conversation

RockingJavaBean
Contributor

Fixes #43829
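A minimal sketch (illustrative values, not taken from the PR's test suite) of the behavior this fix enables: raising a complex Python scalar base to a real tensor exponent should match element-wise complex exponentiation.

```python
import torch

# Sketch of the fixed behavior: a complex Python scalar base with a real
# tensor exponent should agree element-wise with Python's complex ** float.
base = 2.2 - 1.6j
exponent = torch.tensor([0.0, 1.0, 2.0], dtype=torch.float64)

result = torch.pow(base, exponent)

# Reference computed with Python's built-in complex exponentiation.
expected = torch.tensor([base ** e for e in exponent.tolist()],
                        dtype=torch.complex128)
assert result.dtype == torch.complex128
assert torch.allclose(result, expected)
```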

@RockingJavaBean RockingJavaBean marked this pull request as draft September 24, 2020 04:01
@dr-ci

dr-ci bot commented Sep 24, 2020

💊 CI failures summary and remediations

As of commit 9c9101e (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed



@RockingJavaBean RockingJavaBean marked this pull request as ready for review September 24, 2020 15:34
@RockingJavaBean RockingJavaBean changed the title [WIP] Fix torch.pow when self is complex scalar Fix torch.pow when self is complex scalar Sep 24, 2020
@RockingJavaBean RockingJavaBean changed the title Fix torch.pow when self is complex scalar Fix torch.pow when the scalar base is a complex number Sep 24, 2020
@zou3519 zou3519 added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Sep 25, 2020
@RockingJavaBean
Contributor Author

@JackCaoG This PR adds tests covering complex scalar bases for torch.pow.
The XLA tests failed due to precision issues:

Sep 25 09:34:10 FAIL [0.067s]: test_complex_scalar_pow_tensor_xla_float64 (__main__.TestTorchDeviceTypeXLA)
...
Sep 25 09:34:10 AssertionError: False is not true : Tensors failed to compare as equal! 
Imaginary parts failed to compare as equal! With rtol=0.001 and atol=0.001, 
found 100 element(s) (out of 100) whose difference(s) exceeded the margin of error (including 0 nan comparisons). 
The greatest difference was 0.5494986167779923 (0.5494986167779923 vs. 0.0), which occurred at index 19.

I noticed the tests for float bases are skipped in the XLA CI job.

Sep 25 09:21:16   test_float_scalar_pow_float_tensor_xla (__main__.TestTorchDeviceTypeXLA) ... skip (0.002s)

Please confirm whether I should annotate the complex base tests with onlyOnCPUAndCUDA for now; later we can add test_complex_scalar_pow_tensor to pytorch_test_base.py to skip these tests on XLA.

@JackCaoG
Collaborator

@RockingJavaBean Thanks for the heads up, I think you can mark the test onlyOnCPUAndCUDA for now.
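For context, the real onlyOnCPUAndCUDA decorator lives in torch.testing._internal.common_device_type; the stand-in below is a hypothetical simplification (names and harness invented here) showing the idea of skipping a device-parameterized test on non-native backends such as XLA.

```python
import unittest

import torch

# Hypothetical stand-in for onlyOnCPUAndCUDA (the real decorator is part of
# torch.testing._internal.common_device_type): skip the test whenever the
# instantiated device is neither CPU nor CUDA, e.g. an XLA device.
def only_on_cpu_and_cuda(fn):
    def wrapper(self, device, *args, **kwargs):
        if torch.device(device).type not in ("cpu", "cuda"):
            raise unittest.SkipTest("onlyOnCPUAndCUDA: skipping on " + device)
        return fn(self, device, *args, **kwargs)
    return wrapper

class ComplexPowTest:
    @only_on_cpu_and_cuda
    def test_complex_scalar_pow_tensor(self, device):
        exponent = torch.rand(100, dtype=torch.float64, device=device)
        return torch.pow(0.5j, exponent).dtype

# On CPU the test runs and yields a complex result dtype.
assert ComplexPowTest().test_complex_scalar_pow_tensor("cpu") == torch.complex128
```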

Contributor

@anjali411 anjali411 left a comment

LGTM once the complex test is skipped for xla

Contributor

@facebook-github-bot facebook-github-bot left a comment

@anjali411 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@codecov

codecov bot commented Sep 28, 2020

Codecov Report

Merging #45259 into master will decrease coverage by 0.00%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master   #45259      +/-   ##
==========================================
- Coverage   68.15%   68.15%   -0.01%     
==========================================
  Files         396      396              
  Lines       51133    51133              
==========================================
- Hits        34851    34850       -1     
- Misses      16282    16283       +1     
Impacted Files                          Coverage Δ
torch/testing/_internal/expecttest.py   77.55% <0.00%> (-1.03%) ⬇️

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 95a97e5...9c9101e.

@facebook-github-bot
Contributor

@anjali411 merged this pull request in 0c8a600.

@RockingJavaBean RockingJavaBean deleted the fix_pow_complex branch September 29, 2020 03:09
@dtypes(*(torch.testing.get_all_dtypes(include_bool=False, include_bfloat16=False)))
def test_complex_scalar_pow_tensor(self, device, dtype):
    complexes = [0.5j, 1. + 1.j, -1.5j, 2.2 - 1.6j]
    tensor = torch.rand(100).to(dtype=dtype, device=device)
Contributor

For integer dtypes this resolves to a tensor full of zeros, which may not be the most interesting test case. We have a make_tensor function to generate a random tensor that would be nicer to use:

def make_tensor(size, device: torch.device, dtype: torch.dtype, *,
                low, high, requires_grad: bool = False) -> torch.Tensor:
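A sketch of the suggestion, using the public torch.testing.make_tensor entry point (the PR-era helper with the signature above lived in an internal module): drawing integer values from a [low, high) range avoids the all-zeros tensor that torch.rand produces after truncation.

```python
import torch
from torch.testing import make_tensor

# torch.rand samples from [0, 1), so casting to an integer dtype truncates
# every element to 0 -- the uninteresting case flagged in the review.
zeros = torch.rand(100).to(dtype=torch.int64)
assert bool(zeros.eq(0).all())

# make_tensor draws directly in the requested range for the requested dtype,
# so integer exponents come out nonzero.
exponents = make_tensor((100,), dtype=torch.int64, device="cpu", low=1, high=10)
assert bool((exponents >= 1).all()) and bool((exponents <= 10).all())
```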

Contributor Author

Thanks so much for pointing out this issue and for the tip about the make_tensor function.
#47101 has been created to address this comment; special cases for zero exponents and the 1 + 0j base have been added as well. Please help review.

facebook-github-bot pushed a commit that referenced this pull request Nov 2, 2020
Summary:
Related #45259

This PR addresses #45259 (comment):

- leverage the `make_tensor` function to generate a random tensor as the exponent, preventing a tensor of all zeros for integer exponent dtypes.
- add special cases for zero exponents and the `1 + 0j` base.
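Illustrative checks for the two special cases the commit calls out (the values below are chosen here, not copied from #47101):

```python
import torch

# Any complex base raised to a zero exponent is 1 + 0j.
base = 2.2 - 1.6j
zero_exp = torch.zeros(4, dtype=torch.float64)
assert torch.allclose(torch.pow(base, zero_exp),
                      torch.ones(4, dtype=torch.complex128))

# The base 1 + 0j raised to any real exponent stays 1 + 0j.
exps = torch.tensor([0.5, 2.0, -3.0], dtype=torch.float64)
assert torch.allclose(torch.pow(1 + 0j, exps),
                      torch.ones(3, dtype=torch.complex128))
```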

Pull Request resolved: #47101

Reviewed By: mruberry

Differential Revision: D24682430

Pulled By: zou3519

fbshipit-source-id: f559dc0ba08f37ae070036fb25a52ede17a24149
Labels
Merged, open source, triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
Development

Successfully merging this pull request may close these issues.

Poor support of torch.pow(out) for complex tensor
7 participants