
trying to make pow work for tensor raised to the power of a scalar #46185

Closed
wants to merge 3 commits

Conversation

janeyx99 (Contributor) commented Oct 12, 2020:

Fixes #46037

I'm not sure this is the most performant solution, but this works:

torch.pow(cuda_tensor, 5) should work, and worked before.
torch.pow(cuda_tensor, torch.tensor(5)) should work, and works now!
torch.pow(cuda_tensor, torch.tensor((5,))) should NOT work and should complain that the tensors are on different devices, and indeed it continues to complain.
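
A quick sketch of the three behaviors above (my own illustration, assuming a CUDA-capable build; this is not code from the PR):

    import torch

    cuda_tensor = torch.arange(4.0, device="cuda")

    torch.pow(cuda_tensor, 5)                # scalar exponent: worked before this PR
    torch.pow(cuda_tensor, torch.tensor(5))  # dim-0 CPU exponent: works with this PR
    try:
        # 1-element dim-1 CPU exponent: still a device mismatch, as intended
        torch.pow(cuda_tensor, torch.tensor((5,)))
    except RuntimeError as e:
        print(e)  # complains that the tensors are on different devices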

janeyx99 requested review from malfet and a team, October 12, 2020 15:24
samestep (Contributor) commented:

unit test?

@@ -62,6 +62,9 @@ Tensor& pow_(Tensor& base, Scalar alpha) {
 }

 Tensor pow(const Tensor& base, const Tensor& exp) {
+  if (exp.dim() == 0) {
Review comment from a Contributor on the line above:

can you add a test in test/test_torch.py?
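
The hunk above is truncated; as a rough Python rendering of what the added dim-0 branch does (hypothetical, since the actual change is in C++ and the branch body is not shown in this excerpt):

    import torch

    def pow_unwrapping_dim0(base, exp):
        # Hypothetical sketch: a dim-0 exponent tensor is unwrapped to a
        # scalar so the Scalar overload handles it, sidestepping the
        # cross-device check that a tensor-tensor pow would perform.
        if exp.dim() == 0:
            return torch.pow(base, exp.item())
        return torch.pow(base, exp)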

dr-ci bot commented Oct 12, 2020:

💊 CI failures summary and remediations

As of commit f3765e9 (more details on the Dr. CI page):

💚 💚 Looks good so far! There are no failures yet. 💚 💚



facebook-github-bot (Contributor) commented:
@janeyx99 has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

samestep (Contributor) commented:

> torch.pow(cuda_tensor, 5) should work, and worked before.

I'm guessing there's already a test for this?

> torch.pow(cuda_tensor, torch.tensor(5)) should work, and works now!

I see you added a test for this 👍

> torch.pow(cuda_tensor, torch.tensor((5,))) should NOT work and should complain that the tensors are on different devices, and indeed it continues to complain.

There's no test for this, right? Should one be added?
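
A hypothetical shape for such a test (illustrative only; this is not the PR's actual test, and the names are made up):

    import unittest
    import torch

    class TestPowDeviceMismatch(unittest.TestCase):
        @unittest.skipUnless(torch.cuda.is_available(), "needs CUDA")
        def test_pow_rejects_1_element_cpu_exponent(self):
            base = torch.ones(3, device="cuda")
            exp = torch.tensor((5,))  # dim-1, so the dim-0 unwrap does not apply
            with self.assertRaises(RuntimeError):
                torch.pow(base, exp)

    if __name__ == "__main__":
        unittest.main()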


walterddr (Contributor) commented Oct 12, 2020:

@samestep: torch.pow(cuda_tensor, torch.tensor((5,))) should be covered by the normal test torch.pow(cpu_tensor, torch.tensor((5,))), because the problem is not the mix of CUDA and CPU tensors; rather, a CPU exponent wrapped in a dim=0 tensor should be unwrapped.
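
A small CPU-only illustration of this point (my own sketch, not the PR's test): the dim-0 case goes through the new unwrap branch, while the 1-element dim-1 case takes the ordinary broadcasting path, and neither depends on the device.

    import torch

    base = torch.arange(4.0)  # CPU tensor

    # dim-0 exponent: unwrapped to a scalar by the new branch
    assert torch.equal(torch.pow(base, torch.tensor(5)), torch.pow(base, 5))

    # 1-element dim-1 exponent: ordinary broadcasting, no unwrapping needed
    assert torch.equal(torch.pow(base, torch.tensor((5.0,))), torch.pow(base, 5))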


facebook-github-bot (Contributor) commented:
@janeyx99 merged this pull request in ad376f1.

Successfully merging this pull request may close these issues.

torch.pow doesn't check for tensors being on different devices