
Add scalar.conj() and update backward formulas for add and sub #46596

Closed
wants to merge 9 commits from gh/anjali411/66/head

Conversation

@anjali411 (Contributor) commented Oct 20, 2020

  1. Added a `conj` method for `Scalar`, similar to NumPy's.
  2. Updated the backward formulas for `add` and `sub` so they work correctly in R -> C cases and when `alpha` is complex (a hedged example follows this list).
  3. Enabled complex backward for `nonzero` (no formula update needed).
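
As a concrete illustration of item 2 (a sketch of mine, not from the PR; assumes a build that includes this change): under PyTorch's conjugate-Wirtinger convention, the gradient flowing to `other` through `self - alpha * other` is scaled by `-alpha.conj()`, not `-alpha`, once `alpha` may be complex. The tensor shapes and values below are illustrative.

    import torch

    # Hedged sketch: check that the gradient w.r.t. `other` of
    # y = self - alpha * other uses conj(alpha) when alpha is complex.
    alpha = 2 + 3j
    self_t = torch.randn(3, dtype=torch.cdouble)
    other = torch.randn(3, dtype=torch.cdouble, requires_grad=True)

    y = torch.sub(self_t, other, alpha=alpha)
    grad_out = torch.ones_like(y)
    (grad_other,) = torch.autograd.grad(y, other, grad_out)

    # Under the conjugate-Wirtinger convention, other's gradient is
    # -conj(alpha) * grad_out.
    expected = -torch.conj(torch.tensor(alpha)) * grad_out
    print(torch.allclose(grad_other, expected))  # True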

Stack from ghstack:

Differential Revision: D24529227

anjali411 added the complex_autograd and module: complex (Related to complex number support in PyTorch) labels Oct 20, 2020
anjali411 requested a review from ezyang October 20, 2020 17:22
@dr-ci bot commented Oct 20, 2020

💊 CI failures summary and remediations

As of commit 5b6d3a7 (more details on the Dr. CI page):

None of the CI failures appear to be your fault 💚

❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm. See CircleCI build pytorch_linux_xenial_py3_clang5_android_ndk_r19c_vulkan_x86_32_build, step "Build":

    Error response from daemon: received unexpected HTTP status: 500 Internal Server Error
    DOCKER_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/pytorch/pytorch-linux-xenial-py3-clang5-android-ndk-r19c:4d22616f4e3faf830d069137acd2208e7a023625

anjali411 added a commit that referenced this pull request Oct 20, 2020
From the new Scalar::conj() under review:

    if (isComplex()) {
      return Scalar(std::conj(v.z));
    } else if (isFloatingPoint()) {
      return Scalar(v.d);
A contributor commented on this diff:

Can't you just return `*this` here?
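
For context, a NumPy aside (mine, not from the thread): the PR models `Scalar::conj` on NumPy, where conjugating a real scalar is the identity, which is why returning the scalar unchanged is equivalent:

    import numpy as np

    # np.conj on a real scalar is a no-op; only complex inputs change.
    print(np.conj(3.14))    # 3.14
    print(np.conj(2 + 3j))  # (2-3j)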

@ezyang (Contributor) left a comment

At some point when you're not in a rush, it would be good to use @robieta's instruction-count tools to assess the non-complex overheads of the new complex idioms and conjugations. It will be a little tricky to set up since you have to go through autograd, but we might be able to set up a microbenchmark that calls the backward directly. That will help us understand how urgent further optimization here is (I know @ngimel is nervous about the conj additions).
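
A minimal sketch of the kind of measurement being suggested (my assumption of the setup, not from the PR), using `torch.utils.benchmark.Timer.collect_callgrind`, the instruction-count tooling @robieta was building; it requires valgrind, and a real-valued workload exposes the non-complex overhead:

    from torch.utils.benchmark import Timer

    # Real-valued add + backward: the path where the new conj() /
    # handle_r_to_c idioms should ideally cost nothing extra.
    timer = Timer(
        stmt="y = a + b; y.backward(g)",
        setup="""
    import torch
    a = torch.randn(100, requires_grad=True)
    b = torch.randn(100)
    g = torch.ones(100)
    """,
    )
    stats = timer.collect_callgrind(number=100)  # deterministic instruction counts
    print(stats)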

@mruberry (Collaborator) left a comment

Test changes look good. The test failure doesn't appear to be related.

In the derivatives.yaml diff:

    @@ -995,11 +995,11 @@
       self: std_backward(result, grad, self, dim, unbiased, keepdim)

     - name: sub.Tensor(Tensor self, Tensor other, *, Scalar alpha=1) -> Tensor
    -  self: grad
    -  other: -grad * alpha
    +  self: handle_r_to_c(self.scalar_type(), grad)
A contributor commented:

Why do we need this? It seems pretty similar to

    grad = grad.to(c10::typeMetaToScalarType(metadata.options().dtype()));

-- is it because `to` doesn't work C->R?

@anjali411 (Author) replied:

`.to` works for C->R. Currently, `handle_r_to_c` returns a (real) view of the final gradient value in case the gradient is complex and the input is not.
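
A rough Python analogue of that behavior, as described above (a sketch of mine; the actual helper is C++ code in the autograd support sources):

    import torch

    def handle_r_to_c(input_dtype, gradient_result):
        # R -> C case: the forward input was real but the gradient is
        # complex, so keep only the real part. `.real` is a view of the
        # complex tensor, unlike a dtype cast via `.to`.
        if not input_dtype.is_complex and gradient_result.is_complex():
            return gradient_result.real
        return gradient_result

    g = torch.tensor([1 + 2j, 3 - 4j])
    print(handle_r_to_c(torch.float32, g))  # tensor([1., 3.])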

@anjali411 (Author) commented Oct 22, 2020

> At some point when you're not in a rush, it would be good to use @robieta's instruction-count tools to assess the non-complex overheads of the new complex idioms and conjugations. […]

I think that's a great idea! Is there a doc to help with the setup?

anjali411 added a commit that referenced this pull request Oct 22, 2020
@ezyang (Contributor) commented Oct 23, 2020

Hopefully Taylor's talk today should help? He's also been working on written docs.

anjali411 added a commit that referenced this pull request Oct 26, 2020
anjali411 added a commit that referenced this pull request Oct 27, 2020
anjali411 added a commit that referenced this pull request Oct 28, 2020
anjali411 added a commit that referenced this pull request Nov 2, 2020
@facebook-github-bot (Contributor) commented:

@anjali411 merged this pull request in cedeee2.

facebook-github-bot deleted the gh/anjali411/66/head branch November 6, 2020 15:19
In the method_tests() diff:

    @@ -538,6 +538,7 @@ def method_tests():
     ('add', (), ((S, S, S),), 'scalar_broadcast_lhs', (True,)),
     ('add', (S, S, S), (3.14,), 'constant', (True,)),
     ('add', (), (3.14,), 'scalar_constant', (True,)),
    +('add', (S, S, S), (3.14j,), 'complex_scalar_constant', (True,)),
A collaborator commented on this diff:

Does this actually call into the scalar overload? I think the python binding converts these to Tensors all the time.

You can double-check this by doing:

    import torch

    a = torch.rand(10, requires_grad=True)
    print((a + a).grad_fn)
    print((a + 1.).grad_fn)

That will print

    <AddBackward0 object at 0x7fc1c1446150>
    <AddBackward0 object at 0x7fc1c2f14f50>

The important part is the 0: it means that both calls are actually using the 0th overload (the one for Tensor inputs).

Labels: cla signed, complex_autograd, Merged, module: complex (Related to complex number support in PyTorch)

6 participants