PyTorch 1.5.0: requires_grad being automatically set to false in C++ registered operators #37306
cc @smessmer
It looks like these lines are the culprit:
Introduced in #30650
Relevant comment: #29934 (comment). Looks like I didn't realize that fallthroughs supported this behavior previously.
@inspiros as a workaround, can you try doing this as your registration?
Standalone reproducer:
If the derivative is given manually, everything seems to work (which is why torchvision isn't broken):
This is probably an independent bug.
Demo that the workaround works:
Potentially fixes #37306 Differential Revision: [D21261946](https://our.internmc.facebook.com/intern/diff/D21261946/) [ghstack-poisoned]
Potentially fixes #37306 Differential Revision: [D21261946](https://our.internmc.facebook.com/intern/diff/D21261946/) ghstack-source-id: 102971495 Pull Request resolved: #37355
@ezyang Perfect, that should work, for now at least. I saw the new-style API.
Pull Request resolved: #37355 Potentially fixes #37306 ghstack-source-id: 103073537 Differential Revision: [D21261946](https://our.internmc.facebook.com/intern/diff/D21261946/)
Can you verify that #37355 fixes this? It was just merged into master.
Keeping the issue open while we assess whether it's necessary for the 1.5 point release.
@inspiros OK. Note that the workaround syntax is probably going to stop working when the next release rolls around, though I guess we will try harder not to break it now that we know at least one person is using it. This is definitely a bug in the old API, and the new API inherits the problem too (they use the same underlying implementation).
Closing, as this has been merged into 1.5.1.
🐛 Bug
Autograd no longer works on registered operators in PyTorch version 1.5.0. I'm not sure whether this is a bug or a new feature. If it is an API change, how is this supposed to work from now on? My project relies heavily on it.
To Reproduce
Steps to reproduce the behavior:
The returned tensor has no grad and cannot be backpropagated:
After some tests, the tensor options `dtype` and `device` seem to work OK, but `requires_grad` is always set to `false`.

Expected behavior
In PyTorch 1.4.0, the same code (given the exact same seed) yields:
Environment

How you installed PyTorch (`conda`, `pip`, source): `pip install torch-1.5.0-cp37-cp37m-win_amd64.whl`
Additional context
I also opened a forum topic about this on discuss.pytorch.org, but it didn't seem to receive any response from the community, so I turned to the developers.
cc @ezyang @gchanan @zou3519 @ssnl @albanD @gqchen