Fix inference_mode decorator #68617
Conversation
💊 CI failures summary and remediations: As of commit 6c2acd9 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet.
Not sure why the windows build is failing.
torch/autograd/grad_mode.py (Outdated)

```diff
@@ -24,7 +24,7 @@ def __call__(self, func: F) -> F:
         @functools.wraps(func)
         def decorate_context(*args, **kwargs):
-            with self.__class__():
+            with deepcopy(self):
```
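To illustrate the difference this one-line change makes, here is a minimal toy context-manager decorator (hypothetical class and names, not the PyTorch source): constructing `self.__class__()` inside the wrapper discards any arguments passed to `__init__`, while `deepcopy(self)` preserves them.

```python
import functools
from copy import deepcopy

class Mode:
    """Toy context-manager decorator (illustrative, not the PyTorch source)."""
    def __init__(self, mode=True):
        self.mode = mode

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

    def __call__(self, func):
        @functools.wraps(func)
        def decorate_context(*args, **kwargs):
            # `self.__class__()` here would reset mode to the default True;
            # `deepcopy(self)` keeps the mode the user configured.
            with deepcopy(self):
                return func(*args, **kwargs)
        return decorate_context

m = Mode(mode=False)
print(m.__class__().mode)  # True  -> the configured argument is lost
print(deepcopy(m).mode)    # False -> the configured argument is preserved
```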
Could you detail how `deepcopy` is different from creating a new instance here? In particular, this wrapper works just fine for `set_grad_enabled()`, which has the same API as `inference_mode`.
`deepcopy` will also copy the internal state. This is only useful for `inference_mode`, which has the `mode` attribute (and parameter to `__init__`). If we create a new instance from there (with no arguments), then `inference_mode` will use the default mode and not the one from `self`, which results in the described bug. For `no_grad`, `enable_grad`, and `set_grad_enabled` this does not change the current behaviour.

I guess `copy` instead of `deepcopy` could also work if you see any drawback from `deepcopy`.
Oh, `set_grad_enabled` is not a subclass of this, so it would have the same issue if it were.
But I'm not sure the `deepcopy` will work fine here, as you don't call the `__init__`.
Maybe we want a method that each subclass defines to clone itself?
It does work fine for the other current subclasses (`no_grad` and `enable_grad`) because they either don't have internal state or, if they do, it is set during `__enter__`, which gets called anyway.

If you expect that in the future there will be another subclass which does important things in `__init__` and not in `__enter__`, then adding `clone` would be a good option too (by default, it would return a new instance with `self.__class__()`, and `inference_mode` would override that).
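The `clone` idea described above can be sketched as follows. This is a simplified, self-contained illustration (the class names mirror the discussion, but the bodies are not the actual PyTorch source): the base class returns a fresh default instance, and a subclass with `__init__` state overrides `clone` to carry that state over.

```python
import functools

class _DecoratorContextManager:
    """Simplified sketch; bodies are illustrative, not the real source."""
    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

    def clone(self):
        # Default: a fresh instance with default arguments.
        return self.__class__()

    def __call__(self, func):
        @functools.wraps(func)
        def decorate_context(*args, **kwargs):
            # Each call re-enters a clone, so the decorator is reusable.
            with self.clone():
                return func(*args, **kwargs)
        return decorate_context

class inference_mode(_DecoratorContextManager):
    def __init__(self, mode=True):
        self.mode = mode

    def clone(self):
        # Override so the decorator re-enters with the configured mode.
        return self.__class__(self.mode)

print(inference_mode(mode=False).clone().mode)  # False
```

Because the wrapper calls `clone()` rather than `self.__class__()`, a subclass only has to override one method to make its constructor arguments survive decoration.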
A benefit of the `clone` method solution would also be that we could move `set_grad_enabled` to a subclass of `_DecoratorContextManager` so that it can also be used as a decorator. Let me know if that is preferred.
Sorry for the delay, I was out of the office.
I think the clone could be a nice thing here indeed. Do you think you can update this PR to add the clone method on the main class?
Done. Also squashed and rebased on master.
Awesome thanks!
@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.
Summary: This fixes the case when `torch.inference_mode` is called with `mode=False` (disabled). When used as a decorator, it ignored the argument and enabled inference mode anyway. `_DecoratorContextManager` is changed so that a new instance is a copy instead of a new instance with default parameters. I also added more tests to cover this case.

Current behaviour:

```python
>>> import torch
>>> x = torch.ones(1, 2, 3, requires_grad=True)
>>> @torch.inference_mode(mode=False)
... def func(x):
...     return x * x
...
>>> out = func(x)
>>> out.requires_grad
False
```

New behaviour (fixed):

```python
>>> import torch
>>> x = torch.ones(1, 2, 3, requires_grad=True)
>>> @torch.inference_mode(mode=False)
... def func(x):
...     return x * x
...
>>> out = func(x)
>>> out.requires_grad
True
```

Pull Request resolved: #68617
Reviewed By: mrshenli
Differential Revision: D32958434
Pulled By: albanD
fbshipit-source-id: 133c69970ef8bffb9fc9ab5142dedcffc4c32945