autograd.Function should be consistent about returning the same Tensor object if mark_dirty was used. #90209
Labels
module: autograd
Related to torch.autograd, and the autograd engine in general
module: functorch
Pertaining to torch.func or pytorch/functorch
triaged
This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
🐛 Describe the bug
If `z` requires grad, then `y is x` returns True. If `z` does not require grad, then `y is x` returns False. This is inconsistent: in-place PyTorch operators always return the same object, regardless of requires_grad-ness (see the sketch below).
NB: this might be difficult to actually do.
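The original snippet is not shown above, so here is a minimal sketch of the kind of setup being described, assuming a custom autograd.Function whose forward mutates `x` in-place using `z` and calls `ctx.mark_dirty(x)`. The class name `InplaceAdd` and the specific op are illustrative, not from the original report:

```python
import torch

class InplaceAdd(torch.autograd.Function):
    """Hypothetical Function: adds z into x in-place and marks x dirty."""

    @staticmethod
    def forward(ctx, x, z):
        x.add_(z)
        ctx.mark_dirty(x)  # tell autograd that x was modified in-place
        return x

    @staticmethod
    def backward(ctx, grad_out):
        # y = x + z, so both inputs receive the incoming gradient unchanged.
        return grad_out, grad_out

x = torch.randn(3)
z = torch.randn(3, requires_grad=True)
y = InplaceAdd.apply(x, z)
print(y is x)  # reported: True when z requires grad

x = torch.randn(3)
z = torch.randn(3)  # requires_grad=False
y = InplaceAdd.apply(x, z)
print(y is x)  # reported: False -- a different Tensor object is returned
```

Compare with a built-in in-place op such as `x.add_(z)`, which returns the same Tensor object whether or not `z` requires grad.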
Context
This inconsistency makes autograd.Function + ctx.mark_dirty behave inconsistently under functorch transforms.
Versions
main
cc @ezyang @albanD @gqchen @pearu @nikitaved @soulitzer @lezcano @Varal7 @Chillee @samdow @soumith