Remove backward and requires_grad from Autograd backend key (#49613)
Summary:
Pull Request resolved: #49613

Just following a TODO in the code base...
ghstack-source-id: 119450484

Test Plan: waitforsandcastle

Reviewed By: ezyang

Differential Revision: D25644597

fbshipit-source-id: 26f5fa6af480929d0468b0de3ab103813e40d78b
smessmer authored and facebook-github-bot committed Jan 6, 2021
1 parent 6643e9f commit eef5eb0
Showing 1 changed file with 0 additions and 8 deletions.

torch/csrc/autograd/VariableTypeManual.cpp
@@ -387,14 +387,6 @@ TORCH_LIBRARY_IMPL(aten, Autograd, m) {
   m.impl("detach", torch::dispatch(DispatchKey::Autograd, TORCH_FN(VariableType::detach)));
   m.impl("detach_", torch::dispatch(DispatchKey::Autograd, TORCH_FN(VariableType::detach_)));
   m.impl("copy_", torch::dispatch(DispatchKey::Autograd, TORCH_FN(VariableType::copy_)));
-  // For backward() and requires_grad_(), we need the DefaultBackend kernel, but we also need the Autograd backend
-  // kernel, because when called with a VariableTensorId tensor, it goes through the variable fallback kernel,
-  // which calls callBoxed(), which doesn't support optional tensor arguments yet and backward() has an optional
-  // tensor argument.
-  // TODO Once callBoxed() supports optional tensor arguments, we can enable `use_c10_dispatcher: full` for backward()
-  // and requires_grad_(), then remove the backend Autograd kernel here, only leaving the Math kernel.
-  m.impl("_backward", torch::dispatch(DispatchKey::Autograd, TORCH_FN(VariableType::_backward)));
-  m.impl("requires_grad_", torch::dispatch(DispatchKey::Autograd, TORCH_FN(VariableType::requires_grad_)));
   m.impl("_fw_primal", torch::dispatch(DispatchKey::Autograd, TORCH_FN(VariableType::_fw_primal)));
 }
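For context, a minimal sketch of how the registration block in torch/csrc/autograd/VariableTypeManual.cpp reads after this change, reconstructed only from the context lines of the hunk above; anything outside the hunk (the comment and the elided registrations) is an assumption, not the actual file contents.

TORCH_LIBRARY_IMPL(aten, Autograd, m) {
  // ... other Autograd-key registrations not shown in the hunk ...
  m.impl("detach", torch::dispatch(DispatchKey::Autograd, TORCH_FN(VariableType::detach)));
  m.impl("detach_", torch::dispatch(DispatchKey::Autograd, TORCH_FN(VariableType::detach_)));
  m.impl("copy_", torch::dispatch(DispatchKey::Autograd, TORCH_FN(VariableType::copy_)));
  // _backward and requires_grad_ are no longer registered to the Autograd key here;
  // per the removed TODO, they are expected to be served by the Math (DefaultBackend) kernel instead.
  m.impl("_fw_primal", torch::dispatch(DispatchKey::Autograd, TORCH_FN(VariableType::_fw_primal)));
}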

