
Update autograd related comments (pytorch#50166)
Summary:
Remove outdated comment and update to use new paths.

Pull Request resolved: pytorch#50166

Reviewed By: zou3519

Differential Revision: D25824942

Pulled By: albanD

fbshipit-source-id: 7dc694891409e80e1804eddcdcc50cc21b60f822
albanD authored and hwangdeyu committed Jan 14, 2021
1 parent 4ef6543 commit 5ecb613
Showing 2 changed files with 1 addition and 13 deletions.
c10/core/TensorImpl.h (12 changes: 0 additions & 12 deletions)

@@ -582,9 +582,6 @@ struct C10_API TensorImpl : public c10::intrusive_ptr_target {
 
   /**
    * Set whether or not a tensor requires gradient.
-   *
-   * It is only valid to call this method on a Variable.
-   * See Note [Tensor versus Variable in C++].
    */
   void set_requires_grad(bool requires_grad);
 
@@ -594,27 +591,18 @@ struct C10_API TensorImpl : public c10::intrusive_ptr_target {
    * we can automatically differentiate back to them. A tensor that
    * requires gradient and has no history is a "leaf" tensor, which we
    * accumulate gradients into.
-   *
-   * It is only valid to call this method on a Variable.
-   * See Note [Tensor versus Variable in C++].
    */
   bool requires_grad() const;
 
   /**
    * Return a mutable reference to the gradient. This is conventionally
    * used as `t.grad() = x` to set a gradient to a completely new tensor.
-   *
-   * It is only valid to call this method on a Variable.
-   * See Note [Tensor versus Variable in C++].
    */
   at::Tensor& mutable_grad();
 
   /**
    * Return the accumulated gradient of a tensor. This gradient is written
    * into when performing backwards, when this tensor is a leaf tensor.
-   *
-   * It is only valid to call this method on a Variable.
-   * See Note [Tensor versus Variable in C++].
    */
   const at::Tensor& grad() const;
 
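The accessors whose comments are trimmed above are part of the Tensor-level autograd surface. As a rough illustration of how they behave, here is a minimal libtorch (C++ API) sketch; it is not part of this commit, and the shapes and values are arbitrary:

```cpp
#include <iostream>
#include <torch/torch.h>

int main() {
  // A leaf tensor created with requires_grad: gradients will be
  // accumulated into it during backward().
  torch::Tensor t = torch::ones({2, 2}, torch::requires_grad());
  TORCH_CHECK(t.requires_grad());

  // Build a small graph and run backward.
  torch::Tensor loss = (t * t).sum();
  loss.backward();

  // grad(): the accumulated gradient written into the leaf during backward.
  std::cout << t.grad() << std::endl;  // all 2s, since d(sum(t*t))/dt = 2t

  // mutable_grad(): conventionally used as `t.grad() = x`, i.e. to replace
  // the gradient with a completely new tensor.
  t.mutable_grad() = torch::zeros({2, 2});
  std::cout << t.grad() << std::endl;  // now all 0s
  return 0;
}
```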
tools/autograd/derivatives.yaml (2 changes: 1 addition & 1 deletion)

@@ -86,7 +86,7 @@
 # e.g., it is used by _cudnn_rnn
 #
 # If you need a complex expression, e.g., with local variables,
-# write a _backward function in tools/autograd/templates/Functions.cpp
+# write a _backward function in torch/csrc/autograd/FunctionsManual.cpp
 # and invoke it from here. By the way, go read
 # https://github.com/zdevito/ATen/issues/163; this describes an
 # important hazard that occurs when porting backwards from Python to C++
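To make the updated comment concrete: the pattern it describes is a hand-written backward helper in torch/csrc/autograd/FunctionsManual.cpp that derivatives.yaml then invokes by name. The sketch below is illustrative only; `my_op_backward` and its formula are made up, and the exact namespace and signature conventions should be checked against FunctionsManual.h:

```cpp
// Hypothetical helper in the style of torch/csrc/autograd/FunctionsManual.cpp.
// A derivatives.yaml entry could then reference it as, for example:
//   self: my_op_backward(grad, self)
#include <ATen/ATen.h>

namespace torch {
namespace autograd {
namespace generated {
namespace details {

// Backward of a made-up op: free to use local variables here, unlike the
// single-expression form allowed directly in derivatives.yaml.
at::Tensor my_op_backward(const at::Tensor& grad, const at::Tensor& self) {
  auto denom = (self * self + 1).sqrt();  // local intermediate
  return grad * self / denom;
}

} // namespace details
} // namespace generated
} // namespace autograd
} // namespace torch
```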
