Fix attribute names (creator -> grad_fn) #91

Merged
merged 1 commit on May 27, 2017
beginner_source/former_torchies/autograd_tutorial.py (12 changes: 6 additions & 6 deletions)
@@ -25,9 +25,9 @@
There’s one more class which is very important for autograd
implementation - a ``Function``. ``Variable`` and ``Function`` are
interconnected and build up an acyclic graph that encodes a complete
-history of computation. Each variable has a ``.creator`` attribute that
+history of computation. Each variable has a ``.grad_fn`` attribute that
references a ``Function`` that has created the ``Variable`` (except for Variables
-created by the user - these have ``None`` as ``.creator``).
+created by the user - these have ``None`` as ``.grad_fn``).

If you want to compute the derivatives, you can call ``.backward()`` on
a ``Variable``. If ``Variable`` is a scalar (i.e. it holds a one element
@@ -52,7 +52,7 @@
###############################################################
#

-print(x.creator) # we've created x ourselves
+print(x.grad_fn) # we've created x ourselves

###############################################################
# Do an operation on x:
@@ -62,8 +62,8 @@

###############################################################
# y was created as a result of an operation,
-# so it has a creator
-print(y.creator)
+# so it has a grad_fn
+print(y.grad_fn)

###############################################################
# More operations on y:
@@ -91,7 +91,7 @@

x = Variable(torch.ones(2, 2), requires_grad=True)
y = x + 2
-y.backward(torch.ones(2, 2), retain_variables=True)
+y.backward(torch.ones(2, 2), retain_graph=True)
# the retain_graph flag will prevent the internal buffers from being freed
print(x.grad)

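The renamed attribute and flag in this file can be exercised end to end. The following is a minimal sketch, not part of the diff, assuming the post-rename API (``.grad_fn`` instead of ``.creator``, ``retain_graph`` instead of ``retain_variables``); the imports and the second ``backward`` call are added purely for illustration.

import torch
from torch.autograd import Variable

x = Variable(torch.ones(2, 2), requires_grad=True)
print(x.grad_fn)   # None: x was created by the user, not by an operation

y = x + 2
print(y.grad_fn)   # an addition Function node: y remembers how it was made

# retain_graph=True keeps the graph buffers so backward can run a second time
y.backward(torch.ones(2, 2), retain_graph=True)
y.backward(torch.ones(2, 2))
print(x.grad)      # gradients accumulate across the two backward calls (all 2s)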
beginner_source/nlp/pytorch_tutorial.py (10 changes: 5 additions & 5 deletions)
@@ -177,13 +177,13 @@
print(z.data)

# BUT z knows something extra.
-print(z.creator)
+print(z.grad_fn)


######################################################################
# So Variables know what created them. z knows that it wasn't read in from
# a file, it wasn't the result of a multiplication or exponential or
-# whatever. And if you keep following z.creator, you will find yourself at
+# whatever. And if you keep following z.grad_fn, you will find yourself at
# x and y.
#
# But how does that help us compute a gradient?
@@ -192,7 +192,7 @@
# Let's sum up all the entries in z
s = z.sum()
print(s)
-print(s.creator)
+print(s.grad_fn)


######################################################################
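For the sum example above, the natural next step is to backpropagate from ``s`` and inspect the gradient on ``x``. This is a short sketch, not part of the diff; the tensor values and the ``backward`` call mirror the surrounding tutorial but are assumptions here.

import torch
from torch import autograd

x = autograd.Variable(torch.Tensor([1., 2., 3.]), requires_grad=True)
y = autograd.Variable(torch.Tensor([4., 5., 6.]), requires_grad=True)
z = x + y
s = z.sum()

print(s.grad_fn)   # a sum Function node; following it leads back through z to x and y
s.backward()       # s = sum(x + y), so ds/dx is a vector of ones
print(x.grad)      # [1, 1, 1]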
@@ -248,15 +248,15 @@
var_y = autograd.Variable(y)
# var_z contains enough information to compute gradients, as we saw above
var_z = var_x + var_y
-print(var_z.creator)
+print(var_z.grad_fn)

var_z_data = var_z.data # Get the wrapped Tensor object out of var_z...
# Re-wrap the tensor in a new variable
new_var_z = autograd.Variable(var_z_data)

# ... does new_var_z have information to backprop to x and y?
# NO!
-print(new_var_z.creator)
+print(new_var_z.grad_fn)
# And how could it? We yanked the tensor out of var_z (that is
# what var_z.data is). This tensor doesn't know anything about
# how it was computed. We pass it into new_var_z, and this is all the
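To make the point about ``.data`` concrete, here is a minimal sketch, not part of the diff; the input values and the commented-out ``backward`` call are illustrative assumptions. Backpropagation works through ``var_z``, whose ``.grad_fn`` chain reaches the inputs, but not through ``new_var_z``, whose ``.grad_fn`` is ``None``.

import torch
from torch import autograd

var_x = autograd.Variable(torch.ones(3), requires_grad=True)
var_y = autograd.Variable(torch.ones(3), requires_grad=True)
var_z = var_x + var_y
print(var_z.grad_fn)       # an addition Function node: var_z remembers its inputs

new_var_z = autograd.Variable(var_z.data)   # re-wrap only the raw tensor
print(new_var_z.grad_fn)   # None: no computation history survives the re-wrap

var_z.sum().backward()
print(var_x.grad)          # ones: gradients flowed back through var_z's graph

# new_var_z.sum().backward() would fail (nothing requires grad and there is
# no grad_fn), and it could never reach var_x: the chain is broken.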