Conversation

@dcslin (Member) commented Jun 29, 2020

No description provided.

for p, dp in backward(y, dy):
    grads[p] = dp
# TODO: this fn is only helper for test case for now.
# 1. could implement __hash__ or
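
For context, a minimal sketch of the full helper around that excerpt, assuming backward(y, dy) is the singa.autograd generator yielding (param, grad) pairs, as the visible lines suggest; the signature and everything outside the excerpt are assumptions, not part of the actual diff:

    # assumption: backward() is singa.autograd's generator that yields
    # (parameter, gradient) pairs, which the excerpt above iterates over
    from singa.autograd import backward

    def gradients(y, dy=None):
        # run backward from y and collect the gradients into a dict
        # WITHOUT applying any parameter update
        grads = {}
        for p, dp in backward(y, dy):
            grads[p] = dp
        return grads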
Member

Does it affect other places, e.g. the optimizer, which will take the param and grad for updating?

Member Author

This gradients() is currently used only in two of the operations test cases, so it does not affect other places.
Opt.py does backward itself and does not use this function.
gradients() is more like "do backward, do NOT update, and return the gradients". I did not move it directly into the test scripts because in some cases a user might want to get the gradients explicitly, like PyTorch's x.grad.
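
To illustrate the distinction, a hedged sketch (the forward pass is omitted, and the call sites are assumptions rather than code from this PR):

    from singa import autograd

    # ... forward pass producing an output tensor y and a seed gradient dy ...

    # get the gradients explicitly, similar to x.grad in PyTorch;
    # no parameter is modified here
    grads = autograd.gradients(y, dy)
    for p, dp in grads.items():
        print(p.shape, dp.shape)

    # the optimizer path (Opt.py) instead iterates backward() itself and
    # applies the update to each parameter as part of the training step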

Member

OK, then please update the docstring for the returned data.
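
For example, the part of the docstring describing the returned data could read something like this (a sketch only, not the wording that was actually merged):

    def gradients(y, dy=None):
        """Run backward propagation from y and return the gradients.

        Helper mainly used by test cases; it does NOT update any parameter.

        Args:
            y: output Tensor to start backward propagation from.
            dy: gradient of y; passed through to backward().

        Returns:
            dict: maps each parameter Tensor to its gradient Tensor,
            one entry per (param, grad) pair yielded by backward(y, dy).
        """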

@nudles merged commit b58384c into apache:dev on Jul 8, 2020