Question about autograd of pytorch. #11

Closed

Jackie-Chou opened this issue Oct 12, 2018 · 2 comments

Comments

@Jackie-Chou

if True: # TODO: this is a potential problems.

Hi dragen:
I have a question about PyTorch's autograd mechanism. It seems to me that PyTorch doesn't support higher-order gradients: it builds the graph during the forward pass and then computes gradients by traversing that graph in reverse without expanding it, so the resulting gradient variable has no link back to the original variable and we can't differentiate it again with respect to that variable. However, in the part of the code quoted above, where the meta update is performed, no special mechanism seems to be used, so I think the update does not account for the higher-order gradients produced by the K inner-loop steps.
To further demonstrate the problem, I wrote a simple demo.
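
A minimal sketch of the kind of demo described (illustrative only; the values are made up, and it uses the current tensor API): without create_graph=True, the gradient returned by torch.autograd.grad has no grad_fn, so it cannot be differentiated again.

import torch

# Illustrative sketch: by default the backward pass does not build its own
# graph, so the returned gradient is a plain tensor with no link back to x.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3                                  # dy/dx = 3*x^2

g, = torch.autograd.grad(y, x)              # create_graph defaults to False
print(g.requires_grad)                      # False: g is detached from the graph
# torch.autograd.grad(g, x)                 # would raise, since g does not require grad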

@Jackie-Chou
Author

My bad. After reading the original paper more carefully, I found the first-order approximation mentioned in Sec 5.2. The experimental results show that the approximation works well and is easy to implement. :)
Thank you for the great code.
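
For reference, a minimal single-parameter sketch of what that first-order shortcut means (illustrative only, not the code in this repo): the inner-loop gradient is treated as a constant, so no second-order terms flow back into the meta update.

import torch

# Illustrative first-order approximation: detach the inner-loop gradient, so
# the meta-gradient ignores second-order terms.
theta = torch.tensor(1.0, requires_grad=True)
inner_lr = 0.1

def task_loss(w):
    return (w - 3.0) ** 2                   # toy per-task loss

g_inner, = torch.autograd.grad(task_loss(theta), theta)
theta_prime = theta - inner_lr * g_inner.detach()    # no graph through g_inner

meta_loss = task_loss(theta_prime)
meta_grad, = torch.autograd.grad(meta_loss, theta)
print(meta_grad)                            # only the direct path through theta contributes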

@dragen1860
Owner

Haha, PyTorch has supported second derivatives since version 0.2.
Actually, my code has a potential bug at line 399 of maml.py.
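
A minimal check of that claim (a sketch assuming a recent PyTorch build, using the current tensor API rather than the 0.2-era Variable API): passing create_graph=True keeps the backward graph, so the gradient itself can be differentiated again.

import torch

# Sketch: create_graph=True records the backward pass, enabling the second
# derivatives that second-order MAML relies on.
x = torch.tensor(2.0, requires_grad=True)
y = x ** 3

g, = torch.autograd.grad(y, x, create_graph=True)    # g = 3*x^2, still in the graph
g2, = torch.autograd.grad(g, x)                      # d2y/dx2 = 6*x
print(g.item(), g2.item())                           # 12.0 12.0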
