
The setting of the torch.autograd.grad function #17

Closed
lynlynlyn opened this issue Jan 23, 2019 · 8 comments

@lynlynlyn

Thank you for your code.
If I want to run second-order MAML, should I set create_graph=True?

@dragen1860
Owner

dragen1860 commented Jan 23, 2019

Generally speaking, you should set create_graph=True for a second-order gradient.
However, the implementation in this repo is the second-order version.
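
For reference, a minimal standalone sketch (plain PyTorch, not this repo's code) of what create_graph=True enables: the returned gradient stays in the graph, so it can itself be differentiated.

import torch

x = torch.tensor(2.0, requires_grad=True)
y = x ** 3

# First derivative; create_graph=True keeps the graph alive so the
# gradient itself can be differentiated.
(g,) = torch.autograd.grad(y, x, create_graph=True)   # 3*x**2 = 12.0

# Second derivative: d2y/dx2 = 6*x = 12.0
(g2,) = torch.autograd.grad(g, x)
print(g, g2)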

@lynlynlyn
Author

I use PyTorch 1.0, where the default is create_graph=False.
I checked the grad calls in Meta.py at line 86 and line 116:
grad = torch.autograd.grad(loss, fast_weights)
The requires_grad of the returned grad is False, so I am not sure the implementation in this repo is the second-order version.
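
A quick standalone check (toy tensors, not Meta.py itself) confirms this behavior:

import torch

w = torch.randn(3, requires_grad=True)
loss = (w ** 2).sum()

# Default: the returned grads are detached, so there is no second-order path.
(g,) = torch.autograd.grad(loss, w)
print(g.requires_grad)   # False

# With create_graph=True the grads stay in the autograd graph.
(g,) = torch.autograd.grad(loss, w, create_graph=True)
print(g.requires_grad)   # True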

@dragen1860
Owner

@lynlynlyn It's the second-order implementation.
I think you should digest the paper to understand how the second-order gradient is derived; it's not simply a matter of setting create_graph=True.
Sorry.

@lynlynlyn
Author

If the inner-loop gradients have requires_grad=False, where is the second-order gradient?

@eugval

eugval commented Apr 19, 2019

Hello!

Is there a consensus on this? Unless I am missing something, I also think you might need create_graph = True in order to propagate your gradients correctly through the inner gradient.

@Deng-Y

Deng-Y commented Apr 20, 2019

I also think it needs create_graph=True.
grad = torch.autograd.grad(loss, self.net.parameters())
My understanding is that the line above gives you grad.requires_grad = False. Thus, there are no higher-order gradients available; it's first-order MAML.
grad = torch.autograd.grad(loss, self.net.parameters(), create_graph=True)
The line above gives you grad.requires_grad = True.
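
To make the difference concrete, here is a toy sketch of one MAML inner step (a plain linear model with made-up names like x_spt/x_qry, not the repo's Meta.py): the meta-gradient picks up the second-order term only because the inner update was built with create_graph=True.

import torch

# Meta-parameters and a toy support/query split for one task.
w = torch.randn(5, 1, requires_grad=True)
x_spt, y_spt = torch.randn(4, 5), torch.randn(4, 1)
x_qry, y_qry = torch.randn(4, 5), torch.randn(4, 1)

# Inner step: w_fast = w - lr * grad(inner_loss).
inner_loss = ((x_spt @ w - y_spt) ** 2).mean()
(g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
w_fast = w - 0.01 * g   # still part of the graph

# Outer (meta) loss on the query set; backward() now differentiates
# through the inner update, including the second-order term.
outer_loss = ((x_qry @ w_fast - y_qry) ** 2).mean()
outer_loss.backward()
print(w.grad.norm())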

@dragen1860
Owner

@Deng-Y Well, I have not worked on this project for a long time; sorry for the misleading information. Have you tried setting create_graph=True?

@Deng-Y

Deng-Y commented Apr 22, 2019

@dragen1860 I have only tried it on another project. I am not sure whether it will improve the performance on this one.
