I'm really confused with the v2.0 meta.py #14
Comments
3. Yes, you are right. For the single-step update setting, at the end of each task i the code appends the loss (Lines 130 to 131 in fc20b31). The accumulated last step's loss should be losses_q[self.update_step] instead of losses_q[-1], because the length of losses_q is update_step + 1 + task_num by the end. In fact, I think those two lines are redundant and can be removed.
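The indexing problem described above can be sketched in a few lines. This is a hypothetical simplification, not the repo's actual code: losses_q holds one accumulated query loss per inner step, but an extra append per task grows the list so that losses_q[-1] no longer points at the last step's slot.

```python
# Hypothetical sketch of the indexing issue (not the repo's actual code).
update_step = 5
task_num = 4

losses_q = [0.0] * (update_step + 1)   # one slot per inner-update step
for task in range(task_num):
    for k in range(update_step + 1):
        losses_q[k] += 1.0             # accumulate the step-k query loss
    losses_q.append(1.0)               # the suspect extra append per task

# The list has grown past its per-step slots:
assert len(losses_q) == update_step + 1 + task_num
# losses_q[-1] is just the value appended by the last task;
# the accumulated last-step loss lives at losses_q[update_step].
assert losses_q[update_step] == task_num * 1.0
```

Under this assumption, reading losses_q[update_step] (or dropping the append entirely) gives the intended accumulated loss.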
Yes, it's a bug!
First, the comment says the index of losses_q is the task index (MAML-Pytorch/meta.py, Line 77 in fc20b31). However, in each task i the whole list is updated (MAML-Pytorch/meta.py, Lines 94, 105, and 123 in fc20b31).
Second, where is the sum over loss_q? In MAML-Pytorch/meta.py, Lines 134 to 135 in fc20b31, losses_q[-1] seems to be only the last step's loss for the last task.

Third, if update_step == 1, there is only one inner update. However, the loss after the first update is computed under torch.no_grad() (MAML-Pytorch/meta.py, Lines 100 to 109 in fc20b31), so I think no gradient information from the query set can flow back.
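The third concern can be demonstrated directly. This is a simplified sketch, not the repo's code: a loss evaluated inside torch.no_grad() is detached from the autograd graph, so calling backward() through it is impossible.

```python
import torch

# Simplified demonstration of the update_step == 1 concern:
# a query loss computed under torch.no_grad() carries no graph.
w = torch.tensor([1.0], requires_grad=True)
x = torch.tensor([2.0])

with torch.no_grad():
    loss_no_grad = ((w * x) ** 2).sum()
# Detached: backward() through this loss cannot update w.
assert loss_no_grad.requires_grad is False

loss_with_grad = ((w * x) ** 2).sum()    # computed outside no_grad
# Part of the graph: gradients can flow back to w.
assert loss_with_grad.requires_grad is True
```

So if the only post-update query loss falls inside the no_grad block, the meta-update receives no signal from the query set in the single-step case.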