
Cannot understand the insights of the work #41

Closed
ybpaopao opened this issue Sep 24, 2018 · 0 comments

Comments

@ybpaopao

Hi Chelsea,

I am new to meta-learning, and after reading your paper I have some questions. In the pseudocode of Algorithm 1, the outer loop reads "while not done do". Does this mean there are many iterations, and that in each iteration you sample K examples from each task, compute the adapted parameters theta' for each task, sample new examples to evaluate with theta', and finally update the network parameters theta based on all tasks? Is the stopping criterion simply an iteration number set beforehand? Also, the results in your work seem wonderful, but it is rather hard for me to grasp the insight behind MAML. Why does this update strategy achieve such good performance? Is there any related work you could suggest? Anyway, thanks for your idea and have a nice day!
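To check my understanding of the loop structure, here is a minimal sketch of how I read Algorithm 1, written against a toy linear-regression task family I made up purely for illustration. The fixed meta_iterations budget, the helper names (loss_grad, sample_task, maml_first_order), and the first-order approximation that drops the second derivatives are all my own assumptions, not anything from your paper or code:

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_grad(theta, X, y):
    """Gradient of mean-squared error for a linear model y_hat = X @ theta."""
    return 2.0 * X.T @ (X @ theta - y) / len(y)

def sample_task():
    """Toy task family: linear regression with a randomly drawn weight vector."""
    w = rng.normal(size=3)
    def sample(K):
        X = rng.normal(size=(K, 3))
        return X, X @ w + 0.01 * rng.normal(size=K)
    return sample

def maml_first_order(meta_iterations=1000, K=5, inner_lr=0.05,
                     meta_lr=0.01, tasks_per_batch=4):
    theta = np.zeros(3)
    for _ in range(meta_iterations):          # my reading of "while not done do"
        meta_grad = np.zeros_like(theta)
        for _ in range(tasks_per_batch):
            sample = sample_task()
            # inner step: adapt theta to this task using K support examples
            X_s, y_s = sample(K)
            theta_prime = theta - inner_lr * loss_grad(theta, X_s, y_s)
            # outer gradient evaluated on fresh query examples at theta'
            # (first-order approximation: second derivatives are ignored)
            X_q, y_q = sample(K)
            meta_grad += loss_grad(theta_prime, X_q, y_q)
        # meta-update: one step on the loss summed over all sampled tasks
        theta -= meta_lr * meta_grad / tasks_per_batch
    return theta

print(maml_first_order())
```

This is only meant to show the loop structure (and it is first-order MAML rather than the full update that differentiates through the inner step), so the printed result itself is not interesting. If this matches what Algorithm 1 intends, then my remaining question is whether "while not done" just means such a fixed iteration budget.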
