Hi Chelsea,
I am new to meta-learning, and after reading your paper I have some questions. In the pseudocode of Algorithm 1, you write "while not done do". Does this mean that there are many iterations, and that in each iteration you sample K examples from each task, compute the task-specific parameters theta', sample new examples and evaluate them with the adapted theta', and finally update the network parameters theta based on all tasks? Is the stopping criterion simply an iteration budget set beforehand?

By the way, the results in your work seem impressive, but I find it hard to grasp the intuition behind MAML. Why does this update strategy achieve such strong performance? Is there any related work you could suggest? Anyway, thanks for your idea, and have a nice day!
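P.S. To check that I'm reading Algorithm 1 correctly, here is a minimal sketch of the loop as I understand it. It's written in JAX rather than your TensorFlow code, and the sinusoid task sampler, the tiny network, and the learning rates are placeholders I made up, not values from the paper:

```python
# Minimal sketch of my reading of MAML's Algorithm 1 (sinusoid regression).
# Not the repo's implementation; `sample_task`, the network, and the
# learning rates are illustrative placeholders.
import jax
import jax.numpy as jnp

def predict(params, x):
    # Tiny 2-layer MLP: x -> hidden -> y
    h = jnp.tanh(x @ params["w1"] + params["b1"])
    return h @ params["w2"] + params["b2"]

def loss(params, x, y):
    return jnp.mean((predict(params, x) - y) ** 2)

def inner_update(params, x, y, inner_lr=0.01):
    # One gradient step on the K support examples -> task-specific theta'
    grads = jax.grad(loss)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - inner_lr * g, params, grads)

def maml_loss(params, x_spt, y_spt, x_qry, y_qry):
    # Loss of the *adapted* parameters on freshly sampled query data;
    # differentiating this w.r.t. the original params gives the meta-gradient
    # (including the second-order term, since JAX traces through inner_update).
    adapted = inner_update(params, x_spt, y_spt)
    return loss(adapted, x_qry, y_qry)

def sample_task(key, k=10):
    # Hypothetical task sampler: y = A * sin(x + phase)
    k1, k2, k3 = jax.random.split(key, 3)
    amp = jax.random.uniform(k1, minval=0.1, maxval=5.0)
    phase = jax.random.uniform(k2, minval=0.0, maxval=jnp.pi)
    x = jax.random.uniform(k3, (2 * k, 1), minval=-5.0, maxval=5.0)
    y = amp * jnp.sin(x + phase)
    return (x[:k], y[:k]), (x[k:], y[k:])  # support set, query set

key = jax.random.PRNGKey(0)
key, k1, k2 = jax.random.split(key, 3)
params = {
    "w1": jax.random.normal(k1, (1, 40)) * 0.1, "b1": jnp.zeros(40),
    "w2": jax.random.normal(k2, (40, 1)) * 0.1, "b2": jnp.zeros(1),
}
meta_lr = 1e-3
# "while not done": here just a fixed iteration budget. The paper samples a
# batch of tasks per meta-step and averages; one task per step for brevity.
for step in range(1000):
    key, sub = jax.random.split(key)
    (xs, ys), (xq, yq) = sample_task(sub)
    meta_grads = jax.grad(maml_loss)(params, xs, ys, xq, yq)
    params = jax.tree_util.tree_map(
        lambda p, g: p - meta_lr * g, params, meta_grads)
```

If that matches the intent (inner step on the support set, outer step through the adapted parameters on the query set), then I think I understand the structure of the algorithm.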