I am very impressed by your work and would like to try to replicate your training to understand how your method works. I was wondering how you managed to train the models, because it looks like it would take me 6-7 days to train a model with the iteration setup in the configs. Did you parallelize it somehow, and if so, do you have any instructions for that?
Regards,
Michalis
Hey Michalis,
Thanks for your kind words, appreciate it!
Indeed, we have set the maximum number of epochs to 100,000 in all training configs, but we also mention in the paper and in the repository's README.md that "We obtained the best model at the 82,000-th and 67,500-th iteration for (5-way, 1-shot) mini and tieredImagenet tasks respectively, and at the 22,500-th and 48,000-th iteration for (5-way, 5-shot) mini and tieredImagenet tasks, respectively." So training does not actually take 6-7 days. As for parallelizing the code: because meta-gradient / meta-update libraries for PyTorch are scarce, we chose the learn2learn implementation, which does not offer parallelization functionality.
Thanks for your interest in our work, and for the apt question; I agree it makes sense to lower this epoch parameter in the config files.
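For anyone replicating this, the practical takeaway is to checkpoint and evaluate periodically, then keep the best checkpoint rather than running to the configured maximum. Below is a minimal, framework-agnostic sketch of that loop; `train_step`, `evaluate`, and the interval values are placeholders for illustration, not the repository's actual code:

```python
def run_training(train_step, evaluate, max_iterations=100_000, eval_interval=2_500):
    """Run up to max_iterations, evaluating every eval_interval steps
    and tracking the best-performing checkpoint seen so far.

    train_step(it) -> model state after iteration `it` (stand-in for meta-training).
    evaluate(state) -> validation accuracy of that state (stand-in for val loop).
    """
    best = {"iteration": None, "accuracy": float("-inf")}
    checkpoints = {}
    for it in range(1, max_iterations + 1):
        state = train_step(it)
        if it % eval_interval == 0:
            acc = evaluate(state)
            checkpoints[it] = state  # in practice: save state to disk here
            if acc > best["accuracy"]:
                best = {"iteration": it, "accuracy": acc}
    return best, checkpoints
```

With this pattern, one can set `max_iterations` to a value past the reported best iterations (e.g. 82,000 for 5-way 1-shot miniImagenet) and simply pick the best saved checkpoint afterwards, instead of waiting for all 100,000 iterations.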