
How did you run the 100,000 iterations? It seems it will take a week to train for only one setting #2

Closed
MichalisLazarou opened this issue Nov 15, 2022 · 1 comment


@MichalisLazarou

Dear Anuj,

I am very impressed by your work and would like to replicate your training to understand how your method works. I was wondering how you managed to train the models, because it looks like it would take me 6-7 days to train a single model with the iteration counts set in the configs. Did you parallelize the training somehow, and if so, do you have any instructions for that?

Regards,
Michalis

@anujinho
Owner

Hey Michalis,
Thanks for your kind words, appreciate it!
Indeed, we set the maximum number of iterations to 100,000 in all training configs, but we also mention in the paper and the repository README.md that "We obtained the best model at the 82,000-th and 67,500-th iteration for (5-way, 1-shot) mini and tieredImagenet tasks respectively, and at the 22,500-th and 48,000-th iteration for (5-way, 5-shot) mini and tieredImagenet tasks, respectively." So in practice training doesn't take 6-7 days to reach the best model. Regarding parallelization: because there are few libraries for meta-gradient / meta-update computation in PyTorch, we chose the 'learn2learn' implementation, which doesn't offer parallelization functionality.
Thanks for your interest in our work and for the apt question; I think it makes sense to lower this iteration cap in the config files.
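To illustrate the point above: the 100,000-iteration value is only an upper cap, and the reported best models come from validation-based checkpoint selection well before that cap is reached. Below is a minimal, hypothetical sketch of that selection logic in plain Python (the function and parameter names are illustrative, not from the repository's code):

```python
def train(max_iters, val_acc_fn, eval_every=500):
    """Run up to max_iters meta-training iterations, tracking the
    iteration with the best validation accuracy (the checkpoint that
    would actually be kept)."""
    best_acc, best_iter = float("-inf"), -1
    for it in range(1, max_iters + 1):
        # ... one meta-training step would go here ...
        if it % eval_every == 0:
            acc = val_acc_fn(it)          # evaluate on validation tasks
            if acc > best_acc:            # keep only the best checkpoint
                best_acc, best_iter = acc, it

    return best_iter, best_acc

# Toy validation curve peaking at iteration 22,500 (the (5-way, 5-shot)
# miniImagenet best iteration quoted above) to show the cap is not the
# stopping point that matters:
best_iter, _ = train(100_000, lambda it: -abs(it - 22_500))
print(best_iter)  # → 22500
```

In other words, one can safely interrupt training once the validation curve has clearly peaked, rather than waiting for the full 100,000 iterations.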
