Cannot reproduce the results of the paper #3

Open
zaccharieramzi opened this issue Mar 22, 2023 · 0 comments

Hi @aravindr93,

Thank you so much for providing this repo and the code. I was able to run it in a relatively short amount of time and get some results.

However, even after raising the number of tasks to 200,000 and using a regularization strength of 2.0 in the evaluation, I get a validation accuracy of 98.204%, which is far from the 99.5% reported for the 5-way 1-shot case on Omniglot.

Do you know of any explanation for this? Do you have the weights of a model trained to 99.5% accuracy? I would be very interested in those.
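
For reference, my understanding is that the regularization strength above is the λ of the proximal term in the iMAML inner-loop objective. Here is a minimal sketch of that objective as I understand it (the names `regularized_inner_loss`, `phi`, `theta`, and `task_loss` are mine, not the repo's, and the actual implementation may differ):

```python
import torch

def regularized_inner_loss(phi, theta, task_loss, lam=2.0):
    # iMAML-style inner objective: task loss plus a proximal term
    # (lam / 2) * ||phi - theta||^2 anchoring the task-adapted
    # parameters phi to the meta-learned initialization theta.
    prox = sum(((p - t.detach()) ** 2).sum() for p, t in zip(phi, theta))
    return task_loss + 0.5 * lam * prox

# Toy usage: two parameter tensors and a dummy task loss.
theta = [torch.zeros(3), torch.zeros(2)]
phi = [p.clone().requires_grad_(True) for p in theta]
loss = regularized_inner_loss(phi, theta, task_loss=torch.tensor(1.0), lam=2.0)
loss.backward()  # gradients flow through phi only
```

Here `lam=2.0` corresponds to the regularization value of 2.0 I used in the evaluation.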
