BYOL convert weights to PyTorch #84
Comments
Hi, it seems the culprit was indeed the padding. With that fixed, I am able to get 74.6%! I am thinking of creating a repository for the PyTorch weights. Would you consider linking to it in the README so that people can easily find it?
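For anyone else hitting the same discrepancy: TensorFlow/Haiku `'SAME'` padding pads *asymmetrically* when the total padding is odd (extra pixel on the bottom/right), while PyTorch's `padding=` argument is always symmetric. A minimal sketch of how to replicate `'SAME'` behavior in PyTorch with an explicit `F.pad` before the stride-2 stem conv and max pool (the helper name `same_pad` is my own, not from the converter in this thread):

```python
import torch
import torch.nn.functional as F

def same_pad(x, kernel_size, stride):
    """Apply TF-style 'SAME' padding (asymmetric when total pad is odd)."""
    in_h, in_w = x.shape[-2:]
    out_h = -(-in_h // stride)  # ceil division
    out_w = -(-in_w // stride)
    pad_h = max((out_h - 1) * stride + kernel_size - in_h, 0)
    pad_w = max((out_w - 1) * stride + kernel_size - in_w, 0)
    # F.pad order for 2D inputs: (left, right, top, bottom)
    return F.pad(x, (pad_w // 2, pad_w - pad_w // 2,
                     pad_h // 2, pad_h - pad_h // 2))

x = torch.randn(1, 3, 224, 224)
# Use padding=0 in the conv itself; padding is done explicitly above.
conv = torch.nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=0, bias=False)
y = conv(same_pad(x, kernel_size=7, stride=2))          # 224 -> 112
y = F.max_pool2d(same_pad(y, kernel_size=3, stride=2),
                 kernel_size=3, stride=2)               # 112 -> 56
```

Note that torchvision's stock ResNet-50 uses symmetric `padding=3` / `padding=1` here, which produces the same output shapes but shifts the receptive field by one pixel relative to the TF weights, which is enough to cost accuracy when loading converted weights.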
Hi, glad to hear you managed to pinpoint the issue and reproduce the results! Sure, providing PyTorch-compatible weights could be very useful. Feel free to reach out (maybe by email?) if you would like us to host the files, and we will add a link (crediting you, of course) to the README. Best,
Hi, great. I need to convert the R200x2 weights and clean up the code a bit. Thanks again.
Hi @mbsariyildiz, sorry for the delay. I was just being lazy in cleaning things up. Here's the code for conversion. It only supports ResNet-50 for now. Let me know if you need the ResNet-200x2 model as well. |
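For context on what such a converter has to do: TF/Haiku stores conv kernels in HWIO layout (height, width, in-channels, out-channels), whereas PyTorch expects OIHW, so every conv kernel needs a transpose on top of the name remapping. A hedged sketch of that one step (the function name is illustrative, not from the converter linked above):

```python
import numpy as np
import torch

def tf_conv_to_torch(w):
    """Convert a TF/Haiku conv kernel (HWIO) to PyTorch layout (OIHW)."""
    return torch.from_numpy(np.transpose(w, (3, 2, 0, 1)).copy())

# e.g. the ResNet-50 stem conv: 7x7 kernel, 3 -> 64 channels
w_tf = np.random.randn(7, 7, 3, 64).astype(np.float32)
w_pt = tf_conv_to_torch(w_tf)  # shape becomes (64, 3, 7, 7)
```

Linear layers similarly need their weight matrices transposed, since TF stores them as (in, out) and PyTorch as (out, in).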
Hey @ChigUr, thanks a lot for sharing your converter! I need just ResNet-50 models. :) |
Hi,
Thanks for open-sourcing this. It is very helpful. I am in the process of converting the BYOL R50x1 weights to PyTorch. I have been able to get the dimensions of the weights to match the standard torchvision R50 model. When I evaluate the PyTorch weights, I get ~70% on the ImageNet val set. Any idea what I may be missing? I'm not sure, but 'SAME' padding in conv and max pool are the primary suspects right now. Also, although it looks normal to me, is there any caveat in input image pre-processing?