Why is the BN update not used for other methods like SGD? #7
For SGD we don't use a BN update because the weights we evaluate are the same ones used throughout training, so the BN statistics are accumulated during training. In SWA and SWAG, the final weights that we evaluate are never used during training, so we have no activation statistics for them and need to do a BN update. See e.g. Section 3.2 of "Averaging Weights Leads to Wider Optima and Better Generalization". Let me know if you have any other questions.
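The mechanism behind this can be illustrated with a small sketch of how a BN layer accumulates running statistics (pure Python for illustration; `update_running_stats` and the momentum value are ours, not the repo's):

```python
def update_running_stats(running_mean, running_var, batch, momentum=0.1):
    # BN keeps exponential moving averages of per-batch statistics,
    # updated only when the layer sees data in training mode.
    n = len(batch)
    batch_mean = sum(batch) / n
    batch_var = sum((x - batch_mean) ** 2 for x in batch) / n
    running_mean = (1 - momentum) * running_mean + momentum * batch_mean
    running_var = (1 - momentum) * running_var + momentum * batch_var
    return running_mean, running_var

# An SWA/SWAG-averaged model's weights never processed any batch during
# training, so its running statistics sit at the BN defaults (mean 0,
# var 1) and must be refreshed by a pass over the training data before
# evaluation -- this is the "BN update".
mean, var = 0.0, 1.0
for batch in [[1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]:
    mean, var = update_running_stats(mean, var, batch)
```

For SGD the weights being evaluated saw every training batch, so these statistics are already in place; for the averaged weights they are not.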
Thank you, I appreciate your quick response. I have a couple of questions related to the SWA paper. I checked out the code and wanted to replicate the SWA experiment as mentioned here. In Table 1 of the paper, the SGD numbers you report are much higher than what I get when I run the code; in fact, the SWA number I get is better. Also, not knowing the details of the BN update, I called that operation even for SGD before evaluation and it improves the performance; I'm not sure why. I am attaching files from training the VGG16BN model on CIFAR-10. I would be thankful for your insights. VGG16BN Model with BN update for SGD. What's the best way to get numbers similar or identical to those in the paper? I used the same command as mentioned in the Git repo: Thank you once again.
I'd suggest also opening an issue on Timur's repo if you can't figure it out. I believe the discrepancy in your case is caused by not including the "--use-test" flag.
That's not the issue; the "--use-test" flag is not applicable to the "swa" model training code, it applies to your code base "run_swag.py". I tried replicating SWAG following the steps mentioned here, but I am still unable to get the same numbers as in your paper. Can you please suggest where the issue could be? I followed the exact steps as mentioned in the git repo, including the CIFAR-100, VGG16 example (can provide others too): Following this, I used your uncertainty code: Thank you for your help.
Closing due to resolution in timgaripov/swa#14.
Hi,
Batch Normalisation can be applied to any DNN. You have compared the performance of DNNs trained with SGD and with SWAG (after a BN update). Why not use a BN update before evaluating all of the methods?
swa_gaussian/experiments/uncertainty/uncertainty.py, line 184 (commit cbe3abc)
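For reference, the BN update itself is just one pass over the training loader with the BN layers in training mode. Recent PyTorch ships a built-in helper for this, `torch.optim.swa_utils.update_bn` (note: this is PyTorch's helper, not this repo's own BN-update utility); a minimal sketch with a toy model:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torch.optim.swa_utils import update_bn

# A toy model whose BatchNorm layer starts with default running stats
# (mean 0, var 1), just as an SWA/SWAG-averaged model would.
model = nn.Sequential(nn.Linear(4, 8), nn.BatchNorm1d(8))

# Stand-in for the training loader; in the experiments this would be
# the CIFAR train loader used during SGD training.
loader = DataLoader(TensorDataset(torch.randn(64, 4)), batch_size=16)

# One pass over the data in training mode refreshes the running
# statistics; afterwards the model can be evaluated normally.
update_bn(loader, model)
model.eval()
```

Calling this before evaluating an SGD-trained model is harmless in principle (it just recomputes statistics the model already has), which is consistent with the observation above that it can slightly change, and sometimes improve, the numbers.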
Thank You