
Why BN Update is not used for other methods like SGD #7

Closed
amitchandak opened this issue Dec 29, 2019 · 5 comments
amitchandak commented Dec 29, 2019

Hi,
Batch Normalisation can be applied to any DNN. You have compared the performance of DNNs trained with SGD and SWAG (after a BN update). Why not apply the BN update before evaluating all of the methods?

utils.bn_update(loaders["train"], model)

Thank You

wjmaddox (Owner) commented

For SGD we don't use a BN update because the weights being evaluated are the same ones used throughout training, so the BN statistics have already been accumulated during training. In SWA and SWAG, the final (averaged) weights that we evaluate are never used during training, so we have no activation statistics for them; in that case we need to do a BN update. See e.g. Section 3.2 of "Averaging Weights Leads to Wider Optima and Better Generalization".
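To make the mechanics concrete, here is a minimal, framework-free sketch of the statistic recomputation a BN update performs: one pass over the training batches, accumulating a cumulative moving average of per-batch mean and variance (the same `momentum = b / (n + b)` scheme the SWA code uses for BN layers). All names here are illustrative, not the repo's actual API.

```python
def recompute_bn_stats(batches):
    """Recompute BatchNorm-style running mean/variance from scratch.

    After weight averaging (SWA/SWAG), the stored BN statistics no longer
    match the averaged weights, so they must be re-estimated with one
    forward pass over the training data. This sketch does the 1-D scalar
    version of that accumulation.
    """
    n_seen = 0
    running_mean = 0.0
    running_var = 0.0
    for batch in batches:
        b = len(batch)
        batch_mean = sum(batch) / b
        batch_var = sum((x - batch_mean) ** 2 for x in batch) / b
        # Cumulative-average update: weight each batch by its share of
        # the samples seen so far (momentum = b / (n_seen + b)).
        momentum = b / (n_seen + b)
        running_mean = (1 - momentum) * running_mean + momentum * batch_mean
        running_var = (1 - momentum) * running_var + momentum * batch_var
        n_seen += b
    return running_mean, running_var
```

In the actual repo this is what `utils.bn_update(loaders["train"], model)` does for every BN layer at once: reset the running statistics, then forward the training loader through the model with the averaged weights.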

Let me know if you have any other questions.


amitchandak commented Dec 29, 2019

Thank you, I appreciate the quick response. I have a couple of questions related to the SWA paper.
Can you help me with those, or should I post my queries to the SWA repo?

I just checked out the code and wanted to replicate the SWA experiment as mentioned here. The SGD numbers in Table 1 of the paper are much higher than what I get when I run the code; in fact, the SWA number I get is better. Also, not knowing the details of the BN update, I applied that operation even for SGD before evaluation, and it improves performance; I am not sure why. I am attaching files from training a VGG16BN model on the CIFAR-10 dataset. I would be thankful for your insights.
VGG16BN Model

VGG16BN Model with BN update for SGD

What's the best way to get numbers similar or identical to those in the paper? I used the same command as mentioned in the GitHub repo:
python3 train.py --dir=/home/swa/swa_cifar10_VGG16BNModel/ --dataset=CIFAR10 --data_path=/home/swa_gaussian/cifar10_data/ --model=VGG16BN --epochs=300 --lr_init=0.05 --wd=5e-4 --swa --swa_start=161 --swa_lr=0.01 --save_freq=50 --eval_freq=10 > cifar10_VGG16BNModel_swaLogs

Thank you once again.

wjmaddox (Owner) commented

I'd suggest also opening an issue with Timur's repo if you can't figure it out.

I believe that the discrepancy in your case is caused by not including the --use_test flag in your command. If that flag is not included, you will be training the model on a slightly smaller dataset (45k training examples, with 5k held out for evaluation). By comparison, the results both in this repo and in Timur's use the real CIFAR10 test set.
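The split behavior described above can be sketched as follows; the function name and the 10% validation fraction are assumptions for illustration, with only the 45k/5k sizes taken from the comment.

```python
def cifar_split_sizes(use_test, total_train=50000, val_fraction=0.1):
    """Return (train_size, held_out_size) under the flag semantics above.

    With use_test=True, train on all 50k images and evaluate on the real
    10k test set. Without it, hold out a validation split from the
    training set and evaluate on that instead, which changes the numbers.
    """
    if use_test:
        return total_train, 0
    held_out = int(total_train * val_fraction)
    return total_train - held_out, held_out
```

So a run without the flag trains on 45k examples rather than 50k, which alone can account for a small accuracy gap versus the published numbers.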


amitchandak commented Dec 31, 2019

That's not the issue; the --use_test flag is not applicable to the SWA model training code. It applies to your code base's run_swag.py. I tried replicating SWAG following the steps mentioned here, but I am still unable to get the same numbers as in your paper. Can you please suggest where the issue could be? I followed the exact steps mentioned in the GitHub repo.

Including an example for CIFAR-100 with the VGG16 model (I can provide others too):
./experiments/train/run_swag.py --data_path=/data/swa_gaussian/cifar100_data/ --epochs=300 --dataset=CIFAR100 --save_freq=300 --model=VGG16 --lr_init=0.05 --wd=5e-4 --swa --swa_start=161 --swa_lr=0.01 --cov_mat --use_test --dir=./cifar100_VGGModel > cifar100_VGG16Model_swagLogs

Following this, I used your uncertainty code:
python3 ./experiments/uncertainty/uncertainty.py --data_path=/data/swa_gaussian/cifar100_data/ --dataset=CIFAR100 --model=VGG16 --use_test --cov_mat --method=SWAG --scale=0.5 --file=./cifar100_VGGModel/checkpoint-300.pt --save_path=./cifar100_VGGModel/ > uncertainty_cifar100_VGG16_SWAGLogs

Thank you for your help.

wjmaddox (Owner) commented

Closing due to resolution in timgaripov/swa#14.
