
Training with VGG-16 backbone #2

Closed
AadSah opened this issue Nov 20, 2020 · 1 comment
AadSah commented Nov 20, 2020

Hi @tim-learn,

Can you please share the implementation details you used to obtain the results with the VGG-16 backbone? For example, which layers did you train and which did you freeze? And did you use the BatchNorm variant of VGG or the plain one?

Thanks

tim-learn (Owner) commented

The details can be found in network.py. We did not use the BatchNorm variant but the plain one. No layers are frozen during training; instead, the learning rate of the newly added layers is 10 times that of the layers in the backbone network. In fact, how to train with the VGG backbone is also given in readme.md, item 2, with --net VGG16.
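The per-layer learning-rate scheme described above can be sketched with PyTorch optimizer parameter groups. This is a hedged illustration, not the repository's actual network.py: the tiny Sequential modules and layer sizes below are stand-ins for the plain (non-BatchNorm) VGG-16 features and the new bottleneck/classifier layers.

```python
import torch
import torch.nn as nn

# Stand-in for the plain (non-BatchNorm) VGG-16 conv stack; nothing is frozen.
backbone = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),
)
# Stand-in for the newly added layers (hypothetical bottleneck/classifier head).
new_layers = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64, 31),
)

base_lr = 1e-3
# Two parameter groups: backbone at the base learning rate,
# new layers at 10x the base rate, as described in the reply.
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": base_lr},
        {"params": new_layers.parameters(), "lr": 10 * base_lr},
    ],
    momentum=0.9,
)
```

With this setup a single `optimizer.step()` updates every parameter, but the new layers move ten times faster than the pretrained backbone.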

AadSah closed this as completed Nov 20, 2020