Can you please share the implementation details you used to obtain the results with the VGG-16 backbone? For example, which layers did you train and which did you freeze? And did you use the BatchNorm variant of VGG or the plain one?
Thanks
The details can be found in network.py. We did not use the BatchNorm variant but the plain one. No layers are frozen during training; instead, the learning rate of the newly added layers is 10 times that of the layers in the backbone network. How to train with the VGG backbone is also given in readme.md, item 2, with --net VGG16.
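A minimal PyTorch sketch of the learning-rate setup described above (the module definitions and the base learning rate here are illustrative placeholders, not the repo's actual code from network.py):

```python
import torch
import torch.nn as nn

# Stand-in modules (hypothetical; the repo's real model is defined in network.py).
backbone = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())  # pretrained VGG-16 layers would go here
new_layers = nn.Linear(64, 31)  # e.g. a newly added classifier head

base_lr = 1e-3  # illustrative value, not taken from the repo

# Two parameter groups: backbone at the base learning rate,
# new layers at 10x the base learning rate, with nothing frozen.
optimizer = torch.optim.SGD(
    [
        {"params": backbone.parameters(), "lr": base_lr},
        {"params": new_layers.parameters(), "lr": base_lr * 10},
    ],
    momentum=0.9,
)
```

Because both groups are passed to the optimizer, every layer receives gradient updates; the per-group `lr` entries only scale how fast each part learns.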