During training for SiamFC+, do you freeze the weights of the first 7×7 conv as described in the paper? If so, why can't I find the corresponding operations in this code? And does the same operation apply to SiamRPN+?
Thanks for your time.
Whether you freeze that layer or not makes no difference to the results. And yes, I unfreeze the blocks gradually during training. For simplicity, I provide the "unfreeze everything with a smaller learning rate" version. You can try both versions; they give similar results. RPN and FC use a similar training strategy.
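The two variants mentioned above can be sketched in PyTorch. This is a minimal illustration with a toy two-layer backbone, not the repo's actual code: option A freezes the first 7×7 conv via `requires_grad`, while option B (the "unfreeze everything with a smaller learning rate" version) trains all layers but assigns the early conv a reduced learning rate through optimizer parameter groups. The layer indices and learning rates here are hypothetical.

```python
import torch
import torch.nn as nn

# Toy backbone; index 0 stands in for the first 7x7 conv from the paper.
backbone = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),  # first 7x7 conv
    nn.Conv2d(64, 128, kernel_size=3, padding=1),
)

# Option A: freeze the first 7x7 conv entirely.
for p in backbone[0].parameters():
    p.requires_grad = False

# Option B: undo the freeze and instead give the early conv a smaller
# learning rate via parameter groups (illustrative lr values).
for p in backbone[0].parameters():
    p.requires_grad = True

optimizer = torch.optim.SGD(
    [
        {"params": backbone[0].parameters(), "lr": 1e-4},  # early conv: small lr
        {"params": backbone[1].parameters(), "lr": 1e-2},  # rest: normal lr
    ],
    momentum=0.9,
)
```

Both variants leave the forward pass untouched; they only change which gradients are applied and how strongly, which is why the tracking results end up similar.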
I provide a version with the freeze-out strategy; see backbone.py. This issue will be closed.
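For readers who don't want to dig into backbone.py, the gradual-unfreezing idea can be sketched as a small schedule: freeze every block at the start, then unfreeze one more block as training progresses. The helper name, block layout, and epoch schedule below are hypothetical, chosen only to illustrate the strategy; the repo's actual implementation may differ.

```python
import torch.nn as nn

def set_trainable_blocks(blocks, num_unfrozen):
    """Freeze all blocks, then unfreeze the last `num_unfrozen` ones.

    Hypothetical helper illustrating a gradual-unfreezing (freeze-out)
    schedule; not the repo's actual code.
    """
    for i, block in enumerate(blocks):
        trainable = i >= len(blocks) - num_unfrozen
        for p in block.parameters():
            p.requires_grad = trainable

# Toy backbone split into three blocks.
blocks = nn.ModuleList([
    nn.Conv2d(3, 8, 3),
    nn.Conv2d(8, 8, 3),
    nn.Conv2d(8, 8, 3),
])

# Unfreeze one more block each step, deepest blocks first.
for step in range(len(blocks)):
    set_trainable_blocks(blocks, num_unfrozen=step + 1)
    # ... run training epochs for this stage here ...
```

By the end of the schedule every block is trainable, which is consistent with the earlier observation that fully unfreezing with a smaller learning rate gives similar results.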