
About training details #13

Closed
shallowtoil opened this issue Apr 29, 2019 · 2 comments

Comments

@shallowtoil

Hi,

During training of SiamFC+, do you freeze the weights of the first 7×7 conv as described in the paper? If so, why can't I find the corresponding operations in this code? And does the same apply to SiamRPN+?

Thanks for your time.
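For context, freezing a single convolution layer in PyTorch usually just means turning off its gradients and handing only the trainable parameters to the optimizer. A minimal sketch, using a torchvision ResNet as a stand-in for the actual backbone (module names are illustrative, not taken from this repository):

```python
# Sketch only: freeze the 7x7 stem conv of a ResNet-style backbone.
import torch
import torchvision

backbone = torchvision.models.resnet50()  # stand-in for the tracker backbone

# `conv1` is the 7x7 stem convolution in torchvision's ResNet; the attribute
# name in SiamFC+/SiamRPN+ may differ.
for param in backbone.conv1.parameters():
    param.requires_grad = False  # excluded from gradient updates

# Pass only the still-trainable parameters to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in backbone.parameters() if p.requires_grad),
    lr=1e-3,
    momentum=0.9,
)
```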

@JudasDie
Contributor


There is no difference in results whether you freeze the last layer or not. And yes, I unfreeze the blocks gradually during training. For simplicity, I provide the "unfreeze all with a smaller learning rate" version. You can try both versions; they give similar results. SiamRPN+ and SiamFC+ use a similar training strategy.
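For later readers: the "unfreeze all with a smaller learning rate" option described above is commonly implemented with per-module optimizer parameter groups. A rough sketch with toy modules and made-up learning rates (not the repository's actual settings):

```python
import torch
import torch.nn as nn

# Toy stand-ins for the pretrained backbone and the randomly initialized head.
class Tracker(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 64, 7, stride=2), nn.ReLU())
        self.head = nn.Conv2d(64, 1, kernel_size=1)

model = Tracker()

# One parameter group per sub-module: the pretrained backbone trains with a
# 10x smaller learning rate than the newly initialized head.
optimizer = torch.optim.SGD(
    [
        {"params": model.backbone.parameters(), "lr": 1e-4},
        {"params": model.head.parameters(), "lr": 1e-3},
    ],
    lr=1e-3,  # default for any group that does not set its own lr
    momentum=0.9,
    weight_decay=1e-4,
)
```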

@JudasDie
Contributor


I have provided a version with the freeze-out strategy; see backbone.py. This issue will be closed.
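A freeze-out style schedule like the one referenced here is often implemented by toggling `requires_grad` block by block as training progresses; see backbone.py for the actual implementation in this repository. A hedged sketch, with the block ordering and epoch thresholds assumed rather than taken from that file:

```python
# Sketch of gradual unfreezing ("freeze-out"): start with the whole backbone
# frozen and unfreeze one block per epoch, deepest block first.
def apply_unfreeze_schedule(backbone_blocks, epoch):
    """backbone_blocks: list of nn.Module, ordered shallow -> deep."""
    num_unfrozen = min(epoch, len(backbone_blocks))
    for i, block in enumerate(backbone_blocks):
        trainable = i >= len(backbone_blocks) - num_unfrozen
        for p in block.parameters():
            p.requires_grad = trainable

# Hypothetical usage at the start of each epoch, before (re)building the
# optimizer's parameter groups so newly unfrozen blocks start receiving updates:
# apply_unfreeze_schedule([backbone.layer1, backbone.layer2, backbone.layer3], epoch)
```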
