Replies: 2 comments
-
Besides, the training task is siamrpn_resnet50. Before epoch 10, the pretrained backbone is used without parameter updates; after epoch 10, layers 2, 3, and 4 of the backbone are unfrozen and updated as well.
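For reference, a rough sketch of what that two-stage schedule means in code. This is not the actual pysot implementation; the layer names and the epoch-10 threshold are taken from the description above, and the function name is hypothetical:

```python
import torch.nn as nn

def set_backbone_trainable(backbone: nn.Module, current_epoch: int,
                           train_epoch: int = 10,
                           train_layers=("layer2", "layer3", "layer4")):
    """Freeze the whole backbone, then unfreeze selected layers after train_epoch."""
    # Phase 1: the pretrained backbone acts as a fixed feature extractor.
    for param in backbone.parameters():
        param.requires_grad = False
    # Phase 2: once training reaches train_epoch, fine-tune the deeper layers.
    if current_epoch >= train_epoch:
        for name in train_layers:
            for param in getattr(backbone, name).parameters():
                param.requires_grad = True
```

Because the set of trainable parameters changes at that epoch, the optimizer built before epoch 10 has fewer parameter groups than one built after it.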
-
Possibly a duplicate of #80.
-
When I resume training from epoch 17, I get a
ValueError: loaded state dict contains a parameter group that doesn't match the size of optimizer's group
I then reviewed the code and modified line 293 of train.py from
optimizer, lr_scheduler = build_opt_lr(model, cfg.TRAIN.START_EPOCH)
to
optimizer, lr_scheduler = build_opt_lr(model, 17)
After that I can continue training, but I want to know whether this is correct?
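A minimal sketch of the same idea without the hard-coded 17, assuming the checkpoint stores the epoch it was saved at and that build_opt_lr adds the unfrozen backbone layers to the optimizer's parameter groups once the epoch passes the unfreeze threshold (the checkpoint path and key names here are illustrative, not confirmed from the repo):

```python
import torch

# Read the epoch from the checkpoint being resumed (assumed to be saved there).
checkpoint = torch.load("snapshot/checkpoint_e17.pth", map_location="cpu")
resume_epoch = checkpoint.get("epoch", cfg.TRAIN.START_EPOCH)

# Build the optimizer for the epoch actually being resumed, not START_EPOCH,
# so its parameter groups match those stored in the checkpoint.
optimizer, lr_scheduler = build_opt_lr(model, resume_epoch)
optimizer.load_state_dict(checkpoint["optimizer"])
```

The mismatch error appears because the checkpoint was saved after the backbone layers were unfrozen, while rebuilding the optimizer with START_EPOCH recreates the pre-unfreeze parameter groups.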