Thanks a lot for your code and pre-trained model. I want to continue training from your pretrained model, but after loading it, training resumes at epoch 12400, while the checkpoint is named 00000100-ckp.pth.tar, which suggests it was generated after 100 epochs. Do you have any idea what causes this mismatch? Thank you!
Sorry for the confusion — the mismatch comes from a change to `num_repeats` in `train.py`. When I trained the model it was set to 1, so each epoch took only a few minutes, which made debugging faster. In the released code it is set to 100, consistent with FOMM. Under the current config, the checkpoint corresponds roughly to epoch 124.
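The conversion above can be sketched as follows. This is a minimal illustration, assuming a FOMM-style setup where `num_repeats` duplicates the dataset within each epoch, so the total number of dataset passes (epochs × `num_repeats`) is what stays comparable across configs; the function name is hypothetical, not from the repo:

```python
def effective_epoch(saved_epoch: int,
                    saved_num_repeats: int,
                    current_num_repeats: int) -> float:
    """Convert an epoch counter recorded under one num_repeats setting
    into the equivalent epoch count under another setting."""
    # Total dataset passes are invariant: epochs * num_repeats.
    total_passes = saved_epoch * saved_num_repeats
    return total_passes / current_num_repeats

# The checkpoint reports epoch 12400 under num_repeats=1; with the
# released config's num_repeats=100 that is equivalent to epoch 124.
print(effective_epoch(12400, 1, 100))  # -> 124.0
```

So the epoch counter in the checkpoint is not wrong, it is just expressed in units of the smaller `num_repeats=1` epochs used during the original training run.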