
Finetuning and continue training #23

Closed
khoangothe opened this issue Jul 13, 2022 · 1 comment

Comments

@khoangothe

Hello, thank you for the awesome work. I am trying to use the model on another dataset, so I figure I should structure my data according to the phoenix2014 format. Is there anything else I should worry about, or will running the preprocessing on data in the same structure be alright?

Also, since I am training on Google Colab, I won't be able to train for 80 epochs consecutively and plan to split training into several runs. Is there a built-in function to load the previous model and continue training (or to finetune, if I want to finetune the pretrained model), or how should I begin to tackle this problem? I am not sure whether the --load-weights flag is enough. Thank you so much.

@ycmin95
Collaborator

ycmin95 commented Jul 13, 2022

Thanks for your attention. If the resolution of your video data is fairly high, running human detection first can preserve more useful information before resizing the whole image. Our recent version achieves comparable results within 40 epochs, and --load-checkpoints can load the previous model and continue training. Details can be found in config and here.

Good luck~
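
For anyone splitting training across Colab sessions, the resume logic behind a flag like --load-checkpoints usually boils down to saving and restoring the model, optimizer, and epoch counter. The sketch below is a minimal, hypothetical illustration in PyTorch; the file name and checkpoint keys ("model_state_dict", etc.) are my assumptions, not necessarily the repo's exact layout.

```python
# Minimal sketch of checkpoint save/resume for split training runs.
# Assumes a generic PyTorch setup; keys and paths are illustrative only.
import torch
import torch.nn as nn


def save_checkpoint(model, optimizer, epoch, path):
    # Persist everything needed to continue training, not just weights:
    # optimizer state (e.g. Adam moments) matters for a faithful resume.
    torch.save({
        "epoch": epoch,
        "model_state_dict": model.state_dict(),
        "optimizer_state_dict": optimizer.state_dict(),
    }, path)


def load_checkpoint(model, optimizer, path):
    # Restore model and optimizer state, and return the epoch to resume at.
    ckpt = torch.load(path, map_location="cpu")
    model.load_state_dict(ckpt["model_state_dict"])
    optimizer.load_state_dict(ckpt["optimizer_state_dict"])
    return ckpt["epoch"] + 1


if __name__ == "__main__":
    model = nn.Linear(4, 2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    save_checkpoint(model, optimizer, epoch=39, path="ckpt.pt")
    start_epoch = load_checkpoint(model, optimizer, "ckpt.pt")
    print(start_epoch)  # resume training loop from this epoch
```

Loading only the weights (as a --load-weights style flag typically does) is the right choice for finetuning on a new dataset, since you usually want a fresh optimizer and learning-rate schedule there.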
