
training detail & inference time of the model #16

Closed
jjldr opened this issue Dec 25, 2020 · 1 comment
jjldr commented Dec 25, 2020:

Hi, I trained the model from scratch (using the ImageNet-pretrained DLA-34 backbone), but the training loss is still about 12 after 50 epochs, which is far too high. In your code you freeze the DLA-34 backbone during training; is that why I cannot reach a lower loss?
My other question is about the model's inference time. When I test your model on a Tesla V100 GPU, inference takes about 0.37 s per sample, which is slower than the original CenterNet. CenterNet is a real-time model, but your model is too slow to run in real time.
Looking forward to your answer, thanks.

mrnabati (Owner) commented:

Hi. I did not try training from scratch. I started from the pre-trained CenterNet model, which had already been trained for 140 epochs, and trained it for another 60 epochs, so you probably need around 200 epochs to reach the same loss values. I froze the backbone mainly to speed up training, but it may have some effect on the loss values as well.
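Freezing a backbone in PyTorch amounts to disabling gradient computation on its parameters and handing only the remaining trainable parameters to the optimizer. A minimal sketch, assuming a PyTorch model; the `TinyModel`, `base`, and `head` names are illustrative stand-ins, not this repository's actual module layout:

```python
import torch
from torch import nn

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # Stand-in for the DLA-34 backbone and a detection head.
        self.base = nn.Conv2d(3, 8, 3, padding=1)
        self.head = nn.Conv2d(8, 1, 1)

    def forward(self, x):
        return self.head(self.base(x))

model = TinyModel()

# Freeze the backbone: no gradients are computed or stored for these
# parameters, which also speeds up the backward pass.
for p in model.base.parameters():
    p.requires_grad = False

# Pass only the still-trainable parameters to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

With the backbone frozen, only the head's weights are updated, so the loss floor can differ from full fine-tuning.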
As for the inference time, it is indeed slower than the original CenterNet. The bottleneck is mostly the way the generate_pc_hm() function is implemented. There are more efficient ways to do it, but I just haven't had time to implement them yet.
