Pre-training with DET and VID dataset #13
Thanks for your attention. May I ask how many images you sampled from the VID dataset, and could you please share your training log for debugging? We have released the training annotation link for the baseline model in the README.md file; it contains 1/10 of the VID images and all DET images with the shared classes. You can try this one. We look forward to your response.
Also, did you use the COCO pre-trained model for fine-tuning?
I sampled all the images from the VID dataset and all the images from the DET dataset that have the same classes. Yes, I used the COCO pre-trained model for fine-tuning. I am getting an mAP of around 0.56 when validating on the entire VID dataset.
Please follow our instructions and try it again.
My bad, I used 1/10 of the VID dataset and all images from the DET dataset; sorry, that was a typo.
Could you please use our annotations and try again, keeping all other settings the same as in this repo? We found that the augmentation setting in your log has been changed (e.g., 30 epochs is too many for the small model; we use 7 by default). Hopefully we can then find the cause.
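For reference, in YOLOX-style repos the epoch count lives in the experiment file rather than a CLI flag. A minimal sketch of what the default small-model schedule might look like (a plain class standing in for the usual `yolox.exp.Exp` subclass so the snippet is self-contained; attribute names follow common YOLOX convention and are not copied from this repo's exact file):

```python
# Sketch of a YOLOX-style experiment config. In the actual repo this would
# subclass yolox.exp.Exp; it is a plain class here so the snippet runs
# stand-alone. Attribute names follow the common YOLOX convention.
class SmallModelExp:
    def __init__(self):
        # Default schedule for the small model per the maintainers' comment:
        # 7 epochs, not 30.
        self.max_epoch = 7
        # Standard YOLOX-S scaling factors.
        self.depth = 0.33
        self.width = 0.50
```

Changing `max_epoch` (or the augmentation settings) in the exp file is exactly the kind of deviation the maintainers are asking to rule out before comparing results.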
@mukundkhanna123 Could you share your small sampled dataset from VID? The VID dataset is hard for us to download. Thank you!
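For anyone who cannot obtain the released annotation link, the 1/10 VID sampling can be roughly approximated by keeping every tenth frame per video. A hedged sketch (`sample_every_nth` is a hypothetical helper, not a function from this repo; the official split is the one defined by the maintainers' released annotation file):

```python
def sample_every_nth(frame_paths, keep_every=10):
    # Hypothetical helper: keep every `keep_every`-th frame of a video.
    # Uniform striding only approximates the official 1/10 VID split,
    # which is defined by the repo's released annotation file.
    return frame_paths[::keep_every]

# Example: 100 frames -> 10 kept frames.
frames = [f"frame_{i:06d}.JPEG" for i in range(100)]
subset = sample_every_nth(frames)
```

Applying this per video (rather than over the pooled frame list) keeps the subset's class and sequence distribution closer to the full dataset.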
Hey, I had a question about the training methodology used to pre-train the YOLOX baseline models. I added the images from the DET dataset with the same classes as the VID dataset and trained a YOLOX-S model, but I was not able to replicate your results. Could you elaborate on how you pre-trained the YOLOX model?