Nearly 50% of images are missing in training and validation #46
I validated on 5000 images. Only 2693 of the validation images have human pose annotations, which is why it reports 2307 as missing. The same goes for the training images: 64115 have human pose annotations and 54172 do not.
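The found/missing split described above can be sketched as a simple partition over COCO-style annotations. This is a minimal, hypothetical illustration: the toy `coco` dict below mimics the person-keypoints JSON schema, and the `num_keypoints > 0` filter is an assumption about how images get counted, not the repository's actual loader code.

```python
# Toy COCO-style structure; in practice this would be loaded from
# e.g. person_keypoints_val2017.json (path illustrative).
coco = {
    "images": [{"id": i} for i in range(1, 6)],
    "annotations": [
        # only images 1 and 3 carry visible keypoint annotations
        {"image_id": 1, "category_id": 1, "num_keypoints": 12},
        {"image_id": 3, "category_id": 1, "num_keypoints": 5},
        {"image_id": 3, "category_id": 1, "num_keypoints": 0},  # person box, no visible keypoints
    ],
}

# Image ids with at least one annotation that has visible keypoints
annotated = {a["image_id"] for a in coco["annotations"] if a.get("num_keypoints", 0) > 0}

found = [img["id"] for img in coco["images"] if img["id"] in annotated]
missing = [img["id"] for img in coco["images"] if img["id"] not in annotated]

# Mirrors the cache-scan log format quoted below
print(f"{len(found)} found, {len(missing)} missing")  # → 2 found, 3 missing
```

On the real val2017 split the same partition would yield the 2693/2307 numbers quoted in the log; the "missing" images are kept in the dataset as negatives rather than discarded.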
Thank you very much for your prompt reply. It really astonished me that coco_person_keypoints has so many images without meaningful annotations.
No problem. They aren't meaningless! They don't contain people, but they do contain other objects for the object detection task. For keypoint detection it's important to train on images without people so that your model doesn't predict too many false positives.
Thank you very much. Do other works follow the same approach as yours, for example DEKR?
Yep!
Scanning data/datasets/coco/kp_labels/img_txt/train2017.cache images and labels... 64115 found, 54172 missing, 0 empty, 0 corrupted
Scanning data/datasets/coco/kp_labels/img_txt/val2017.cache images and labels... 2693 found, 2307 missing, 0 empty, 0 corrupted
I wonder if you are using an ad-hoc data preprocessing/filtering step. If so, the numbers you reported in the paper would not be comparable to DEKR's.