
Which Faster RCNN repo do you use during testing and validation #52

Open
HuAndrew opened this issue Nov 12, 2018 · 6 comments
HuAndrew commented Nov 12, 2018

Hi, @leoxiaobin
Thanks for sharing your excellent work! It achieves very good results. I am curious which bounding box detector you used.

I evaluated the bbox results you provide and they reach 56.8 AP on the detection task, whereas I only get 49.5 with a Mask R-CNN detector. Could you share the code or repo you referred to, and did you train the Faster R-CNN yourself?

Thanks a lot!


bearpaw commented Feb 4, 2019

Hi @HuAndrew , same question here.

I have been playing with this code recently and was also wondering how you generated the detections.

To be more specific, I am detecting humans on the COCO val2017 keypoint images (the 5000 images listed in person_keypoints_val2017.json). I use a YOLOv3 detector, keep only the person bounding boxes, and then dump a JSON file in the same format as this repo's.

However, the generated JSON is quite small compared with theirs (~1.3 MB vs 16.4 MB). Also, when I run COCOeval with person_keypoints_val2017.json as the ground truth, I only get about 40 AP.
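
For reference, this is the kind of scoring I mean, as a minimal sketch with pycocotools (not the authors' script; the file paths are placeholders I'm assuming):

```python
# Hedged sketch: score a person-only detection dump against the COCO val2017
# keypoint annotations. File paths below are assumptions, not from the repo.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/person_keypoints_val2017.json")   # ground-truth boxes
# Detections dumped as a standard COCO results list:
# [{"image_id": ..., "category_id": 1, "bbox": [x, y, w, h], "score": ...}, ...]
coco_dt = coco_gt.loadRes("person_detections_val2017.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.params.catIds = [1]   # restrict evaluation to the person category
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()           # AP @ IoU=0.50:0.95 is the number being compared here
```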

Any suggestions? Thank you in advance :)

@yurymalkov

I have the same question. Can you please share your detector or give a link to a similar one?

@namheegordonkim

👍 Related papers keep mentioning the "person detector used in Simple Baseline...", but it's nowhere to be found.

@Odaimoko

> (quoting @bearpaw's comment above about the YOLOv3 person detections that only reach ~40 AP)

Well, the author reported 56.4 AP on the person category. I have used Detectron's models: in the End-to-End Faster & Mask R-CNN Baselines, the X-101-64x4d-FPN entry with 42.4 box AP gets 55.7 AP on the person category. I think that is competitive.

@HuAndrew (Author)

@bearpaw @Odaimoko Hello, I tested multiple detectors (e.g. Mask R-CNN, Cascade R-CNN); the detector visualizations and the prediction results are as follows:

vis samples: [image]

preds samples:

| detector | total persons detected | det AP | keypoint AP (256x192_pose_resnet_50_d256d256d256) |
| --- | --- | --- | --- |
| ground truth | 11004 | N/A | 72.4 |
| faster (author's) | 104125 | 56.4 | 70.5 |
| mask rcnn_0.7 | 13167 | 48.6 | 68.1 |
| mask rcnn_0.5 | 15530 | 49.5 | 68.6 |
| mask rcnn_0.3 | 15796 | 49.6 | 68.7 |
| Cascade_RCNN | 73597 | 53.0 | 70.0 |

From the test results, a few things can be observed:

  • To enable the rescoring trick, the author lets the detector output multiple detection boxes per person instance (the rescore operation follows COCO17-Keypoints-TeamOKS), and rescoring can then improve the predicted results (see the sketch after this list).
  • So if we want multiple bboxes, we can adjust the NMS post-processing.
  • As long as the detected boxes are well localized, like the GT bboxes, the keypoint predictions are also very good.
  • For top-down methods, the detector is very important for improving prediction results.
  • But when I use multiple bboxes, my prediction results get worse, so I guess the author's extra bboxes and the NMS operation together compensate for the detector's performance.
  • Other detectors tried: maskrcnn-benchmark, yolov3.

Welcome to join the pose forum: www.ilovepose.com

wmcnally commented Jan 29, 2022

Evaluated using the Detectron2 repo:

  • Faster R-CNN with ResNeXt-101 FPN backbone gets 56.6 AP for the person category on COCO val2017.
  • Faster R-CNN with ResNet-101 FPN backbone gets 55.7 AP for the person category on COCO val2017.

https://github.com/facebookresearch/detectron2/blob/main/MODEL_ZOO.md
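
As a rough sketch (my assumptions, not necessarily the exact config or script used above), person-only boxes can be pulled from the Detectron2 model zoo's ResNeXt-101 FPN Faster R-CNN like this:

```python
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_X_101_32x8d_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.05   # keep low-score boxes; filter later if needed

predictor = DefaultPredictor(cfg)
image = cv2.imread("val2017/000000000785.jpg")            # hypothetical COCO val2017 image path
instances = predictor(image)["instances"].to("cpu")
persons = instances[instances.pred_classes == 0]          # class 0 is "person" in COCO
print(persons.pred_boxes.tensor, persons.scores)
```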
