
About evaluation metric #5

Closed
Pandaxia8 opened this issue Dec 5, 2021 · 2 comments

Comments

@Pandaxia8

I'm not sure which evaluation metric you used. Normally, IoU=0.50:0.95 is the official COCO evaluation metric. But when I evaluate the model you provided (with iterative box refinement and two-stage), the result is far lower than the one reported in the paper. If the evaluation only considers IoU=0.50, the result is close to your paper's, even slightly better. The specific results are as follows:

Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.241
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.425
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.228
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.038
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.220
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.481
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.180
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.315
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.341
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.098
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.336
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.625

@encounter1997
Owner

encounter1997 commented Dec 6, 2021

Thank you for your interest.
As mentioned in the Experimental Setup in Section 5.1.1 of our paper, we follow existing domain adaptive object detection methods, e.g. Domain Adaptive Faster RCNN, to adopt the Mean Average Precision with a threshold of 0.5 as the evaluation metric. In our paper, all methods built on Deformable DETR are trained without iterative box refinement or two-stage processing.

@Pandaxia8
Author


Thanks for your reply! :)
