I'm not sure which evaluation metric you used. Normally, IoU=0.50:0.95 is the official COCO evaluation metric. But when I run the model you provided with iterative box refinement and two-stage enabled, the results are far lower than those in the paper. If the evaluation only considers IoU=0.50, the results are close to your paper's, or even better. Here are the specific results:
```
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.241
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.425
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.228
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.038
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.220
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.481
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.180
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.315
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.341
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.098
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.336
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.625
```
Thank you for your interest.
As mentioned in the Experimental Setup (Section 5.1.1) of our paper, we follow existing domain adaptive object detection methods, e.g., Domain Adaptive Faster RCNN, and adopt mean Average Precision (mAP) with an IoU threshold of 0.5 as the evaluation metric. In our paper, all methods built on Deformable DETR are trained without iterative box refinement or two-stage processing.
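To see why the two protocols give such different numbers, here is a minimal, illustrative sketch (not code from the paper or repo) of why AP@[0.50:0.95] is much stricter than the mAP@0.5 protocol above: a detection that counts as a true positive at IoU=0.50 can fail the higher thresholds (0.55, 0.60, ..., 0.95), which drags the averaged score down. The box coordinates below are made up for illustration.

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

gt = (0, 0, 100, 100)        # hypothetical ground-truth box
pred = (20, 0, 120, 100)     # hypothetical prediction, shifted right by 20 px

overlap = iou(gt, pred)      # 8000 / 12000 ≈ 0.667
assert overlap >= 0.50       # true positive under the mAP@0.5 protocol
assert overlap < 0.70        # false positive at COCO thresholds 0.70 and above
```

If you evaluate with pycocotools, the AP@0.50 number reported under `IoU=0.50` in the summary corresponds to the metric used in the paper, while the `IoU=0.50:0.95` line is the averaged COCO metric.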