Does "smaller percentage" in Evaluation-Notice mean "precision"? #15
Comments
Hi, thanks for raising this very good question. In the NOCS paper, their metric is introduced as: "For 6D pose estimation, we report the average precision of object instances for which the error is less than m cm for translation and n° for rotation similar to [39, 30]." (NOCS, page 6, middle right column). However, both [39, 30] (Shotton et al. 2013 and Li et al. 2018) use the "smaller percentage" metric (the same as ours), which is not mAP. NOCS can use mAP as its metric mainly because it is built on a detection model; most other methods, which do not produce per-frame detection results, cannot compute an mAP score (our 6-PACK is a tracking model). To achieve a fair comparison with NOCS, we re-evaluated all of the NOCS results under the "<5cm5° precision" metric originally introduced by [39, 30], and we report those scores in the experiment section of our paper.
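To make the "<5cm5° precision" metric concrete, here is a minimal sketch. The per-frame error values are hypothetical, invented purely for illustration; the metric itself is just the fraction of frames whose translation error is below 5 cm and whose rotation error is below 5°:

```python
import numpy as np

# Hypothetical per-frame pose errors for illustration (not real results).
trans_err = np.array([2.1, 6.3, 4.0, 1.2, 7.5])  # translation error in cm
rot_err = np.array([3.0, 4.5, 8.0, 2.0, 1.0])    # rotation error in degrees

# "<5cm5° precision": fraction of frames whose error is below BOTH thresholds.
success = (trans_err < 5.0) & (rot_err < 5.0)
accuracy = success.mean()
print(f"<5cm5 accuracy: {accuracy:.2%}")  # 2 of 5 frames pass both tests
```

Because this is a single per-frame success rate, it needs no detection confidences or per-instance ranking, which is why tracking methods without a detector can report it while mAP is out of reach.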
Okay, thank you.
Can I ask what the difference is between "smaller percentage" and "AUC (area under the curve)"?
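The distinction can be sketched in a few lines. The "smaller percentage" metric fixes one threshold and reports the pass rate at that single cut-off, whereas AUC sweeps the threshold over a range and integrates the resulting success-rate curve, so it does not depend on any one cut-off. The error values and the 10 cm sweep range below are illustrative assumptions, not numbers from the paper:

```python
import numpy as np

# Hypothetical per-frame errors (e.g. a distance error in cm), for illustration.
errors = np.array([0.5, 1.2, 3.0, 4.8, 9.0])

# "Smaller percentage": pass rate at a single fixed threshold (here 5 cm).
precision_at_5cm = (errors < 5.0).mean()

# AUC: integrate the success rate over a sweep of thresholds (0 to 10 cm,
# an assumed range), then normalize by the range so the result is in [0, 1].
thresholds = np.linspace(0.0, 10.0, 1001)
success_rates = np.array([(errors < t).mean() for t in thresholds])
auc = np.trapz(success_rates, thresholds) / thresholds[-1]
print(precision_at_5cm, auc)
```

Note how the frame with a 9 cm error contributes nothing to the 5 cm precision but still pulls the AUC up slightly for thresholds above 9 cm, which is exactly the sensitivity difference between the two summaries.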
Hi, does the "smaller percentage" you mentioned in Evaluation-Notice mean "precision", while the metric used in NOCS is mAP?