
Does "smaller percentage" in Evaluation-Notice mean "precision"? #15

Closed
Liuchongpei opened this issue Jul 20, 2020 · 3 comments

@Liuchongpei

Hi, does the "smaller percentage" you mentioned in Evaluation-Notice mean "precision"? The metric used in NOCS is mAP.

@j96w
Owner

j96w commented Jul 24, 2020

Hi, thanks for raising this very good question. In the NOCS paper, their metric is introduced as follows:

"For 6D pose estimation, we report the average precision of object instances for which the error is less than m cm for translation and n° for rotation similar to [39, 30]. " (NOCS, page 6, middle right column)

However, both [39, 30] (Shotton et al. 2013 and Li et al. 2018) use the "smaller percentage" metric (the same as ours), which is not mAP.

NOCS can use mAP as its metric mainly because it is built on a detection model. However, most other methods that do not produce per-frame detection results cannot compute an mAP score (our 6-PACK is a tracking model). To achieve a fair comparison with NOCS, we re-tested all the NOCS evaluation results with the "<5cm 5° precision" metric originally introduced by [39, 30] and report those scores in the experiment section of our paper.
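For readers unfamiliar with the "<5cm 5°" metric described above, a minimal sketch of how it is typically computed is shown below: a frame counts as correct only when both its translation error and its rotation error fall under the thresholds, and the reported score is the fraction of correct frames. The function name `pose_precision` and the example error values are hypothetical, not taken from the 6-PACK codebase.

```python
import numpy as np

def pose_precision(trans_err_cm, rot_err_deg, t_thresh=5.0, r_thresh=5.0):
    """Fraction of frames whose pose error is below BOTH thresholds
    (the "<5cm 5 degree" precision metric from Shotton et al. / Li et al.)."""
    trans_err_cm = np.asarray(trans_err_cm, dtype=float)
    rot_err_deg = np.asarray(rot_err_deg, dtype=float)
    # A frame is "correct" only if translation AND rotation are within bounds.
    correct = (trans_err_cm < t_thresh) & (rot_err_deg < r_thresh)
    return correct.mean()

# Hypothetical per-frame errors for four frames:
t_err = [1.2, 4.9, 6.0, 2.0]   # translation error in cm
r_err = [3.0, 5.5, 2.0, 4.0]   # rotation error in degrees
print(pose_precision(t_err, r_err))  # → 0.5 (only frames 1 and 4 pass both)
```

Unlike mAP, this score needs no detection confidences or precision-recall curve, which is why a tracking method without per-frame detections can still report it.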

@Liuchongpei
Author

Okay, thank you.

@taeyeopl

Can I ask what the difference is between "smaller percentage" and "AUC (area under the curve)"?
