Hello, thank you very much for sharing the open source code and your article. After reading your paper carefully, I have a question I am very curious about. Regarding evaluation metrics, is there any difference between the Accuracy proposed in your paper and the Success adopted in the "Leveraging Shape Completion for 3D Siamese Tracking" paper? The two metrics seem to be the same to me. I would appreciate it if you could answer this question.
Hi, thank you for your interest in our project! @StiphyJay
As you have noticed, the metrics proposed in our paper are very similar to those in the paper you mentioned (which uses the OPE evaluation metrics, I believe). The design of our metrics is largely motivated by the OPE metrics, so they share the same intuition. But there are still some differences:
To evaluate how precisely a tracker localizes the target, the OPE metrics use precision while we use accuracy. Precision and accuracy differ in the following two aspects:
Precision only concerns the frames whose prediction quality already passes a certain threshold, while accuracy considers all the frames. This design avoids meaningless precision values in certain cases. For example, when the tracker quickly loses the target, precision will only consider the beginning frames, leading to a very high score that does not reflect the true ability of the tracker at all.
Precision uses distance, while accuracy uses IOU. Both choices are reasonable. We simply follow the ideas in the object detection community and use IOU.
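To make the two differences concrete, here is a minimal sketch of the two metrics as I described them above. The function names and the exact thresholding convention are illustrative, not the actual implementation in either codebase:

```python
def ope_precision(center_distances, dist_threshold=2.0):
    """OPE-style precision (sketch): only frames whose prediction
    quality passes the threshold contribute to the score."""
    passed = [d for d in center_distances if d <= dist_threshold]
    if not passed:
        return 0.0
    # fraction of frames whose center distance is within the threshold
    return len(passed) / len(center_distances)

def accuracy(ious):
    """Our accuracy (sketch): mean IOU over ALL frames, so frames
    where the target is lost still count against the tracker."""
    return sum(ious) / len(ious)
```

Note how a tracker that loses the target early still pays for every subsequent low-IOU frame under `accuracy`, whereas a distance-threshold metric can ignore those frames entirely.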
Success and robustness are also different:
Success counts all the frames whose IOU passes the threshold, but robustness only concerns the frames before the first failure. For example, if our tracker has an IOU sequence of [0.6, 0.6, 0.6, 0.4, 0.4, 0.6] and the threshold is 0.5, success considers all 4 frames with IOU=0.6, while robustness only considers the first 3 frames. As you can see, robustness demands more from a tracker, since recovering after a failure no longer helps.
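The example above can be sketched in a few lines. This is only an illustration of the counting rule as described (frame counts, not the final normalized scores), and the function names are mine:

```python
def success_frames(ious, threshold=0.5):
    # success: count every frame whose IOU passes the threshold,
    # regardless of where it occurs in the sequence
    return sum(1 for iou in ious if iou > threshold)

def robustness_frames(ious, threshold=0.5):
    # robustness: count only the frames before the first failure;
    # once a frame falls below the threshold, stop counting
    count = 0
    for iou in ious:
        if iou <= threshold:
            break
        count += 1
    return count

ious = [0.6, 0.6, 0.6, 0.4, 0.4, 0.6]
print(success_frames(ious))     # 4 (frames 1, 2, 3, and 6)
print(robustness_frames(ious))  # 3 (frames before the first failure)
```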
I hope the discussion above is useful for you! Please feel free to contact me for further discussion!