For testing, I ran the metrics with the MOT17 train labels as both prediction and ground truth, but this is the result:

This special case uses the MOT17-02-DPM GT labels as both prediction and GT.
This is probably caused by the non-pedestrian objects that are present in the MOT17 ground-truth, which have confidence set to 0. The apps.eval_motchallenge script filters the ground-truth by min_confidence while the same filtering is not applied to the predicted tracks. See the difference here:
As a result, the non-pedestrian objects will be present in the predictions but not the ground-truth (100% recall but < 100% precision, as you are seeing).
The conf value contains the detection confidence in the det.txt files. For the ground truth, it acts as a flag whether the entry is to be considered. A value of 0 means that this particular instance is ignored in the evaluation, while any other value can be used to mark it as active. For submitted results, all lines in the .txt file are considered.
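A minimal sketch of the filtering described above, assuming the standard MOT text format (`frame,id,x,y,w,h,conf,class,vis`); the file contents, helper name, and threshold value here are illustrative, not the actual `apps.eval_motchallenge` code. Applying the same confidence filter to both files removes the conf-0 (non-pedestrian) entries from the prediction side as well:

```python
# Hypothetical helper: filter MOT-format rows by the conf field (index 6).
# apps.eval_motchallenge applies such a filter only to the ground truth,
# which is why GT entries with conf == 0 vanish from the GT but survive
# when the same file is reused as predictions.

def filter_by_confidence(rows, min_confidence):
    """Keep only entries whose conf field exceeds min_confidence."""
    return [r for r in rows if float(r[6]) > min_confidence]

# Illustrative GT rows: conf == 0 flags an ignored (non-pedestrian) entry.
gt_rows = [
    ["1", "1", "10", "20", "50", "100", "1", "1", "1.0"],  # active pedestrian
    ["1", "2", "30", "40", "50", "100", "0", "7", "1.0"],  # ignored, conf == 0
]

kept = filter_by_confidence(gt_rows, min_confidence=0.0)
print(len(kept))  # → 1: only the active entry survives
```

Running the same filter over the "predictions" before evaluation should make the conf-0 entries disappear from both sides and drive the spurious FP count to zero.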
To restate: for testing, I ran the metrics with the MOT17 train labels as both prediction and GT — specifically, the MOT17-02-DPM GT labels for both. Is it clear now why we get 11422 FP?