
Got 0 in motmetrics #17

Open
B10515007 opened this issue Jul 15, 2021 · 3 comments

@B10515007

I ran test_net.py with MOT17 and got the result below.

[screenshot of the evaluation report]

What happened?
I didn't edit any code in this project.

@bingshuai2019

I'm not sure whether you've loaded the ground truth correctly: all predictions are counted as false positives (FP), and there are no false negatives (FN) or ID switches. This suggests there are no ground-truth bounding boxes for any of the videos.
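
To see why that pattern points at missing ground truth, here is a minimal motmetrics sketch (illustrative only, not this repository's evaluation code; the frame contents and the summary name are made up): when a frame has tracker hypotheses but an empty ground-truth list, every hypothesis is counted as a false positive, there is nothing to miss (FN = 0), and the MOTA denominator (the number of ground-truth objects) is zero, which is why it shows up as -inf%.

import numpy as np
import motmetrics as mm

# Sketch for illustration; this mirrors motmetrics' behaviour,
# not this repo's test_net.py.
acc = mm.MOTAccumulator(auto_id=True)

# One frame: two tracker hypotheses, no ground-truth objects.
gt_ids = []                             # empty ground truth
hyp_ids = [1, 2]                        # predicted track IDs
dists = np.empty((0, len(hyp_ids)))     # 0 x 2 distance matrix: nothing can match
acc.update(gt_ids, hyp_ids, dists)

mh = mm.metrics.create()
summary = mh.compute(
    acc,
    metrics=['num_frames', 'num_false_positives', 'num_misses',
             'num_switches', 'mota'],
    name='no_gt_demo',
)
print(summary)  # 2 false positives, 0 misses, 0 switches, MOTA undefined (-inf)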

@mondrasovic

mondrasovic commented Sep 6, 2021

I ended up with the same problem. I have been playing with this architecture for quite some time and I am really surprised that I cannot get it to work. My understanding of the inner workings of this tracker must be wrong.

Here is the output of the dataset loading procedure.

INFO:root:Loading annotation file E:/datasets\MOT17\annotation\anno.json...
INFO:root:loaded anno json
INFO:root:loaded 42 samples
INFO:root:Split subpath: annotation\splits.json
INFO:root:Loaded splits with # samples: {'test': 21, 'train': 21}
INFO:root:Loading annotation file E:/datasets\MOT17\annotation\anno_pub_detection.json...
INFO:root:loaded anno json
INFO:root:loaded 42 samples
INFO:root:Split subpath: annotation\splits.json
INFO:root:Loaded splits with # samples: {'test': 21, 'train': 21}

The evaluation report is the same as the one already shown above:

               num_frames MT PT ML IDs     FP FN  MOTA MOTP IDF1
MOT17-01-DPM          450  0  0  0   0   2688  0 -inf%  NaN 0.0%
MOT17-01-FRCNN        450  0  0  0   0   2813  0 -inf%  NaN 0.0%
MOT17-01-SDP          450  0  0  0   0   3066  0 -inf%  NaN 0.0%
MOT17-03-DPM         1500  0  0  0   0  31028  0 -inf%  NaN 0.0%
MOT17-03-FRCNN       1500  0  0  0   0  24860  0 -inf%  NaN 0.0%
MOT17-03-SDP         1500  0  0  0   0  16304  0 -inf%  NaN 0.0%
MOT17-06-DPM         1194  0  0  0   0   4143  0 -inf%  NaN 0.0%
MOT17-06-FRCNN       1194  0  0  0   0   4384  0 -inf%  NaN 0.0%
MOT17-06-SDP         1194  0  0  0   0   3904  0 -inf%  NaN 0.0%
MOT17-07-DPM          500  0  0  0   0   5864  0 -inf%  NaN 0.0%
MOT17-07-FRCNN        500  0  0  0   0   5880  0 -inf%  NaN 0.0%
MOT17-07-SDP          500  0  0  0   0   5502  0 -inf%  NaN 0.0%
MOT17-08-DPM          625  0  0  0   0   7823  0 -inf%  NaN 0.0%
MOT17-08-FRCNN        625  0  0  0   0   6827  0 -inf%  NaN 0.0%
MOT17-08-SDP          625  0  0  0   0   7710  0 -inf%  NaN 0.0%
MOT17-12-DPM          900  0  0  0   0   3318  0 -inf%  NaN 0.0%
MOT17-12-FRCNN        900  0  0  0   0   2919  0 -inf%  NaN 0.0%
MOT17-12-SDP          900  0  0  0   0   3383  0 -inf%  NaN 0.0%
MOT17-14-DPM          750  0  0  0   0   5779  0 -inf%  NaN 0.0%
MOT17-14-FRCNN        750  0  0  0   0   5499  0 -inf%  NaN 0.0%
MOT17-14-SDP          750  0  0  0   0   5296  0 -inf%  NaN 0.0%
OVERALL             17757  0  0  0   0 158990  0 -inf%  NaN 0.0%

Thank you for your help in advance.

@mondrasovic

I have just solved it. Well, I would not exactly call it that; it was my fault for not checking my assumptions. The thing is that MOT17 does not have any publicly available ground truth for its test set (e.g., see this issue). Even though I have known this all along, I still spent days trying to figure this "bug" out.

If you look more closely, you will notice that the MOT17 test set does not contain the gt folder. This causes the ingested annotation files to contain "id": -1 for every detection. In contrast, the train split does contain this folder. So what I did was evaluate the tracker on the train set instead of the test set, using the additional --set train flag. Now everything works and produces proper results.
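
For anyone who wants to verify this on their own copy of the dataset, below is a small sanity-check sketch (my own illustration, not code from this repository; the paths are placeholders). It lists which sequence folders under an MOT17 split directory actually contain a gt/gt.txt file; on the official release only the train split does.

from pathlib import Path

def sequences_with_gt(split_dir):
    """Split MOT17 sequence folders into those with and without gt/gt.txt."""
    with_gt, without_gt = [], []
    for seq in sorted(Path(split_dir).iterdir()):
        if not seq.is_dir():
            continue
        if (seq / 'gt' / 'gt.txt').exists():
            with_gt.append(seq.name)
        else:
            without_gt.append(seq.name)
    return with_gt, without_gt

# Placeholder paths -- adjust to your local dataset root.
# train_ok, train_missing = sequences_with_gt('E:/datasets/MOT17/train')
# test_ok, test_missing = sequences_with_gt('E:/datasets/MOT17/test')
# print('train sequences without gt:', train_missing)  # expected: []
# print('test sequences with gt:', test_ok)            # expected: []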

The moral of the story is that if you want the official MOT17 test-set score, you have to upload your results to the dedicated MOT Challenge website.
