
Some confusion about the paper #53

Closed
quxu91 opened this issue Jul 20, 2022 · 6 comments

Comments

quxu91 commented Jul 20, 2022

Hi, thanks for your great work!
I have a question about your paper: in the MOT17 experiments section, did you evaluate on the official MOT17 test set or on a held-out part of the training set?

@quxu91 quxu91 changed the title Some confusion about the code Some confusion about the paper Jul 20, 2022
@timmeinhardt (Owner) commented
Both are used but for different things. For the test set submissions in the benchmark section, we used the MOT17 test set. For the ablation experiments we used a 50-50 frame split on the training data, i.e., the first 50% of frames of each sequence are for training and the latter 50% for validation.
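The 50-50 frame split described above can be sketched in a few lines of Python. This is an illustrative helper, not code from the repository; the sequence length used in the example is MOT17-02's 600 frames:

```python
def split_sequence(num_frames: int):
    """Split a sequence's 1-based frame indices 50-50:
    the first half for training, the second half for validation."""
    mid = num_frames // 2
    train_frames = list(range(1, mid + 1))
    val_frames = list(range(mid + 1, num_frames + 1))
    return train_frames, val_frames

# Example: MOT17-02 has 600 frames.
train, val = split_sequence(600)
print(len(train), len(val))          # 300 300
print(train[0], train[-1])           # 1 300
print(val[0], val[-1])               # 301 600
```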

@quxu91 (Author) commented Aug 4, 2022

Thanks for your reply!
However, I am still confused about one thing: are the training-set results for public and private detections generated by running track.py? And how do I generate the test-set results for public and private detections, respectively?

@timmeinhardt (Owner) commented
Ablation studies are evaluated on splits of the training set where we have ground truth. The test set ground truth is not available. One has to generate prediction files and submit them to the https://motchallenge.net/ evaluation server.
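The prediction files the server expects follow the MOTChallenge text format: one file per sequence, one comma-separated row per box with `frame, id, bb_left, bb_top, bb_width, bb_height, conf` followed by three `-1` placeholders. A minimal writer might look like this (the helper name and example rows are illustrative, not part of the repository):

```python
import csv

def write_motchallenge_file(path, tracks):
    """Write tracker output in MOTChallenge submission format.

    `tracks` is an iterable of (frame, track_id, x, y, w, h, conf)
    tuples; each becomes one row ending in the three -1 placeholders
    used for the unused world-coordinate fields.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for frame, track_id, x, y, w, h, conf in tracks:
            writer.writerow([frame, track_id, x, y, w, h, conf, -1, -1, -1])

# Hypothetical example: two frames of a single track.
rows = [
    (1, 1, 912.0, 484.0, 97.0, 109.0, 1.0),
    (2, 1, 914.0, 485.0, 97.0, 109.0, 1.0),
]
write_motchallenge_file("MOT17-01.txt", rows)
```

One such file per test sequence (e.g. `MOT17-01.txt` through `MOT17-14.txt`) is then uploaded to the evaluation server.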

@quxu91 (Author) commented Aug 5, 2022

@timmeinhardt Thanks for your quick reply!
When I submit results to the evaluation server, do I need to specify whether the tracker uses public or private detections? If so, where do I set that?

@timmeinhardt (Owner) commented
There is a setting when you create a tracker on the MOTChallenge web page that places your tracker on either the public or the private leaderboard. But this only matters when you want to publish the tracker; the ground truth for public and private evaluation is the same.

@quxu91 (Author) commented Aug 8, 2022

Thanks for your reply! Closing the issue.

@quxu91 quxu91 closed this as completed Aug 8, 2022