Some confusion about the paper #53
Hi, thanks for your great work!
I have a question about your paper: in the MOT17 experiment section, did you evaluate on the MOT17 test set, or on a part of the training set held out as a test set?

Comments

Both are used, but for different things. For the test set submissions in the benchmark section, we used the MOT17 test set. For the ablation experiments we used a 50-50 frame split on the training data, i.e., the first 50% of frames of each sequence are used for training and the latter 50% for validation.

Thanks for your reply!

Ablation studies are evaluated on splits of the training set, where we have ground truth. The test set ground truth is not available; one has to generate prediction files and submit them to the https://motchallenge.net/ evaluation server.

@timmeinhardt Thanks for your quick reply!

There is a setting when you create a tracker on the MOTChallenge webpage which puts your tracker on either the public or the private leaderboard, but this only matters when you want to publish the tracker. The ground truth for public and private evaluation is the same.

Thanks for your reply! Issue closed.
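The 50-50 split described above can be sketched as follows. This is a minimal, hypothetical illustration of splitting each training sequence's frames in half (first half for training, second half for validation); the function name and frame counts are assumptions, not code from the authors' repository.

```python
def split_sequence(num_frames):
    """Split a sequence 50-50 by frame index (1-based).

    Returns (train_frames, val_frames): the first half of the
    frames for training, the latter half for validation.
    """
    mid = num_frames // 2
    train = list(range(1, mid + 1))
    val = list(range(mid + 1, num_frames + 1))
    return train, val

# Example: a sequence with 600 frames yields frames 1-300 for
# training and frames 301-600 for validation.
train, val = split_sequence(600)
print(len(train), len(val))          # 300 300
print(train[-1], val[0], val[-1])    # 300 301 600
```

For sequences with an odd number of frames, integer division puts the extra frame in the validation half; the exact tie-breaking convention used in the paper is not stated in this thread.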