Cross-dataset testing? #4
Thanks for the quick response. I am having trouble understanding this table; can you please explain it a bit? Also, there is no mention of cross-dataset testing in the paper. Do you know why these numbers are so low compared to the numbers in Table 4 of the paper? Why is inter-dataset transfer still such a big challenge? And what steps should I take if I want to build a person re-ID system based on your repository that works on a custom dataset?
"Ours" in that table are the direct-transfer results, e.g., Market->Duke: the network is trained on Market and directly tested on Duke with no domain adaptation. Compared to the supervised single-domain setting, the large domain gap between the two datasets clearly drags down the accuracy. So to improve performance on a custom dataset, domain adaptation is needed.
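To make the direct-transfer protocol concrete, here is a minimal sketch of how such cross-dataset numbers are typically computed: extract features on the target dataset with the source-trained model, then match each query against the gallery by nearest neighbor. The function name and the toy 2-D features below are illustrative assumptions, not code from this repository.

```python
import numpy as np

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    """Rank-1 accuracy: fraction of queries whose nearest gallery
    feature (by cosine similarity) shares the same identity."""
    # L2-normalize so the dot product equals cosine similarity
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sim = q @ g.T                 # (num_query, num_gallery) similarities
    nearest = sim.argmax(axis=1)  # best gallery match per query
    return float(np.mean(gallery_ids[nearest] == query_ids))

# Toy example: 2-D vectors standing in for embeddings extracted by a
# model trained on the source dataset (e.g., Market) and applied,
# unchanged, to the target dataset (e.g., Duke).
gallery_feats = np.array([[1.0, 0.0], [0.0, 1.0]])
gallery_ids = np.array([0, 1])
query_feats = np.array([[0.9, 0.1], [0.2, 0.8]])
query_ids = np.array([0, 1])
print(rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids))  # 1.0
```

In direct transfer the model parameters are frozen; only this evaluation step touches the target data, which is why the domain gap shows up directly in the scores.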
Thanks a lot, @xiaodongyang. By domain adaptation, do you mean that I can fine-tune the model trained by you on my custom dataset?
Fine-tuning is one way, but people are more interested in unsupervised domain adaptation, i.e., using no annotation on the custom data. You can find quite a few recent re-ID papers on this topic.
Here is a brief idea about adaptation and the table of state-of-the-art methods.
The primary motivation is that collecting ID annotation is relatively expensive in terms of human effort and time. Is it possible to use less annotation on the target dataset, especially ID labels? If we have a model with good scalability, it can work across different datasets. Our method (DG-Net) does not see any target data and is trained only on the source dataset. Other methods may apply clustering to further fine-tune the model on target data.
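The clustering step mentioned above can be sketched as follows: cluster the unlabeled target features and treat cluster indices as pseudo identities for fine-tuning. This is a minimal k-means illustration of the idea, not the procedure of any specific paper; the function name and toy features are assumptions.

```python
import numpy as np

def pseudo_labels(feats, k, iters=10):
    """Assign cluster-based pseudo identities to unlabeled target
    features via a minimal k-means; these labels would then serve as
    supervision for fine-tuning on the target dataset."""
    centers = feats[:k].copy()  # simple deterministic initialization
    for _ in range(iters):
        # distance of every feature to every center
        dists = np.linalg.norm(feats[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned features
        for c in range(k):
            members = feats[labels == c]
            if len(members):
                centers[c] = members.mean(axis=0)
    return labels

# Two well-separated blobs of target features -> two pseudo identities
target = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]])
labels = pseudo_labels(target, k=2)
print(labels)  # the two blobs receive two distinct pseudo IDs
```

In practice, re-ID adaptation methods iterate this loop: extract features, re-cluster, fine-tune on the pseudo labels, and repeat as the features improve.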
@anant15 Please let us know if you have any other questions; otherwise, please close the issue.
Have you tried cross-dataset testing, i.e., training on one dataset, say market1501, and testing on another, say cuhk03 or duke? Have you come across any model with code that has tried this? I would be really grateful; thanks in advance.