
Low accuracy in Sysu #28

Closed
absagargupta opened this issue Jun 6, 2020 · 17 comments

@absagargupta

Hello there.

This is not a bug report; I am asking for a suggestion. I ran MMT on the Market-1501 and SYSU (a mix of RGB and IR images) datasets. I am getting very low mAP and CMC scores. The individual accuracy of k-means on Market is in the high 80-90s, whereas on SYSU it is about 0.7% at rank-5. Any idea why this might be happening?

@yxgeee
Owner

yxgeee commented Jun 6, 2020

Could you provide more details? For example, how did you split the Market and SYSU datasets? Market as source and SYSU (both RGB & IR) as target?

@absagargupta
Author

Yes, Market as source and SYSU (both RGB and IR) as target.

I also tried using SYSU as the source and a modified SYSU as the target, and even then the accuracy was quite low.

@yxgeee
Owner

yxgeee commented Jun 7, 2020

I think the major reason might be that the RGB and IR images in the SYSU dataset share the same identities; however, it is quite difficult for a clustering algorithm to assign overlapping IDs to RGB and IR images. Intuitively, an RGB image and an IR image of the same person are generally far apart in the latent space.

@yxgeee
Owner

yxgeee commented Jun 7, 2020

During inference on the SYSU test set, the trained model is required to identify the same person's IR images given his/her RGB images, and vice versa. So it is important to assign overlapping pseudo labels to RGB and IR images during training; however, this is hard with the current algorithm.
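This cross-modality label overlap can be measured directly. Below is a minimal diagnostic sketch (the function name and inputs are illustrative, not part of the MMT codebase): given each image's pseudo label, its ground-truth identity, and a modality flag, it computes how often an RGB/IR pair of the same person shares a pseudo label.

```python
import numpy as np

def cross_modal_agreement(cluster_ids, person_ids, is_ir):
    """Fraction of (RGB, IR) image pairs of the same person that received
    the same pseudo label. A value near zero means the clustering is
    splitting each identity by modality."""
    cluster_ids = np.asarray(cluster_ids)
    person_ids = np.asarray(person_ids)
    is_ir = np.asarray(is_ir, dtype=bool)
    agree = total = 0
    for pid in np.unique(person_ids):
        rgb = cluster_ids[(person_ids == pid) & ~is_ir]  # this person's RGB labels
        ir = cluster_ids[(person_ids == pid) & is_ir]    # this person's IR labels
        for c_rgb in rgb:
            for c_ir in ir:
                total += 1
                agree += int(c_rgb == c_ir)
    return agree / total if total else 0.0
```

Running this on the training-time pseudo labels would show whether the low test accuracy traces back to modality-split clusters.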

@yxgeee
Owner

yxgeee commented Jun 7, 2020

I have one idea: you could try loading the RGB images in SYSU as grayscale, so that they might be closer to the IR images in the latent space.

@yxgeee
Owner

yxgeee commented Jun 7, 2020

Remember that if you use grayscale RGB images for training, you should also load them as grayscale when testing. It may also be better to load the source-domain images as grayscale.
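The grayscale trick can be sketched as a small preprocessing step. This is a minimal NumPy illustration (the real data loader would apply an equivalent transform in its own pipeline; the function name is illustrative): the image is collapsed to luminance and then replicated back to three channels, so it still fits a backbone that expects RGB input.

```python
import numpy as np

def to_grayscale_rgb(img: np.ndarray) -> np.ndarray:
    """img: HxWx3 uint8 RGB array. Returns an HxWx3 array whose three
    channels all hold the luminance, so the network input shape is unchanged."""
    # ITU-R BT.601 luma weights (the same convention as PIL's "L" mode)
    gray = (img[..., 0] * 0.299 + img[..., 1] * 0.587 + img[..., 2] * 0.114)
    gray = gray.astype(np.uint8)
    return np.stack([gray, gray, gray], axis=-1)
```

Applying the same function in both the training and testing loaders keeps the train/test distributions consistent, which is the point of the advice above.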

@absagargupta
Author

Yeah, that makes sense. I will use the grayscale + IR images of SYSU as the source and the grayscale + IR images of the modified SYSU as the target. Maybe it will work. If it does not, I will try loading grayscale for both source and target and get back to you with the results.

@absagargupta
Author

absagargupta commented Jun 8, 2020

I tried training with SYSU (grayscale + IR) as the source and SYSU (grayscale + IR) as the target; it still gives quite bad results: mAP 0.4%, rank-1 0.4%, rank-5 1.9%, and rank-10 3.4%.

@yxgeee yxgeee changed the title Low accuracy in Market -1501 Low accuracy in Sysu Jul 28, 2020
@Jennifer0329

Dear author, I ran the code on Market2Duke and Duke2Market, and the results are similar to those reported in the paper. However, the results for Duke2MSMT17 and Market2MSMT17 show an obvious drop in performance; the best mAP is only about 15%. I made no changes to the training code. Could you please give me some advice about the likely reasons, or share the details of training on the MSMT17 dataset? Thanks.

@yxgeee
Owner

yxgeee commented Oct 22, 2020

Maybe you can try iters=800 for the MSMT dataset, but I did not actually meet this issue in my training. I guess it may be caused by randomness. Someone else has discussed this issue with me: #25

@Jennifer0329

OK, I see. I ran the code on the MSMT17_V2 dataset. You have mentioned that DBSCAN-based MMT achieved better performance on MSMT17; have you tried V2 + DBSCAN + iters=400 on Market2MSMT or Duke2MSMT? What was the result? Thanks.

@yxgeee
Owner

yxgeee commented Oct 22, 2020

I used the MSMT17_V1 dataset; I did not try MSMT17_V2.
Yes, DBSCAN-based MMT achieves better performance than k-means-based MMT in most cases.
See Table 2 in https://arxiv.org/pdf/2006.02713v1.pdf for the results of DBSCAN-based MMT.

@Jennifer0329

Maybe the drop is due to the version of MSMT17? It confused me a lot. And when you train on the MSMT17 dataset, you change rho from 1.6e-3 to 0.7e-3, right?

@yxgeee
Owner

yxgeee commented Oct 22, 2020

You could try MSMT17_V1 to check.
In the SpCL paper, MMT-DBSCAN adopts a constant eps=0.6 instead of rho for training.
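The constant-eps clustering mentioned above can be sketched with scikit-learn. This is a toy illustration only: the actual MMT-DBSCAN pipeline clusters on k-reciprocal Jaccard distances, whereas here plain Euclidean distance on L2-normalized features is used, and the function name and min_samples value are assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def pseudo_labels(features: np.ndarray, eps: float = 0.6,
                  min_samples: int = 4) -> np.ndarray:
    """Assign one pseudo label per sample using DBSCAN with a fixed eps,
    instead of deriving eps from a density percentile rho.
    Samples labeled -1 are outliers and would get no pseudo label."""
    # L2-normalize so Euclidean distance is a monotone proxy for cosine distance
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    return DBSCAN(eps=eps, min_samples=min_samples,
                  metric="euclidean").fit_predict(feats)
```

The practical difference from the rho-based variant is that eps no longer changes with the target dataset's size or density, which removes one dataset-specific hyperparameter.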

@Jennifer0329

Would you please share the MSMT17_V1 download link with me, for research only? I had only got the V2 version before. Thanks.

@yxgeee
Owner

yxgeee commented Oct 22, 2020

You could send an email to Prof. Shiliang Zhang for the link.

@yxgeee yxgeee closed this as completed Nov 5, 2020
@gyh420

gyh420 commented Jan 5, 2024

Hi, where can I find MSMT17_V2?
