
Why is the mAP in your paper lower than others'? #7

Closed
llf1234 opened this issue Apr 25, 2018 · 3 comments

Comments

llf1234 commented Apr 25, 2018

Recently, I reproduced your excellent work HashNet and found that the mAP on CIFAR-10 is near 0.8, which is higher than the value reported in your follow-up work HashGAN.
But that is not all.
In your paper, you report that CNNH achieves 0.5696 on NUS-WIDE (16 bits), but their paper reports 0.611 on NUS-WIDE (12 bits). In their original paper, the network is composed of three conv-pooling layers, one fully connected layer, and an output layer, which is even smaller than AlexNet.
Other hashing methods use the CNN-F model as backbone, whose architecture is similar to AlexNet, and they achieve an mAP of about 0.8898 on CIFAR-10 (16 bits), but your paper reports much lower numbers.
So, can you tell me why your mAP is lower than others'? It is very important for my future work. Thanks!

caozhangjie (Collaborator) commented

I think we do not report any results on CIFAR-10, but instead use a more difficult dataset, ImageNet.
In the NUS-WIDE dataset, many image URLs are invalid, so the copy of NUS-WIDE we downloaded may be slightly different from theirs. Also, they use only the images associated with the 21 most frequent concept tags (classes), whereas we use all the images in NUS-WIDE, which are associated with 81 concepts. Our task is therefore a little more difficult than theirs, which causes the performance to be lower.
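The protocol dependence described above can be made concrete. Below is a minimal sketch (not the authors' actual evaluation code) of mAP under Hamming ranking, where a retrieved image counts as relevant if it shares at least one label with the query. Since relevance is defined through the label sets, restricting NUS-WIDE to the 21 most frequent concepts versus using all 81 changes the relevance sets, and hence the reported mAP, even with identical hash codes. All names here are illustrative.

```python
import numpy as np

def mean_average_precision(query_codes, db_codes, query_labels, db_labels):
    """query_codes, db_codes: {-1, +1} arrays of shape (n, bits);
    query_labels, db_labels: multi-hot arrays of shape (n, num_classes)."""
    aps = []
    for q_code, q_label in zip(query_codes, query_labels):
        # Hamming distance via inner product of +/-1 codes:
        # d = (bits - <q, x>) / 2
        dist = (db_codes.shape[1] - db_codes @ q_code) / 2
        order = np.argsort(dist, kind="stable")
        # Relevant if the database item shares at least one label with the query
        relevant = (db_labels[order] @ q_label) > 0
        if relevant.sum() == 0:
            continue  # query with no relevant items contributes no AP term
        ranks = np.arange(1, len(relevant) + 1)
        precision_at_rank = np.cumsum(relevant) / ranks
        aps.append(precision_at_rank[relevant].mean())
    return float(np.mean(aps))
```

With this definition, dropping rarely-tagged images (or rare classes) removes many hard queries and shrinks the relevance sets, which is one plausible reason numbers reported under the 21-concept protocol are not directly comparable to the 81-concept protocol.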

llf1234 (Author) commented Apr 25, 2018

Thanks for your interpretation! But in your colleague Yue Cao's paper, "HashGAN: Deep Learning to Hash with Pair Conditional Wasserstein GAN", they report results on CIFAR-10 using HashNet. Their result (0.643 at 16 bits) is much lower than others (above 0.8). The only difference is that their backbone is CNN-F rather than AlexNet. I reproduced your work in TensorFlow, and it also reaches 0.8. I am puzzled.

caozhangjie (Collaborator) commented

You can ask them about the details of their experiments. Yue Cao's email is caoyue10@gmail.com.
