
the mAP on cifar10 is only 0.30 #28

Closed

chenshen03 opened this issue May 26, 2019 · 5 comments

Comments
@chenshen03

For CIFAR-10, I randomly select 100 images per class as the test query set and 500 images per class as the training set. The remaining images serve as the database set.

I trained HashNet with the following command:
python -u train.py --gpu_id 0 --dataset cifar --prefix resnet50_hashnet --hash_bit 48 --net ResNet50 --lr 0.0003 --class_num 1.0

The final mAP is only 0.302986.
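For reference, mAP in this setting is typically computed by ranking the database by Hamming distance for each query and averaging precision at the relevant positions. A minimal numpy sketch with illustrative names (not the repo's actual evaluation code):

```python
import numpy as np

def mean_average_precision(query_codes, db_codes, query_labels, db_labels, top_k=None):
    """mAP over Hamming-distance rankings of the database for each query.

    Codes are {-1, +1} arrays of shape (n, bits); labels are class ids.
    All names here are illustrative, not taken from the HashNet repo.
    """
    n_bits = db_codes.shape[1]
    aps = []
    for q_code, q_label in zip(query_codes, query_labels):
        # For +/-1 codes, Hamming distance = (bits - <a, b>) / 2.
        hamming = 0.5 * (n_bits - db_codes @ q_code)
        order = np.argsort(hamming, kind="stable")
        if top_k is not None:
            order = order[:top_k]
        relevant = db_labels[order] == q_label
        if not relevant.any():
            continue  # common convention: skip queries with no relevant items
        # Precision at each rank, averaged over the relevant positions.
        precision_at_k = np.cumsum(relevant) / (np.arange(relevant.size) + 1)
        aps.append(precision_at_k[relevant].mean())
    return float(np.mean(aps))
```

With 48-bit codes and 100 queries per class, a mAP near 0.30 on a 10-class dataset is barely above the ~0.10 expected from random ranking weighted by class frequency, which suggests the codes are not separating the classes.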

@dweiiiiiiiiii

Hi, did you also get poor results only on CIFAR-10, while the results on the other datasets were normal?

@chenshen03
Author

@dweiiiiiiiiii Yes, my results should be about the same as yours.

@chenshen03
Author

@caozhangjie Can you explain why HashNet performs so poorly on CIFAR-10?

@caozhangjie
Collaborator

CIFAR-10 has 10 classes; I recommend replacing class_num with 10. Also, CIFAR-10 needs a different backbone network. Please search for a proper backbone.

@zhouxiaohang

CIFAR-10 has 10 classes; I recommend replacing class_num with 10. Also, CIFAR-10 needs a different backbone network. Please search for a proper backbone.

Since CIFAR-10 with 10 classes needs class_num to be 10, why is the COCO dataset, with 80 classes, compatible with class_num=1.0? I tried the following (64 bits are used):

COCO dataset with class_num=80.0, mAP=0.53
COCO dataset with class_num=1.0, mAP=0.80 (which is higher than that in paper)

How to explain this?
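For context, class_num in the HashNet code appears to act as an extra weight on similar pairs in the weighted maximum-likelihood pairwise loss, rather than literally the number of classes, which would explain why the best value tracks the dataset's similar/dissimilar pair imbalance instead of its class count. A minimal numpy sketch of that loss, with illustrative names (pos_weight standing in for the flag's apparent role; check loss.py in the repo for the real implementation):

```python
import numpy as np

def weighted_pairwise_loss(sim_ij, dot_ij, sigmoid_param=1.0, pos_weight=1.0):
    """Weighted maximum-likelihood pairwise loss, HashNet style (sketch).

    sim_ij:  1.0 if the pair shares a label, else 0.0.
    dot_ij:  inner product of the two continuous hash outputs.
    pos_weight: extra weight on similar pairs -- this is what --class_num
    appears to control (name here is hypothetical).
    """
    x = sigmoid_param * dot_ij
    # log(1 + exp(x)) - s * x, computed stably via logaddexp.
    loss = np.logaddexp(0.0, x) - sim_ij * x
    weight = np.where(sim_ij > 0, pos_weight, 1.0)
    return float(np.mean(weight * loss))
```

Under this reading, a larger pos_weight pushes the optimizer harder to pull similar pairs together, which matters when similar pairs are rare relative to dissimilar ones; the best setting is a balancing hyperparameter, not a class count.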
