Towards Human-Machine Cooperation: Self-supervised Sample Mining for Object Detection
Keze Wang, Xiaopeng Yan, Dongyu Zhang, Lei Zhang, Liang Lin
Sun Yat-sen University. Presented at CVPR 2018.
For Academic Research Use Only!
If you find SSM useful in your research, please consider citing:
```
@InProceedings{Wang_2018_CVPR,
    author = {Wang, Keze and Yan, Xiaopeng and Zhang, Dongyu and Zhang, Lei and Lin, Liang},
    title = {Towards Human-Machine Cooperation: Self-Supervised Sample Mining for Object Detection},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2018}
}
```
The code is built on top of R-FCN. Please read through py-R-FCN carefully and make sure py-R-FCN runs in your environment (a build sketch follows below).
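For orientation, a minimal build sketch assuming the usual py-R-FCN layout (the `lib/` and `caffe/` directories and the `make` targets follow the py-R-FCN convention and are assumptions for this repo; verify them against your checkout):

```bash
# Assumed build steps, mirroring py-R-FCN -- confirm these directories exist in your checkout.
cd $SSM_ROOT/lib
make                                          # build the Cython/GPU modules (NMS, ROI layers, etc.)

cd $SSM_ROOT/caffe
cp Makefile.config.example Makefile.config    # edit for your CUDA/cuDNN/Python setup
make -j8 && make pycaffe                      # build Caffe and its Python bindings
```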
- In our paper, we use Pascal VOC 2007/2012 and COCO as datasets, and ResNet-101 as the pre-trained model.
- Please download the ImageNet-pre-trained ResNet-101 model manually and put it into `$SSM_ROOT/data/imagenet_models` (see the layout sketch after this list).
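A layout sketch assuming the standard py-R-FCN data conventions (directory and file names such as `VOCdevkit2007` and `ResNet-101-model.caffemodel` are assumptions; adjust them to match your downloads):

```bash
cd $SSM_ROOT

# Datasets: symlink the VOC devkits (and COCO, if used) under data/.
ln -s /path/to/VOCdevkit data/VOCdevkit2007
ln -s /path/to/VOCdevkit data/VOCdevkit2012

# Pre-trained model: place the ImageNet ResNet-101 weights here.
mkdir -p data/imagenet_models
cp /path/to/ResNet-101-model.caffemodel data/imagenet_models/
```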
Training
Before training, prepare your dataset and pre-trained model and store them in the same paths that R-FCN expects. You can go to ./tools/ and modify train_net.py to adjust parameters. Then simply run sh ./train.sh.
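A minimal launch sketch (the tee logging is optional and an assumption; all dataset and solver parameters live in ./tools/train_net.py and ./train.sh):

```bash
cd $SSM_ROOT
# Adjust parameters in ./tools/train_net.py first (e.g., dataset, iterations), then:
sh ./train.sh 2>&1 | tee train.log
```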
Testing
Before testing, modify test.sh to point to the trained model path, then simply run sh ./test.sh to get the evaluation result.
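A minimal evaluation sketch (the snapshot path shown in the comment is a hypothetical placeholder; set the real path inside ./test.sh):

```bash
cd $SSM_ROOT
# Edit ./test.sh so it points to your trained snapshot, e.g.
#   output/<experiment>/<dataset>/your_model_iter_XXXX.caffemodel   (placeholder path)
sh ./test.sh 2>&1 | tee test.log
```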
Tested on Ubuntu 14.04 with a Titan X GPU (12 GB) and an Intel(R) Xeon(R) CPU E5-2623 v3 @ 3.00GHz.
Thanks to Xiaoxi Wang for the contribution.