PyTorch SSD Series
PyTorch 0.4.1 is now supported on the 0.4 branch.
- SSD: Single Shot MultiBox Detector
- FSSD: Feature Fusion Single Shot Multibox Detector
- RFB-SSD: Receptive Field Block Net for Accurate and Fast Object Detection
- RefineDet: Single-Shot Refinement Neural Network for Object Detection
| System | VOC2007 test mAP | FPS (Titan X Maxwell) |
|:-------|:----------------:|:---------------------:|
| Faster R-CNN (VGG16) | 73.2 | 7 |
| SSD300 (VGG) | 77.8 | 150 (1080Ti) |
| FSSD300 (VGG) | 78.8 | 120 (1080Ti) |
| System | COCO test-dev mAP | Time (Titan X Maxwell) |
|:-------|:-----------------:|:----------------------:|
| Faster R-CNN++ (ResNet-101) | 34.9 | 3.36s |
Note*: the speed here was tested with newer PyTorch and cuDNN versions (0.2.0 and cuDNN v6), which is noticeably faster than the speed reported in the paper (measured with PyTorch 0.1.12 and cuDNN v5).
| System | COCO minival mAP | #parameters |
|:-------|:----------------:|:-----------:|
*: slightly better than the original result reported in the paper (20.5).
- Install PyTorch 0.2.0 to 0.3.1 by selecting your environment on the website and running the appropriate command.
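For example, a conda-based install might look like the following (illustrative only; the website generates the exact command for your OS and CUDA version):

```Shell
conda install pytorch=0.3.1 torchvision -c pytorch
```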
- Clone this repository. This repository is mainly based on RFBNet, ssd.pytorch, and Chainer-ssd; a huge thanks to them.
- Note: We currently only support Python 3+.
- Compile the nms and coco tools:
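The compile step in RFBNet-derived repos is usually a bundled script run from the repository root (the script name here is an assumption; check this repo's root directory):

```Shell
./make.sh
```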
Note*: Check your GPU architecture support in utils/build.py, line 131.
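The default in RFBNet-style build.py files typically looks like the sketch below (the exact contents may differ in this repo; change -arch to match your GPU, e.g. sm_61 for a GTX 1080Ti):

```Python
# utils/build.py (around line 131): compiler flags for the CUDA NMS extension.
extra_compile_args = {
    'gcc': ['-Wno-unused-function'],
    'nvcc': ['-arch=sm_52',           # GPU compute capability; adjust for your card
             '--ptxas-options=-v',
             '-c',
             '--compiler-options',
             "'-fPIC'"],
}
```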
- Install pyinn for the MobileNet backbone:
pip install git+https://github.com/szagoruyko/pyinn.git@master
- Then download the dataset by following the instructions below and install opencv.
conda install opencv
To make things easy, we provide a simple VOC and COCO dataset loader that inherits torch.utils.data.Dataset, making it fully compatible with the torchvision.datasets API.
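As a quick illustration of how such a loader plugs into the standard pipeline, here is a minimal, self-contained sketch (ToyDetectionDataset and detection_collate are illustrative names, not the repo's actual classes; see the data/ folder for those):

```Python
import torch
import torch.utils.data as data

class ToyDetectionDataset(data.Dataset):
    """Illustrative stand-in for the provided VOC/COCO loaders: any class
    implementing __len__ and __getitem__ works with DataLoader."""
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        image = torch.zeros(3, 300, 300)  # CHW tensor at SSD300 input size
        # One ground-truth box per image here: [xmin, ymin, xmax, ymax, label]
        boxes = torch.tensor([[0.1, 0.1, 0.5, 0.5, 0.0]])
        return image, boxes

def detection_collate(batch):
    """Stack images into a batch; keep targets as a list, since the number
    of ground-truth boxes varies per image."""
    images = torch.stack([sample[0] for sample in batch], 0)
    targets = [sample[1] for sample in batch]
    return images, targets

loader = data.DataLoader(ToyDetectionDataset(), batch_size=4,
                         shuffle=True, collate_fn=detection_collate)
images, targets = next(iter(loader))  # images: [4, 3, 300, 300]
```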
Download VOC2007 trainval & test
```Shell
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2007.sh # <directory>
```
Download VOC2012 trainval
```Shell
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2012.sh # <directory>
```
Install the MS COCO dataset at /path/to/coco from the official website; the default is ~/data/COCO. Follow the instructions to prepare the minival2014 and valminusminival2014 annotations. All label files (.json) should be under the COCO/annotations/ folder. It should have this basic structure:
```Shell
$COCO/
$COCO/cache/
$COCO/annotations/
$COCO/images/
$COCO/images/test2015/
$COCO/images/train2014/
$COCO/images/val2014/
```
UPDATE: COCO has since released new train2017 and val2017 sets, which are just new splits of the same image sets.
First download the fc-reduced VGG-16 PyTorch base network weights at: https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth or from our BaiduYun Driver
The MobileNet pre-trained basenet is ported from MobileNet-Caffe and achieves slightly better accuracy than the original reported in the paper. The weight file is available at https://drive.google.com/open?id=13aZSApybBDjzfGIdqN1INBlPsddxCK14 or from the BaiduYun Driver.
By default, we assume you have downloaded the file into the weights/ directory:
```Shell
mkdir weights
cd weights
wget https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth
```
- To train RFBNet using the train script, simply specify the parameters listed in train_RFB.py as flags or manually change them.
```Shell
python train_test.py -d VOC -v RFB_vgg -s 300
```
- -d: choose the dataset, VOC or COCO.
- -v: choose the backbone version, RFB_vgg, RFB_E_vgg, or RFB_mobile.
- -s: the image size, 300 or 512.
- You can pick up training from a checkpoint by specifying the path as one of the training parameters (again, see train_test.py for options), as in the sketch below.
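A resume invocation might look like this (the --resume_net and --resume_epoch flag names follow RFBNet's train script and the checkpoint path is illustrative; check the argparse options in train_test.py):

```Shell
python train_test.py -d VOC -v RFB_vgg -s 300 \
    --resume_net weights/RFB_vgg_VOC_epoches_100.pth --resume_epoch 100
```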
The test frequency can be found in train_test.py.
By default, it will directly output the mAP results on VOC2007 test or COCO minival2014. For VOC2012 test and COCO test-dev results, you can manually change the datasets in the test_RFB.py file, then save the detection results and submit them to the evaluation server.
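An evaluation run might look like this (the --trained_model flag follows RFBNet's test script and the checkpoint path is illustrative; check the argparse options in test_RFB.py):

```Shell
python test_RFB.py -d VOC -v RFB_vgg -s 300 --trained_model weights/RFB_vgg_VOC.pth
```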