DSOD: Learning Deeply Supervised Object Detectors from Scratch (ICCV 2017)

This repository contains the code for the following paper

DSOD: Learning Deeply Supervised Object Detectors from Scratch (ICCV 2017).

Zhiqiang Shen*, Zhuang Liu*, Jianguo Li, Yu-Gang Jiang, Yurong Chen, Xiangyang Xue. (*Equal Contribution)

The code is based on the SSD framework.

Other Implementations: [Pytorch] by Yun Chen, [Pytorch] by uoip, [Pytorch] by qqadssp, [Pytorch] by Ellinier , [Mxnet] by Leo Cheng, [Mxnet] by eureka7mt, [Tensorflow] by Windaway.

If you find this work helps your research, please cite:

	@inproceedings{shen2017dsod,
		title = {DSOD: Learning Deeply Supervised Object Detectors from Scratch},
		author = {Shen, Zhiqiang and Liu, Zhuang and Li, Jianguo and Jiang, Yu-Gang and Chen, Yurong and Xue, Xiangyang},
		booktitle = {ICCV},
		year = {2017}
	}

DSOD focuses on the problem of training object detectors from scratch (i.e., without models pretrained on ImageNet). To the best of our knowledge, this is the first work that trains neural object detectors from scratch with state-of-the-art performance. In this work, we contribute a set of design principles for this purpose. One of the key findings is that the deeply supervised structure, enabled by dense layer-wise connections, plays a critical role in learning a good detection model. Please see our paper for more details.

Figure 1: DSOD prediction layers with plain and dense structures (for 300×300 input).
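The dense layer-wise connections above follow the DenseNet-style pattern: each layer receives the concatenation of all preceding feature maps, so channel counts grow linearly with a fixed growth rate. The following is a toy NumPy sketch of that connectivity pattern (illustrative only, not the authors' Caffe implementation; the input channels, growth rate, and 1×1 "conv" are assumptions for the example):

```python
import numpy as np

def dense_block_channels(c0, growth_rate, num_layers):
    """Channel count seen after each layer in a dense block."""
    channels = [c0]
    for _ in range(num_layers):
        channels.append(channels[-1] + growth_rate)
    return channels

def dense_block(x, growth_rate, num_layers, rng):
    """Toy dense block on NCHW arrays: each 'layer' is a random 1x1 conv
    (a channel-mixing matrix) that produces growth_rate new channels,
    which are concatenated onto the running feature map."""
    features = x
    for _ in range(num_layers):
        n, c, h, w = features.shape
        w_mix = rng.standard_normal((growth_rate, c)) / np.sqrt(c)
        new = np.einsum('oc,nchw->nohw', w_mix, features)  # 1x1 conv
        features = np.concatenate([features, np.maximum(new, 0)], axis=1)
    return features

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 64, 8, 8))          # hypothetical input
out = dense_block(x, growth_rate=48, num_layers=6, rng=rng)
print(out.shape)                                 # channels grow to 64 + 6*48
print(dense_block_channels(64, 48, 6))
```

Note how every layer's output stays accessible to all later layers; this is what lets the implicit deep supervision reach early layers when training from scratch.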


  Visualizations of the network structures (tools from ethereon; ignore the warning messages).

Results & Models

The tables below show the results on PASCAL VOC 2007, 2012 and MS COCO.

PASCAL VOC test results:

| Method | VOC 2007 test mAP | fps (Titan X) | # parameters | Models |
| --- | --- | --- | --- | --- |
| DSOD300_smallest (07+12) | 73.6 | - | 5.9M | Download (23.5M) |
| DSOD300_lite (07+12) | 76.7 | 25.8 | 10.4M | Download (41.8M) |
| DSOD300 (07+12) | 77.7 | 17.4 | 14.8M | Download (59.2M) |
| DSOD300 (07+12+COCO) | 81.7 | 17.4 | 14.8M | Download (59.2M) |

| Method | VOC 2012 test mAP | fps | # parameters | Models |
| --- | --- | --- | --- | --- |
| DSOD300 (07++12) | 76.3 | 17.4 | 14.8M | Download (59.2M) |
| DSOD300 (07++12+COCO) | 79.3 | 17.4 | 14.8M | Download (59.2M) |

COCO test-dev 2015 results (COCO has more object categories than VOC, so the model size is slightly bigger):

| Method | COCO test-dev 2015 mAP (IoU 0.5:0.95) | Models |
| --- | --- | --- |
| DSOD300 (COCO trainval) | 29.3 | Download (87.2M) |
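As a quick sanity check on the model sizes listed above (my own back-of-the-envelope arithmetic, not from the paper): a checkpoint stored as float32 takes about 4 bytes per parameter, which lines up well with the download sizes, e.g. 14.8M parameters × 4 bytes ≈ 59.2 MB:

```python
# Assumption: weights stored as float32 (4 bytes per parameter);
# small deviations come from file-format overhead and rounding.
def fp32_size_mb(params_millions):
    """Approximate checkpoint size in MB for a float32 model."""
    return params_millions * 4.0

for name, params in [("DSOD300_smallest", 5.9),
                     ("DSOD300_lite", 10.4),
                     ("DSOD300", 14.8)]:
    print(f"{name}: ~{fp32_size_mb(params):.1f} MB")
```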


Preparation

  1. Install SSD (https://github.com/weiliu89/caffe/tree/ssd) following the instructions there, including: (1) install SSD Caffe; (2) download the PASCAL VOC 2007 and 2012 datasets; and (3) create the LMDB files. Make sure you can run it without any errors.

    Our PASCAL VOC LMDB files:

    | Method | LMDBs |
    | --- | --- |
    | Train on VOC07+12 and test on VOC07 | Download |
    | Train on VOC07++12 and test on VOC12 (Comp4) | Download |
    | Train on VOC12 and test on VOC12 (Comp3) | Download |
  2. Create a subfolder dsod under examples/, and add the files DSOD300_pascal.py, DSOD300_pascal++.py, DSOD300_coco.py, score_DSOD300_pascal.py and DSOD300_detection_demo.py to the folder examples/dsod/.

  3. Create a subfolder grp_dsod under examples/, and add the files GRP_DSOD320_pascal.py and score_GRP_DSOD320_pascal.py to the folder examples/grp_dsod/.

  4. Replace the file model_libs.py in the folder python/caffe/ with ours.

Training & Testing

  • Train a DSOD model on VOC 07+12:

    python examples/dsod/DSOD300_pascal.py
  • Train a DSOD model on VOC 07++12:

    python examples/dsod/DSOD300_pascal++.py
  • Train a DSOD model on COCO trainval:

    python examples/dsod/DSOD300_coco.py
  • Evaluate the model (DSOD):

    python examples/dsod/score_DSOD300_pascal.py
  • Run a demo (DSOD):

    python examples/dsod/DSOD300_detection_demo.py
  • Train a GRP_DSOD model on VOC 07+12:

    python examples/grp_dsod/GRP_DSOD320_pascal.py
  • Evaluate the model (GRP_DSOD):

    python examples/grp_dsod/score_GRP_DSOD320_pascal.py

Note: You can modify the file model_libs.py to design your own network structure as you like.



Contact

Zhiqiang Shen (zhiqiangshen0214 at gmail.com)

Zhuang Liu (liuzhuangthu at gmail.com)

Any comments or suggestions are welcome!