Distance-IoU Loss into SSD

Distance-IoU Loss applied to other SOTA detection methods can be found here.

[arxiv] [pdf]

SSD_FPN_DIoU, CIoU in PyTorch

The code is based on SSD: Single Shot MultiBox Object Detector, in PyTorch, mmdet and JavierHuang. Currently, experiments are carried out on the VOC dataset; if you want to train on your own dataset, refer to the links above for more details.

If you use this work, please consider citing:

@inproceedings{zheng2020distance,
  author    = {Zhaohui Zheng and Ping Wang and Wei Liu and Jinze Li and Rongguang Ye and Dongwei Ren},
  title     = {Distance-IoU Loss: Faster and Better Learning for Bounding Box Regression},
  booktitle = {The AAAI Conference on Artificial Intelligence (AAAI)},
  year      = {2020},
}

Losses

Losses can be chosen with the losstype option in the config/config.py file. The valid options are currently: [Iou|Giou|Diou|Ciou|SmoothL1].

VOC:
  'losstype': 'Ciou'
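
For reference, the Diou option follows the published definition DIoU = IoU - rho^2(b, b_gt) / c^2, where rho is the distance between the centers of the predicted and target boxes and c is the diagonal length of the smallest box enclosing both. Below is a minimal, illustrative sketch of that computation for boxes in (x1, y1, x2, y2) format; the repository's own loss code in utils/loss is authoritative, and the name diou_loss here is only for illustration.

import torch

def diou_loss(pred, target, eps=1e-7):
    # Illustrative DIoU loss for (x1, y1, x2, y2) boxes (not the repository's code).
    # Intersection
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]

    # Union and IoU
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter + eps
    iou = inter / union

    # Squared distance between box centers (rho^2)
    center_p = (pred[:, :2] + pred[:, 2:]) / 2
    center_t = (target[:, :2] + target[:, 2:]) / 2
    rho2 = ((center_p - center_t) ** 2).sum(dim=1)

    # Squared diagonal of the smallest enclosing box (c^2)
    enclose_lt = torch.min(pred[:, :2], target[:, :2])
    enclose_rb = torch.max(pred[:, 2:], target[:, 2:])
    c2 = ((enclose_rb - enclose_lt) ** 2).sum(dim=1) + eps

    diou = iou - rho2 / c2
    return (1 - diou).mean()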

DIoU-NMS

NMS can be chosen with the nms_kind option in the config/config.py file. If it is set to greedynms, standard greedy-NMS is used. Besides that, similar to DIoU-NMS in Faster R-CNN, we also introduce beta1 for DIoU-NMS in SSD, i.e. DIoU = IoU - R_DIoU^{beta1}. With this parameter, DIoU-NMS may perform better than with the default beta1=1.0, but for SSD beta1=1.0 seems to be good enough.

  'nms_kind': "diounms"
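
As a rough illustration of the criterion above, the sketch below suppresses a box when IoU minus the center-distance penalty (raised to beta1) with an already-kept box exceeds the threshold. It is not the repository's implementation; the function name diou_nms and its default arguments are assumptions for illustration only.

import torch

def diou_nms(boxes, scores, iou_thresh=0.45, beta1=1.0):
    # Illustrative greedy DIoU-NMS for (x1, y1, x2, y2) boxes:
    # a box is removed when IoU - (rho^2 / c^2) ** beta1 > iou_thresh.
    _, order = scores.sort(descending=True)
    keep = []
    while order.numel() > 0:
        i = order[0]
        keep.append(i.item())
        if order.numel() == 1:
            break
        rest = order[1:]

        # IoU between the current top-scoring box and the remaining boxes
        lt = torch.max(boxes[i, :2], boxes[rest, :2])
        rb = torch.min(boxes[i, 2:], boxes[rest, 2:])
        wh = (rb - lt).clamp(min=0)
        inter = wh[:, 0] * wh[:, 1]
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-7)

        # Center-distance penalty R_DIoU = rho^2 / c^2
        ci = (boxes[i, :2] + boxes[i, 2:]) / 2
        cr = (boxes[rest, :2] + boxes[rest, 2:]) / 2
        rho2 = ((ci - cr) ** 2).sum(dim=1)
        enc_lt = torch.min(boxes[i, :2], boxes[rest, :2])
        enc_rb = torch.max(boxes[i, 2:], boxes[rest, 2:])
        c2 = ((enc_rb - enc_lt) ** 2).sum(dim=1) + 1e-7

        diou = iou - (rho2 / c2) ** beta1
        order = rest[diou <= iou_thresh]
    return torch.tensor(keep, dtype=torch.long)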

Folder Structure

The folder structure is as follows:

  • config/
    • config.py
    • __init__.py
  • data/
    • __init__.py
    • VOC.py
    • VOCdevkit/
  • model/
    • build_ssd.py
    • __init__.py
    • backbone/
    • neck/
    • head/
    • utils/
  • utils/
    • box/
    • detection/
    • loss/
    • __init__.py
  • tools/
    • train.py
    • eval.py
    • test.py
  • work_dir/

Environment

  • pytorch 0.4.1
  • python3+
  • visdom
    • for real-time loss visualization during training (a minimal plotting sketch follows this list)
     pip install visdom
    • Start the server (probably in a screen or tmux)
     python -m visdom.server
    • Then (during training) navigate to http://localhost:8097/ (see the Train section below for training details).
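
As a rough idea of how loss curves can be sent to visdom (the training script's own plotting code is authoritative; the window title and the values below are placeholders):

import torch
import visdom

viz = visdom.Visdom()  # connects to the server at http://localhost:8097/

# Create a line plot once, then append one point per iteration.
loss_window = viz.line(
    X=torch.zeros(1),
    Y=torch.zeros(1),
    opts=dict(title='Training loss', xlabel='iteration', ylabel='loss'),
)

for iteration in range(1, 11):
    loss = 1.0 / iteration  # placeholder value; use the real loss during training
    viz.line(
        X=torch.tensor([iteration]),
        Y=torch.tensor([loss]),
        win=loss_window,
        update='append',
    )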

Datasets

  • PASCAL VOC: Download the VOC2007 and VOC2012 datasets, then put VOCdevkit in the data directory.

Training

Training VOC

python tools/train.py
  • Note:
    • For training, an NVIDIA GPU is used by default.
    • You can set the parameters in train.py (see tools/train.py for options).
    • In the config, you can set work_dir to choose where your training weights are saved (see config/config.py); a sketch of the config entries referenced in this README follows this list.
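
As a rough sketch of the kind of entries referenced in this README (the authoritative keys and defaults live in config/config.py; the values below are placeholders, not the repository defaults):

# Illustrative only: see config/config.py for the real configuration.
voc_config_sketch = {
    'losstype': 'Ciou',        # one of Iou | Giou | Diou | Ciou | SmoothL1
    'nms_kind': 'diounms',     # or 'greedynms' for standard greedy NMS
    'work_dir': './work_dir',  # directory where training weights are saved
}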

Evaluation

  • To evaluate a trained network:
python tools/ap.py --trained_model {your_weight_address}

For example (the last line of the output gives AP50, AP75 and AP of our CIoU loss):

Results:
0.033
0.015
0.009
0.011
0.008
0.083
0.044
0.042
0.004
0.014
0.026
0.034
0.010
0.006
0.009
0.006
0.009
0.013
0.106
0.011
0.025
~~~~~~~~

--------------------------------------------------------------
Results computed with the **unofficial** Python eval code.
Results should be very close to the official MATLAB eval code.
--------------------------------------------------------------
0.7884902583981603 0.5615516772893671 0.5143832356646468

Test

  • To test a trained network:
python test.py --trained_model {your_weight_address}

If you want to visualize the predicted boxes, add the option --visbox True (default: False).

Performance

VOC2007 Test mAP

  • Backbone is ResNet50-FPN:
Loss   AP      AP75
IoU    51.01   54.74
GIoU   51.06   55.48
DIoU   51.31   55.71
CIoU   51.44   56.16

Pretrained weights

Here are the trained models using the configurations in this repository.
