Learning Dynamic Routing for Semantic Segmentation


This project provides a PyTorch implementation for "Learning Dynamic Routing for Semantic Segmentation" (CVPR 2020 Oral). Because the experiments in the paper were conducted with an internal framework, this project reimplements them on dl_lib and reports detailed comparisons below. Some parts of the code in dl_lib are based on detectron2.
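At a high level, each node in the routing space predicts soft gates over its candidate scale paths (down-sample, keep resolution, up-sample) and prunes paths whose gate activation is low, so the executed network adapts to each input. The following is a minimal conceptual sketch in plain Python; the softmax gate form and the 0.1 threshold are illustrative assumptions, not the repo's implementation:

```python
import math

def soft_gates(logits, threshold=0.1):
    """Toy soft-conditional gate: softmax over candidate-path logits,
    then zero out paths whose probability falls below the threshold
    (such paths can be skipped at inference). Illustrative only; the
    gate in the paper is a small learned network, and the threshold
    value here is made up."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return [p if p >= threshold else 0.0 for p in probs]

# Three candidate paths per node: down-sample, keep resolution, up-sample.
gates = soft_gates([2.0, 0.5, -3.0])
active = [i for i, g in enumerate(gates) if g > 0]  # the third path is pruned
```

In this formulation, paths with zeroed gates cost no computation at inference, which is how the average GFLOPs can vary per input.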

Introduction figure: overview of the dynamic routing network (see the intro/ directory).

Requirements

  • Python >= 3.6
    • python3 --version
  • PyTorch >= 1.3
    • pip3 install torch torchvision
  • OpenCV
    • pip3 install opencv-python
  • GCC >= 4.9
    • gcc --version
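A quick way to confirm the Python-side requirements above is to check which modules import cleanly. A small sketch (the module names mirror the list above; extend it as needed):

```python
import importlib

def missing_modules(names=("torch", "cv2")):
    """Return the subset of the given modules that fail to import.
    "torch" and "cv2" correspond to the PyTorch and OpenCV
    requirements listed above."""
    missing = []
    for name in names:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.append(name)
    return missing
```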


Installation

Make sure that you have at least one GPU available when compiling. Run:

  • git clone
  • cd DynamicRouting
  • sudo python3 setup.py build develop



Dataset

We use the Cityscapes dataset for training and validation. Please refer to datasets/ or the dataset structure in detectron2 for more details.
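As a rough sanity check before training, a hypothetical helper like the one below can report missing sub-directories. The layout assumed here follows the detectron2 Cityscapes convention (leftImg8bit/ and gtFine/ with train/val splits); consult datasets/ in this repo for the authoritative structure:

```python
import os

def missing_cityscapes_dirs(root="datasets/cityscapes"):
    """List expected Cityscapes sub-directories that are absent under root.
    The layout is an assumption based on the detectron2 convention."""
    expected = [
        os.path.join(sub, split)
        for sub in ("leftImg8bit", "gtFine")
        for split in ("train", "val")
    ]
    return [d for d in expected if not os.path.isdir(os.path.join(root, d))]
```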

Pretrained Model

We provide ImageNet-pretrained models:


Training

For example, if you want to train a Dynamic Network with the Layer16 backbone:

  • Train from scratch
    cd playground/Dynamic/Seg.Layer16
    dl_train --num-gpus 4
  • Use ImageNet pretrain
    cd playground/Dynamic/Seg.Layer16.ImageNet
    dl_train --num-gpus 4 MODEL.WEIGHTS /path/to/your/save_dir/ckpt.pth

NOTE: Please set FIX_SIZE_FOR_FLOPS to [768,768] and [1024,2048] for training and evaluation, respectively.
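A hypothetical dl_lib-style config fragment for the note above (the key name is taken from the note itself; verify its exact location in the playground config files):

```python
# Hypothetical config fragment; check the playground config for the
# exact key path before relying on it.
FIX_SIZE_FOR_FLOPS = [768, 768]      # during training (crop size)
# FIX_SIZE_FOR_FLOPS = [1024, 2048]  # during evaluation (full Cityscapes frame)
```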


Evaluation

You can evaluate the trained or downloaded model:

  • Evaluate the trained model
    dl_test --num-gpus 8
  • Evaluate the downloaded model:
    dl_test --num-gpus 8 MODEL.WEIGHTS /path/to/your/save_dir/ckpt.pth 

NOTE: If your machine does not support this setting, please change the corresponding settings in the config to a suitable value.


Cityscapes val set

Without ImageNet Pretrain:

| Methods | Backbone | Iter/K | mIoU (paper) | GFLOPs (paper) | mIoU (ours) | GFLOPs (ours) | Model |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Dynamic-A | Layer16 | 186 | 72.8 | 44.9 | 73.9 | 52.5 | GoogleDrive |
| Dynamic-B | Layer16 | 186 | 73.8 | 58.7 | 74.3 | 58.9 | GoogleDrive |
| Dynamic-C | Layer16 | 186 | 74.6 | 66.6 | 74.8 | 59.8 | GoogleDrive |
| Dynamic-Raw | Layer16 | 186 | 76.1 | 119.5 | 76.7 | 114.9 | GoogleDrive |
| Dynamic-Raw | Layer16 | 558 | 78.3 | 113.3 | 78.1 | 114.2 | GoogleDrive |

With ImageNet Pretrain:

| Methods | Backbone | Iter/K | mIoU (paper) | GFLOPs (paper) | mIoU (ours) | GFLOPs (ours) | Model |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Dynamic-Raw | Layer16 | 186 | 78.6 | 119.4 | 78.8 | 117.8 | GoogleDrive |
| Dynamic-Raw | Layer33 | 186 | 79.2 | 242.3 | 79.4 | 243.1 | GoogleDrive |

To do

  • Faster inference speed
  • Support more vision tasks
    • Object detection
    • Instance segmentation
    • Panoptic segmentation



Please consider citing Dynamic Routing in your publications if it helps your research.

    @inproceedings{li2020learning,
        title = {Learning Dynamic Routing for Semantic Segmentation},
        author = {Yanwei Li and Lin Song and Yukang Chen and Zeming Li and Xiangyu Zhang and Xingang Wang and Jian Sun},
        booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
        year = {2020}
    }
Please consider citing this project in your publications if it helps your research.

    @misc{li2020dynamicrouting,
        author = {Yanwei Li},
        title = {DynamicRouting},
        howpublished = {\url{}},
        year = {2020}
    }