CGNet: A Light-weight Context Guided Network for Semantic Segmentation



The demand for applying semantic segmentation models on mobile devices has been increasing rapidly. Current state-of-the-art networks have an enormous number of parameters and are hence unsuitable for mobile devices, while other small-memory-footprint models ignore the inherent characteristics of semantic segmentation. To tackle this problem, we propose a novel Context Guided Network (CGNet), a light-weight network for semantic segmentation on mobile devices. We first propose the Context Guided (CG) block, which learns the joint feature of both the local feature and the surrounding context, and further refines the joint feature with the global context. Based on the CG block, we develop CGNet, which captures contextual information in all stages of the network and is specially tailored for increasing segmentation accuracy. CGNet is also elaborately designed to reduce the number of parameters and save memory. Under an equivalent number of parameters, the proposed CGNet significantly outperforms existing segmentation networks. Extensive experiments on the Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post-processing, CGNet achieves 64.8% mean IoU on Cityscapes with fewer than 0.5 M parameters, and runs at 50 fps on a single NVIDIA Tesla K80 card for 2048 × 1024 high-resolution images.
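The CG block described above can be sketched in PyTorch roughly as follows. This is an illustrative reconstruction from the abstract, not the repository's actual module: the layer names, the channel reduction, the use of depthwise convolutions, and the residual connection are all assumptions.

```python
import torch
import torch.nn as nn

class ContextGuidedBlock(nn.Module):
    """Illustrative sketch of a CG block (names and details are assumptions).

    f_loc: 3x3 conv extracting the local feature
    f_sur: 3x3 dilated conv capturing the surrounding context
    f_joi: BN + PReLU over the concatenated local + surrounding features
    f_glo: global average pooling + 1x1 convs producing channel-wise weights
    """
    def __init__(self, channels, dilation=2, reduction=16):
        super().__init__()
        half = channels // 2
        self.reduce = nn.Conv2d(channels, half, kernel_size=1, bias=False)
        self.f_loc = nn.Conv2d(half, half, 3, padding=1, groups=half, bias=False)
        self.f_sur = nn.Conv2d(half, half, 3, padding=dilation,
                               dilation=dilation, groups=half, bias=False)
        self.f_joi = nn.Sequential(nn.BatchNorm2d(channels), nn.PReLU(channels))
        self.f_glo = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.reduce(x)
        # joint feature of local feature and surrounding context
        joint = torch.cat([self.f_loc(y), self.f_sur(y)], dim=1)
        joint = self.f_joi(joint)
        # refine the joint feature with the global context (channel-wise reweighting)
        return x + joint * self.f_glo(joint)
```

The channel-wise reweighting by `f_glo` is what lets the global context modulate the joint feature, while the residual connection keeps the block cheap to stack.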

Results on Cityscapes test set

We train the proposed CGNet using only the fine-annotated data and submit our test results to the official evaluation server.

Results on Camvid test set

We use the training set and validation set to train our model, at a resolution of 480×360 for both training and evaluation. The number of parameters of CGNet is close to that of ENet, currently the smallest semantic segmentation model, while CGNet's accuracy is 14.3% higher.
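The accuracy reported above is mean intersection-over-union (mIoU), the standard Cityscapes/CamVid metric: per-class IoU from a confusion matrix, averaged over classes. A minimal NumPy sketch (the `ignore_index` handling is an assumption matching common Cityscapes practice, not code from this repository):

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_index=255):
    """Mean IoU over classes, computed from a flattened confusion matrix."""
    mask = gt != ignore_index
    # hist[i, j] counts pixels with ground truth i predicted as j
    hist = np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    inter = np.diag(hist)
    union = hist.sum(axis=0) + hist.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)
    # average only over classes that actually appear
    return iou[union > 0].mean()
```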



Installation

  1. Install PyTorch
  • The code was developed with Python 3.6.5 on Ubuntu 16.04. (GPU: Tesla K80; PyTorch: 0.5.0a0+9b0cece; CUDA: 8.0)
  2. Clone the repository
    git clone 
    cd CGNet
  3. Dataset
  • Download the Cityscapes dataset. It should have this basic structure.
├── cityscapes_test_list.txt
├── cityscapes_train_list.txt
├── cityscapes_trainval_list.txt
├── cityscapes_val_list.txt
├── cityscapes_val.txt
├── gtCoarse
│   ├── train
│   ├── train_extra
│   └── val
├── gtFine
│   ├── test
│   ├── train
│   └── val
├── leftImg8bit
│   ├── test
│   ├── train
│   └── val
├── license.txt
  • Download the Camvid dataset. It should have this basic structure.
├── camvid_test_list.txt
├── camvid_train_list.txt
├── camvid_trainval_list.txt
├── camvid_val_list.txt
├── test
├── testannot
├── train
├── trainannot
├── val
└── valannot
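Before training, it can help to verify that the unpacked datasets match the trees above. A small hypothetical helper (not part of the repository; the directory lists are taken directly from the trees shown):

```python
from pathlib import Path

def missing_dirs(root, required):
    """Return the subdirectories from `required` that are absent under `root`."""
    root = Path(root)
    return [d for d in required if not (root / d).is_dir()]

# Layouts copied from the directory trees above.
CITYSCAPES_DIRS = ["leftImg8bit/train", "leftImg8bit/val", "leftImg8bit/test",
                   "gtFine/train", "gtFine/val"]
CAMVID_DIRS = ["train", "trainannot", "val", "valannot", "test", "testannot"]
```

For example, `missing_dirs("./dataset/camvid", CAMVID_DIRS)` should return an empty list once the CamVid data is in place.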

Train your own model

For Cityscapes

  1. Training on the train set
python --gpus 0,1 --dataset cityscapes --train_type ontrain --train_data_list ./dataset/list/Cityscapes/cityscapes_train_list.txt --max_epochs 300
  2. Training on the train+val set
python --gpus 0,1 --dataset cityscapes --train_type ontrainval --train_data_list ./dataset/list/Cityscapes/cityscapes_trainval_list.txt --max_epochs 350
  3. Evaluation (on the validation set)
python --gpus 0 --val_data_list ./dataset/list/Cityscapes/cityscapes_val_list.txt --resume ./checkpoint/cityscapes/CGNet_M3N21bs16gpu2_ontrain/model_cityscapes_train_on_trainset.pth
  4. Testing (on the test set)
python --gpus 0 --test_data_list ./dataset/list/Cityscapes/cityscapes_test_list.txt --resume ./checkpoint/cityscapes/CGNet_M3N21bs16gpu2_ontrainval/model_cityscapes_train_on_trainvalset.pth

For Camvid

  1. Training on the train+val set
  2. Testing (on the test set)


If CGNet is useful for your research, please consider citing:

    @article{wu2018cgnet,
      title={CGNet: A Light-weight Context Guided Network for Semantic Segmentation},
      author={Wu, Tianyi and Tang, Sheng and Zhang, Rui and Zhang, Yongdong},
      journal={arXiv preprint arXiv:1811.08201},
      year={2018}
    }


This code is released under the MIT License. See LICENSE for additional details.