DUpsampling

This repo is an unofficial PyTorch implementation of the CVPR 2019 paper Decoders Matter for Semantic Segmentation: Data-Dependent Decoding Enables Flexible Feature Aggregation: https://arxiv.org/abs/1903.02120
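
The paper's core idea is to replace bilinear upsampling in the decoder with a learned, data-dependent linear upsampling (DUpsampling): each C-dimensional low-resolution feature vector is projected by a learned matrix W into an N·r² vector, which is then rearranged into an r×r spatial patch with N channels (similar to sub-pixel shuffling). A minimal numpy sketch of that projection-and-rearrange step, assuming a (H, W, C) feature layout and a pixel-shuffle-style ordering of the projected vector (the exact layout in this repo may differ):

```python
import numpy as np

def dupsample(features, W, scale):
    """Data-dependent upsampling: linear projection + spatial rearrangement.

    features: (H, W_dim, C) low-resolution feature map
    W:        (C, N * scale * scale) learned projection matrix
    scale:    upsampling ratio r
    Returns an (H * r, W_dim * r, N) upsampled map.
    """
    H, Wd, C = features.shape
    N = W.shape[1] // (scale * scale)
    # Project every C-dim feature vector to an N*r*r vector.
    proj = features.reshape(-1, C) @ W              # (H*Wd, N*r*r)
    # Rearrange each projected vector into an r x r patch of N channels.
    proj = proj.reshape(H, Wd, scale, scale, N)
    out = proj.transpose(0, 2, 1, 3, 4).reshape(H * scale, Wd * scale, N)
    return out

# Toy example: 4x4x8 features, 21 classes (VOC), 2x upsampling.
feat = np.random.randn(4, 4, 8)
W = np.random.randn(8, 21 * 2 * 2)
out = dupsample(feat, W, 2)
print(out.shape)  # (8, 8, 21)
```

Because the whole operation is linear in the features, W can be pre-computed by minimizing the reconstruction error of the ground-truth label maps, which is what makes the decoding data-dependent.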

Most recent updates:

2019.03.14 - Add Synchronous BN operation and gradient accumulation to save GPU memory.

2019.03.13 - Add weight pre-computation process.

2019.03.12 - Add softmax with temperature.
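
The softmax-with-temperature update follows the paper's trick of dividing the logits by a temperature T before the softmax, so the network can sharpen its outputs during training. A minimal numpy sketch (the function name is mine, not the repo's):

```python
import numpy as np

def softmax_with_temperature(logits, T=1.0, axis=-1):
    """Softmax of logits / T.

    T > 1 softens the distribution, T < 1 sharpens it;
    T = 1 recovers the ordinary softmax.
    """
    z = logits / T
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

logits = np.array([2.0, 1.0, 0.1])
print(softmax_with_temperature(logits, T=1.0))   # peaked distribution
print(softmax_with_temperature(logits, T=10.0))  # near-uniform distribution
```

In the paper the temperature is a learnable scalar optimized jointly with the network.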

Installation

  • pytorch==0.4.1
  • python==3.5
  • numpy
  • torchvision
  • matplotlib
  • opencv-python
  • dominate

(random, collections, and shutil are part of the Python standard library and need no installation.)

Dataset and pretrained model

Please download the VOC12_aug dataset and unzip it into the data folder.

Please download the ImageNet-pretrained resnet50-imagenet.pth and put it into the checkpoints folder.

Please modify your configuration in options/base_options.py.

Usage

If you want to train the model with the normal batch norm operation:

python train.py \
--name dunet \
--gpu_ids 0,1 \
--model DUNet \
--pretrained_model ./checkpoints/resnet50-imagenet.pth \
--batchSize 16 \
--dataroot ./data/voc_12aug \
--train_list_path ./data/train_aug.txt \
--val_list_path ./data/val.txt \
--accum_steps 1 \
--nepochs 100 \
--tf_log --verbose
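
The --accum_steps flag presumably controls the gradient accumulation added in the 2019.03.14 update: gradients from several mini-batches are summed before a single optimizer step, emulating a larger batch size with less GPU memory. A toy numpy sketch of the pattern, assuming the loss is divided by accum_steps so the accumulated step matches one step on the combined batch (the repo's exact scaling may differ):

```python
import numpy as np

def train_with_accumulation(w, batches, lr=0.1, accum_steps=4):
    """Minimize mean squared error of scalar w against batch targets,
    applying one parameter update per `accum_steps` mini-batches."""
    grad = np.zeros_like(w)
    for i, batch in enumerate(batches, start=1):
        # Gradient of mean((w - batch)^2), scaled as loss / accum_steps.
        grad += 2.0 * (w - batch).mean() / accum_steps
        if i % accum_steps == 0:
            w = w - lr * grad            # optimizer step
            grad = np.zeros_like(w)      # zero the accumulated gradients
    return w

w0 = np.array([0.0])
batches = [np.array([1.0, 3.0]), np.array([2.0, 2.0]),
           np.array([4.0, 0.0]), np.array([3.0, 1.0])]
w = train_with_accumulation(w0, batches, lr=0.5, accum_steps=4)
print(w)  # [2.]
```

With this scaling, one accumulated update over four mini-batches equals a single update on their concatenation, which is why accum_steps=1 in the command above reproduces ordinary training.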

If you want to use the Synchronous BN operation, its CUDA implementation must first be compiled with the following commands:

cd libs
sh build.sh
python build.py

The build.sh script assumes that the nvcc compiler is available in the current system search path. The CUDA kernels are compiled for sm_50, sm_52 and sm_61 by default. To change this (e.g. if you are using a Kepler GPU), please edit the CUDA_GENCODE variable in build.sh.
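
For example, to target a Kepler GPU (compute capability 3.5), the variable might be set to something like the following; this is a hypothetical value, and the exact format expected by build.sh should be checked against the script itself:

```shell
# Hypothetical CUDA_GENCODE value in libs/build.sh for a Kepler (sm_35) GPU
CUDA_GENCODE="-gencode=arch=compute_35,code=sm_35"
```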

Then run:

python train.py \
--name dunet_sybn \
--gpu_ids 0,1 \
--model DUNet_sybn \
--pretrained_model ./checkpoints/resnet50-imagenet.pth \
--batchSize 16 \
--dataroot ./data/voc_12aug \
--train_list_path ./data/train_aug.txt \
--val_list_path ./data/val.txt \
--accum_steps 1 \
--nepochs 100 \
--tf_log --verbose

Segmentation results on val set

To do

  • Add softmax function with temperature.

  • Modify the network and improve the accuracy.

  • Add Synchronous BN.

  • Debug and report the performance.

  • Improve code style and show more details.

under construction...

If you have any questions, feel free to contact me or submit an issue.

Thanks to the Third Party Libs

  • inplace_abn
  • Pytorch-Deeplab
  • PyTorch-Encoding
  • pix2pix
  • Pytorch-segmentation-toolbox
