# Cross-Iteration Batch Normalization

By Zhuliang Yao, Yue Cao, Shuxin Zheng, Gao Huang, Stephen Lin.

This repo is an official implementation of "Cross-Iteration Batch Normalization" for COCO object detection, based on open-mmlab's mmdetection. It contains a PyTorch implementation of the CBN layer, as well as training scripts to reproduce the COCO object detection and instance segmentation results reported in our paper.

## Introduction

CBN was initially described in an arXiv preprint. A well-known issue of Batch Normalization is its significantly reduced effectiveness with small mini-batch sizes. Here we present Cross-Iteration Batch Normalization (CBN), in which examples from multiple recent iterations are jointly utilized to enhance estimation quality. A challenge is that the network activations from different iterations are not comparable to each other due to changes in network weights. We therefore compensate for the network weight changes via a proposed technique based on Taylor polynomials, so that the statistics can be accurately estimated. On object detection and image classification with small mini-batch sizes, CBN is found to outperform both the original batch normalization and a direct calculation of statistics over previous iterations without the proposed compensation technique.
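The core idea of the compensation can be illustrated with a toy example. Below is a minimal NumPy sketch (not the repository's CBN layer; the linear-activation setup and all variable names are assumptions for illustration only): a mean computed under old weights is adjusted with a first-order Taylor term so it approximates the mean under the current weights, and is then averaged with the current mini-batch statistic.

```python
import numpy as np

# Toy setup (illustrative assumption): activations are x = w * inputs, so
# mean(x) = w * mean(inputs) and d mean(x) / d w = mean(inputs).
rng = np.random.default_rng(0)
inputs = rng.normal(size=1000)

w_old, w_new = 1.0, 1.2                 # weights at iterations t-1 and t

mean_old = w_old * inputs.mean()        # statistic saved at iteration t-1
grad_mean = inputs.mean()               # d mean / d w, saved alongside it

# First-order Taylor compensation:
#   mu_{t-1}(theta_t) ~= mu_{t-1}(theta_{t-1}) + (d mu / d theta) * (theta_t - theta_{t-1})
mean_compensated = mean_old + grad_mean * (w_new - w_old)

# Reference: the statistic recomputed directly under the current weights.
mean_true = w_new * inputs.mean()

# CBN-style aggregation: average compensated past statistics with the
# current mini-batch statistic (here, a buffer of size 2 for simplicity).
current_mean = w_new * inputs.mean()
cbn_mean = 0.5 * (mean_compensated + current_mean)
```

Because the toy statistic is linear in the weight, the first-order compensation is exact here; in a real network the approximation error grows with the weight change, which is why CBN uses only a small window of recent iterations (e.g. the `buffer3` configs below).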

## Citing CBN

```
@article{zhu2020CBN,
  title={Cross-Iteration Batch Normalization},
  author={Yao, Zhuliang and Cao, Yue and Zheng, Shuxin and Huang, Gao and Lin, Stephen},
  journal={arXiv preprint arXiv:2002.05712},
  year={2020}
}
```

## Main Results

| Backbone | Method | Norm | AP^b | AP^b_0.50 | AP^b_0.75 | AP^m | AP^m_0.50 | AP^m_0.75 | Download |
|---|---|---|---|---|---|---|---|---|---|
| R-50-FPN | Faster R-CNN | - | 36.8 | 57.9 | 39.8 | - | - | - | model |
| R-50-FPN | Faster R-CNN | SyncBN | 37.5 | 58.4 | 40.6 | - | - | - | model |
| R-50-FPN | Faster R-CNN | GN | 37.7 | 59.2 | 41.2 | - | - | - | model |
| R-50-FPN | Faster R-CNN | CBN | 37.6 | 58.5 | 40.9 | - | - | - | model |
| R-50-FPN | Mask R-CNN | - | 37.6 | 58.5 | 41.0 | 34.0 | 55.2 | 36.2 | model |
| R-50-FPN | Mask R-CNN | SyncBN | 38.5 | 58.9 | 42.0 | 34.3 | 55.7 | 36.7 | model |
| R-50-FPN | Mask R-CNN | GN | 38.5 | 59.4 | 41.8 | 35.0 | 56.4 | 37.3 | model |
| R-50-FPN | Mask R-CNN | CBN | 38.4 | 58.9 | 42.2 | 34.7 | 55.9 | 37.0 | model |

\*All results are trained with the 1x schedule. Normalization layers in the backbone are frozen by default.

## Installation

Please refer to INSTALL.md for installation and dataset preparation.

## Demo

### Test

Download a pretrained model from the table above, then run:

```shell
# Faster R-CNN
python tools/test.py {configs_file} {downloaded model} --gpus 4 --out {tmp.pkl} --eval bbox
# Mask R-CNN
python tools/test.py {configs_file} {downloaded model} --gpus 4 --out {tmp.pkl} --eval bbox segm
```

### Train Mask R-CNN

On one node with 4 GPUs:

```shell
# SyncBN
./tools/dist_train.sh ./configs/cbn/mask_rcnn_r50_fpn_syncbn_1x.py 4
# GN
./tools/dist_train.sh ./configs/cbn/mask_rcnn_r50_fpn_gn_1x.py 4
# CBN
./tools/dist_train.sh ./configs/cbn/mask_rcnn_r50_fpn_cbn_buffer3_burnin8_1x.py 4
```

## TODO

- Clean up mmdetection code base
- Add CBN layer support
- Add default configs for training
- Upload pretrained models for quick test demo
- Provide a conv_module of Conv & CBN
- Speed up the CBN layer with CUDA/cuDNN

## Thanks

This implementation is based on mmdetection. Refer to the mmdetection repository for more details.
