Code for our papers "Multiple Instance Detection Network with Online Instance Classifier Refinement" and "PCL: Proposal Cluster Learning for Weakly Supervised Object Detection".

Multiple Instance Detection Network with Online Instance Classifier Refinement

By Peng Tang, Xinggang Wang, Xiang Bai, and Wenyu Liu.

The code to train and evaluate OICR with PyTorch as the backend is available here. Thanks, Vadim!

We have also released the code of our PCL work here. PCL is an extension of OICR and obtains better performance than OICR!


Online Instance Classifier Refinement (OICR) is a framework for weakly supervised object detection with deep ConvNets.

  • It achieves state-of-the-art performance on weakly supervised object detection (PASCAL VOC 2007 and 2012).
  • Our code is written in C++ and Python, based on Caffe, fast r-cnn, and faster r-cnn.

The paper has been accepted by CVPR 2017. For more details, please refer to our paper.
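The core idea of the online refinement is that each refinement stage is supervised by the previous stage's output: for every class present in the image, the top-scoring proposal from the previous stage labels itself and its highly overlapping proposals as positives for the next stage. Below is a rough numpy sketch of that idea; the function names, IoU threshold, and label convention (0 = background) are illustrative assumptions, not the repository's actual code.

```python
import numpy as np

def iou(box, boxes):
    # Intersection-over-union between one box and an array of boxes (x1, y1, x2, y2).
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def refine_labels(boxes, scores, image_labels, iou_thr=0.5):
    """Pseudo-labels for the next refinement stage: the top-scoring proposal
    for each image-level class labels itself and its overlapping proposals."""
    labels = np.zeros(len(boxes), dtype=int)  # 0 = background
    for c in image_labels:                    # classes present in the image
        top = scores[:, c].argmax()
        labels[iou(boxes[top], boxes) >= iou_thr] = c
    return labels

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [50, 50, 60, 60]], float)
scores = np.array([[0.1, 0.9], [0.2, 0.7], [0.8, 0.1]])  # per-class proposal scores
print(refine_labels(boxes, scores, image_labels=[1]))  # → [1 1 0]
```

The third proposal does not overlap the top-scoring one, so it stays background; in the paper this supervision is additionally weighted by the previous stage's score.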


OICR architecture


| Method          | VOC2007 test mAP | VOC2007 trainval CorLoc | VOC2012 test mAP | VOC2012 trainval CorLoc |
|-----------------|------------------|-------------------------|------------------|-------------------------|
| OICR-VGG_M      | 37.9             | 57.3                    | 34.6             | 60.7                    |
| OICR-VGG16      | 41.2             | 60.6                    | 37.9             | 62.1                    |
| OICR-Ens.       | 42.0             | 61.2                    | 38.2             | 63.5                    |
| OICR-Ens.+FRCNN | 47.0             | 64.3                    | 42.5             | 65.6                    |


Some OICR visualization results.

Some visualization comparisons among WSDDN, WSDDN+context, and OICR.


OICR is released under the MIT License (refer to the LICENSE file for details).

Citing OICR

If you find OICR useful in your research, please consider citing:

    @inproceedings{tang2017multiple,
        Author = {Tang, Peng and Wang, Xinggang and Bai, Xiang and Liu, Wenyu},
        Title = {Multiple Instance Detection Network with Online Instance Classifier Refinement},
        Booktitle = {CVPR},
        Year = {2017}
    }


  1. Requirements: software
  2. Requirements: hardware
  3. Basic installation
  4. Installation for training and testing
  5. Extra Downloads (selective search)
  6. Extra Downloads (ImageNet models)
  7. Usage

Requirements: software

  1. Requirements for Caffe and pycaffe (see: Caffe installation instructions)

Note: Caffe must be built with support for Python layers!

# In your Makefile.config, make sure to have this line uncommented
WITH_PYTHON_LAYER := 1
  2. Python packages you might not have: cython, python-opencv, easydict
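A quick way to see which of these packages are still missing is to probe for their import names; note the import names here are assumptions (opencv imports as cv2, cython as Cython):

```python
import importlib.util

# Map assumed import names to the package names listed above.
required = {"Cython": "cython", "cv2": "python-opencv", "easydict": "easydict"}

missing = [pkg for mod, pkg in required.items()
           if importlib.util.find_spec(mod) is None]
print("missing packages:", ", ".join(missing) if missing else "none")
```

`find_spec` returns None for absent modules instead of raising, so this runs safely on a bare environment.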

Requirements: hardware

  1. NVIDIA GTX TITAN X (~12G of memory)


Basic installation

  1. Clone the OICR repository
# Make sure to clone with --recursive
git clone --recursive
  2. Build the Cython modules
cd $OICR_ROOT/lib
make
  3. Build Caffe and pycaffe
cd $OICR_ROOT/caffe-oicr
# Now follow the Caffe installation instructions here:

# If you're experienced with Caffe and have all of the requirements installed
# and your Makefile.config in place, then simply do:
make all -j 8
make pycaffe

Installation for training and testing

  1. Download the training, validation, and test data and the VOCdevkit
  2. Extract all of these tars into one directory named VOCdevkit
tar xvf VOCtrainval_06-Nov-2007.tar
tar xvf VOCtest_06-Nov-2007.tar
tar xvf VOCdevkit_18-May-2011.tar
  3. It should have this basic structure
$VOCdevkit/                           # development kit
$VOCdevkit/VOCcode/                   # VOC utility code
$VOCdevkit/VOC2007                    # image sets, annotations, etc.
# ... and several other directories ...
  4. Create symlinks for the PASCAL VOC dataset
cd $OICR_ROOT/data
ln -s $VOCdevkit VOCdevkit2007

Using symlinks is a good idea because you will likely want to share the same PASCAL dataset installation between multiple projects.
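The symlink step above can also be done programmatically; this sketch uses temporary directories as stand-ins for $VOCdevkit and $OICR_ROOT/data, so it only illustrates the layout:

```python
import os
import tempfile

# Stand-in directories for $VOCdevkit and $OICR_ROOT/data.
vocdevkit = tempfile.mkdtemp()
data_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(vocdevkit, "VOC2007", "JPEGImages"))

# Equivalent of: ln -s $VOCdevkit VOCdevkit2007
link = os.path.join(data_dir, "VOCdevkit2007")
os.symlink(vocdevkit, link)

# The dataset is now reachable through the project-local path.
print(os.path.isdir(os.path.join(link, "VOC2007", "JPEGImages")))  # True
```

Because the link points at the shared VOCdevkit, several projects can reuse one copy of the dataset.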

  5. [Optional] Follow similar steps to get PASCAL VOC 2012.

  6. Put the generated proposal data under the folder $OICR_ROOT/data/selective_search_data, with the names "voc_2007_trainval.mat" and "voc_2007_test.mat", following the fast-rcnn convention.

  7. The pre-trained models are all available in the Caffe Model Zoo. Put them under the folder $OICR_ROOT/data/imagenet_models, following the fast-rcnn convention.
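When loaded with scipy.io.loadmat, each proposal file roughly yields a dict with an 'images' entry and a matching per-image 'boxes' entry, in the fast-rcnn selective-search format. The sketch below mimics that layout with plain numpy; the key names, box order, and image IDs are assumptions for illustration (loadmat actually returns MATLAB cell arrays as numpy object arrays):

```python
import numpy as np

# Stand-in for the dict scipy.io.loadmat would return for voc_2007_trainval.mat.
proposals = {
    "images": np.array(["000005", "000007"], dtype=object),
    "boxes": [
        np.array([[1, 1, 100, 150], [20, 30, 200, 220]]),  # boxes for image 000005
        np.array([[5, 10, 120, 90]]),                       # boxes for image 000007
    ],
}

# One variable-length (num_boxes, 4) array per image, aligned with 'images'.
for name, boxes in zip(proposals["images"], proposals["boxes"]):
    print(name, boxes.shape)
```

The key invariant is that `images` and `boxes` have the same length and stay index-aligned; the data layer looks proposals up by image name.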

Download pre-computed Selective Search object proposals

Pre-computed selective search boxes can also be downloaded for VOC2007 and VOC2012.


This will populate the $OICR_ROOT/data folder with selective_search_data. (The script is copied from fast-rcnn.)

Download pre-trained ImageNet models

Pre-trained ImageNet models can be downloaded.


These models are all available in the Caffe Model Zoo, but are provided here for your convenience. (The script is copied from fast-rcnn.)


Usage

Train an OICR network. For example, train a VGG16 network on VOC 2007 trainval:

./tools/ --gpu 1 --solver models/VGG16/solver.prototxt \
  --weights data/imagenet_models/$VGG16_model_name --iters 70000

Test an OICR network. For example, test the VGG16 network on VOC 2007 test:

On trainval

./tools/ --gpu 1 --def models/VGG16/test.prototxt \
  --net output/default/voc_2007_trainval/vgg16_oicr_iter_70000.caffemodel \
  --imdb voc_2007_trainval

On test

./tools/ --gpu 1 --def models/VGG16/test.prototxt \
  --net output/default/voc_2007_trainval/vgg16_oicr_iter_70000.caffemodel \
  --imdb voc_2007_test

Test output is written underneath $OICR_ROOT/output.
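Trained snapshots follow the output paths used in the commands above; a short sketch for locating every .caffemodel under output/ (this demo builds a fake layout in a temp dir, since the real directory names depend on your runs):

```python
import glob
import os
import tempfile

# Fake stand-in for $OICR_ROOT, mirroring the paths used in the commands above.
root = tempfile.mkdtemp()
model_dir = os.path.join(root, "output", "default", "voc_2007_trainval")
os.makedirs(model_dir)
open(os.path.join(model_dir, "vgg16_oicr_iter_70000.caffemodel"), "w").close()

# Find every trained snapshot under output/.
snapshots = glob.glob(os.path.join(root, "output", "**", "*.caffemodel"),
                      recursive=True)
print([os.path.basename(p) for p in snapshots])
```

The recursive `**` pattern matches any experiment/imdb subdirectory, which is handy once you have trained several models.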


For mAP, run the Python code under tools/:

./tools/ $output_dir --imdb voc_2007_test --matlab

For CorLoc, run the Python code under tools/:

./tools/ $output_dir --imdb voc_2007_trainval

The code for training fast rcnn with pseudo ground truths is available here.
