
Code for our ICCV 2019 paper "Integral Object Mining via Online Attention Accumulation"




Online Attention Accumulation

This code is based on the Caffe framework. We have since re-implemented it in PyTorch at OAA-PyTorch.

Video showing attention evolution:

Watch the video

Video showing attention accumulation:

Watch the video

Thanks to Ling-Hao Han for contributing these videos.

Table of Contents

  1. Pre-computed results
  2. Installation
  3. Implementation
  4. Citation

Pre-computed Results

We provide pre-trained models, pre-computed attention maps, and saliency maps:

  • The pre-trained integral attention model. [link]
  • The pre-computed attention maps for OAA and OAA+.
  • The saliency maps used for proxy labels. [link]
  • The code for generating proxy segmentation labels can be downloaded from this link.
  • The pre-trained VGG16-based segmentation models for OAA and OAA+.
  • CRF parameters: bi_w = 3, bi_xy_std = 67, bi_rgb_std = 4, pos_w = 1, pos_xy_std = 3.
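The linked code is the authoritative recipe for building proxy segmentation labels; as a rough, hypothetical sketch of the usual idea (attention maps assign foreground classes, saliency carves out background, everything else is ignored), assuming maps normalized to [0, 1]:

```python
import numpy as np

def make_proxy_label(attention, saliency, image_classes,
                     fg_thresh=0.3, bg_thresh=0.1):
    """Hypothetical proxy-label recipe (thresholds are illustrative).

    attention     : (C, H, W) array in [0, 1], one map per class
    saliency      : (H, W) array in [0, 1]
    image_classes : class indices present in the image (image-level labels)
    Returns an (H, W) uint8 label map (0 = background, 255 = ignore).
    """
    h, w = saliency.shape
    label = np.full((h, w), 255, dtype=np.uint8)   # start as "ignore"
    label[saliency < bg_thresh] = 0                # low saliency -> background
    # the strongest attention among the image-level classes claims the pixel
    att = attention[image_classes]                 # (K, H, W)
    best = att.argmax(axis=0)
    conf = att.max(axis=0)
    for k, cls in enumerate(image_classes):
        # foreground classes are 1-based in VOC label maps (0 is background)
        label[(best == k) & (conf > fg_thresh)] = cls + 1
    return label
```

Pixels where neither the saliency nor any attention map is confident stay at 255 and are ignored by the segmentation loss.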

Installation

1. Prerequisites

  • Ubuntu 16.04
  • Python 2.7 or Python 3.x (adjust the print statements in the *.py scripts accordingly)
  • Caffe dependencies

2. Compile Caffe

git clone https://github.com/PengtaoJiang/OAA.git
cd OAA/
make all -j4 && make pycaffe

3. Download

Dataset

Download the VOCdevkit.tar.gz file and extract the VOC data into the data/ folder.

Init models

Download this model to initialize the classification network, and move it to examples/oaa.
Download this model to initialize the VGG-based DeepLab-LargeFOV network, and move it to examples/seg.
Download this model to initialize the ResNet-based DeepLab-LargeFOV network, and move it to examples/seg.

Implementation

1. Attention Generation

First, train the classification network to accumulate attention maps,

cd examples/oaa/
./train.sh exp1 0
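During this training run, OAA combines the attention maps produced at different training stages into a cumulative map per (image, class) pair; the paper's accumulation step is an element-wise maximum. A minimal sketch of that update rule (the class and storage layout here are illustrative, not the repository's actual data structures):

```python
import numpy as np

class AttentionAccumulator:
    """Minimal sketch of online attention accumulation: each new attention
    map is folded into the cumulative map with an element-wise maximum, so
    object regions discovered at any training stage are preserved."""

    def __init__(self):
        self.cumulative = {}  # (image_id, class_id) -> (H, W) map

    def update(self, image_id, class_id, attention):
        key = (image_id, class_id)
        if key not in self.cumulative:
            self.cumulative[key] = attention.copy()
        else:
            np.maximum(self.cumulative[key], attention,
                       out=self.cumulative[key])
        return self.cumulative[key]
```

The maximum is what makes the accumulation "integral": regions highlighted early in training are not forgotten when the classifier's attention later drifts to other discriminative parts.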

After OAA is finished, you can resize the cumulative attention maps to the size of the original images by

cd exp1/
python res.py
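res.py is the repository's script for this resizing step. As a rough stand-in for what such a step does, here is a nearest-neighbor upsampler in plain numpy (real pipelines would more likely use bilinear interpolation via OpenCV or PIL):

```python
import numpy as np

def resize_nearest(att, out_h, out_w):
    """Nearest-neighbor resize of a 2-D attention map via index maps."""
    h, w = att.shape
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return att[rows[:, None], cols]
```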

(Optional)
After OAA, you can additionally train an integral attention model.
This requires several steps.
First, rename the cumulative attention maps,

cd exp1/
python res1.py
python eval.py 30000 0

Second, train the integral attention model,

cd examples/oaa/
./train.sh exp2 0

Third, generate attention maps from the integral attention model,

cd examples/oaa/exp2/
python eval.py 30000 0

2. Segmentation

We provide two DeepLab-LargeFOV versions: VGG16 (examples/seg/exp1) and ResNet101 (examples/seg/exp2).
After generating the proxy labels, put them into data/VOCdevkit/VOC2012/.
Adjust the training list train_ins.txt,

cd examples/seg/exp1/
vim train_ins.txt
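The exact format of train_ins.txt is defined by the repository; assuming the common Caffe/DeepLab convention of one "image-path label-path" pair per line, a hypothetical generator would look like this (paths and ids are placeholders, check the shipped train_ins.txt for the real layout):

```python
# Hypothetical helper: writes one "image-path label-path" pair per line,
# the usual convention for Caffe/DeepLab training lists. Verify against
# the repository's own train_ins.txt before using.
image_ids = ["2007_000032", "2007_000039"]  # placeholder VOC image ids

with open("train_ins.txt", "w") as f:
    for img_id in image_ids:
        f.write("/JPEGImages/{0}.jpg /proxy_labels/{0}.png\n".format(img_id))
```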

Train

cd examples/seg/
./train.sh exp1 0

Test

python eval.py 15000 0 exp1
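Evaluation on PASCAL VOC segmentation is conventionally scored with mean intersection-over-union (mIoU). A minimal numpy sketch of that metric, via a confusion matrix (this is the standard definition, not the repository's exact eval.py code):

```python
import numpy as np

def mean_iou(preds, gts, num_classes, ignore=255):
    """Mean IoU over classes, accumulated from a confusion matrix.
    preds, gts: iterables of equally-shaped integer label maps."""
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    for p, g in zip(preds, gts):
        mask = g != ignore                       # skip "ignore" pixels
        conf += np.bincount(
            num_classes * g[mask].astype(np.int64) + p[mask],
            minlength=num_classes ** 2,
        ).reshape(num_classes, num_classes)
    inter = np.diag(conf)                        # true positives per class
    union = conf.sum(0) + conf.sum(1) - inter    # pred + gt - overlap
    return (inter / np.maximum(union, 1)).mean()
```

Note that classes absent from both predictions and ground truth contribute an IoU of 0 here; production evaluators typically exclude such classes from the mean.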

If you want to use a CRF to smooth the segmentation results, download the CRF code from this link.
Move the code to examples/seg/ and compile it. Then uncomment lines 175 and 176 in examples/seg/eval.py.
The CRF parameters are in examples/seg/utils.py.
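For reference, the parameter values quoted earlier in this README can be collected in one place; the names follow the DeepLab dense-CRF convention of a bilateral (appearance) kernel plus a positional Gaussian (smoothness) kernel. The actual values used at run time live in examples/seg/utils.py:

```python
# CRF parameters quoted in this README (authoritative copy: examples/seg/utils.py).
CRF_PARAMS = {
    "bi_w": 3,        # bilateral kernel weight
    "bi_xy_std": 67,  # bilateral spatial standard deviation (pixels)
    "bi_rgb_std": 4,  # bilateral color standard deviation
    "pos_w": 1,       # positional (Gaussian) kernel weight
    "pos_xy_std": 3,  # positional spatial standard deviation (pixels)
}
```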

Citation

If you use this code or these models in your research, please cite:

@inproceedings{jiang2019integral,
  title={Integral Object Mining via Online Attention Accumulation},
  author={Jiang, Peng-Tao and Hou, Qibin and Cao, Yang and Cheng, Ming-Ming and Wei, Yunchao and Xiong, Hong-Kai},
  booktitle={Proceedings of the IEEE International Conference on Computer Vision},
  pages={2070--2079},
  year={2019}
}
@article{jiang2021online,
  title={Online Attention Accumulation for Weakly Supervised Semantic Segmentation},
  author={Jiang, Peng-Tao and Han, Ling-Hao and Hou, Qibin and Cheng, Ming-Ming and Wei, Yunchao},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  publisher={IEEE}
}

If you have any questions about our paper "Integral Object Mining via Online Attention Accumulation", please feel free to contact me (pt.jiang AT mail DOT nankai.edu.cn).

License

The source code is free for research and education use only. Any commercial use requires formal permission first.
