Caffe-ExcitationBP-RNNs

This is a Caffe implementation of Excitation Backprop for RNNs described in

Sarah Adel Bargal*, Andrea Zunino*, Donghyun Kim, Jianming Zhang, Vittorio Murino, Stan Sclaroff. "Excitation Backprop for RNNs." CVPR, 2018.

This software implementation is provided for academic research and non-commercial purposes only, and without warranty of any kind.

See also the companion repo implementing Excitation Backprop for CNNs.

Prerequisites

  1. The same prerequisites as Caffe
  2. Anaconda (for the required Python packages)

Quick Start

  1. Unzip the files to a local folder (denoted as root_folder).
  2. Enter the root_folder and compile the code the same way as in Caffe.
  • Our code is tested in GPU mode, so make sure to enable GPU support when compiling.
  • Make sure to also compile pycaffe, the Python interface.
  3. Enter root_folder/excitationBP-RNNs and run demo.ipynb in the Jupyter/IPython notebook. It shows how to compute the spatiotemporal saliency maps of a video, and includes the examples from the demo video; a minimal sketch of the core steps follows this list. For details on running the notebook remotely on a server, see here.
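
Outside the notebook, the computation reduces to a forward pass followed by an excitation backward pass through pycaffe. The sketch below illustrates this under stated assumptions: the blob and layer names ('fc8', 'conv5_3'), the weights filename, and the input handling are placeholders rather than the repo's actual names; demo.ipynb is the authoritative reference.

```python
# Minimal sketch of an Excitation Backprop pass via pycaffe.
# Blob/layer names and the weights filename are assumptions for
# illustration -- see excitationBP-RNNs/demo.ipynb for the real code.
import numpy as np
import caffe

caffe.set_mode_eb_gpu()  # Excitation Backprop mode, GPU version (per this README)

net = caffe.Net('models/VGG16_LSTM/deploy.prototxt',
                'models/VGG16_LSTM/weights.caffemodel',  # hypothetical filename
                caffe.TEST)

# Forward pass (assumes the video frames are already loaded into net.blobs['data'])
net.forward()

# Seed the top blob's diff with a one-hot vector for the predicted class
top_blob = 'fc8'  # hypothetical name of the final classification blob
scores = net.blobs[top_blob].data
net.blobs[top_blob].diff[...] = 0
net.blobs[top_blob].diff.flat[scores.argmax()] = 1.0

# Propagate excitation down to an intermediate conv layer; the diff of
# that blob, summed over channels, gives the per-frame saliency map.
net.backward(start='fc8', end='conv5_3')  # hypothetical layer names
saliency = net.blobs['conv5_3'].diff.sum(axis=1)  # (frames, H, W), assuming NCHW
```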

Other comments

  1. We implemented both GPU and CPU versions of Excitation Backprop for RNNs. Change caffe.set_mode_eb_gpu() to caffe.set_mode_eb_cpu() to run the CPU version.
  2. A pre-trained action recognition model can be downloaded at this link. Place the model in the folder root_folder/models/VGG16_LSTM/.
  3. To apply your own CNN-LSTM model, modify root_folder/models/VGG16_LSTM/deploy.prototxt by adding a dummy loss layer at the end of the file; a sketch of such a layer follows this list.
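
As a rough illustration of point 3, a dummy loss layer appended to deploy.prototxt could look like the following. This is a sketch only: the layer type and blob names are assumptions, so mirror whatever the shipped deploy.prototxt actually does.

```
# Hypothetical dummy loss layer appended at the end of deploy.prototxt.
layer {
  name: "loss"
  type: "SoftmaxWithLoss"   # placeholder type; match the repo's own choice
  bottom: "fc8"             # assumed top blob of the final classification layer
  bottom: "label"           # dummy label input, if the net defines one
  top: "loss"
}
```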

Reference

@InProceedings{Bargal_2018_CVPR,
  author    = {Adel Bargal, Sarah and Zunino, Andrea and Kim, Donghyun and Zhang, Jianming and Murino, Vittorio and Sclaroff, Stan},
  title     = {Excitation Backprop for RNNs},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2018}
}