This is a Caffe implementation of Excitation Backprop for RNNs described in

Sarah Adel Bargal*, Andrea Zunino*, Donghyun Kim, Jianming Zhang, Vittorio Murino, Stan Sclaroff. "Excitation Backprop for RNNs." CVPR, 2018.

This software implementation is provided for academic research and non-commercial purposes only. This implementation is provided without warranty.

See also our companion repo for Excitation Backprop for CNNs.


Prerequisites

  1. The same prerequisites as Caffe
  2. Anaconda (Python packages)

Quick Start

  1. Unzip the files to a local folder (denoted as root_folder).
  2. Enter the root_folder and compile the code the same way as in Caffe.
  • Our code is tested in GPU mode, so make sure to enable GPU support when compiling.
  • Make sure to also compile pycaffe, the Python interface.
  3. Enter root_folder/excitationBP-RNNs and run demo.ipynb in the Python notebook. It shows how to compute the spatiotemporal saliency maps of a video, and includes the examples from the demo video. For details on running the Python notebook remotely on a server, see here.
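For intuition about what the demo computes, the single-layer excitation backprop rule (the probabilistic winner-take-all formulation that this work extends to RNNs) can be sketched in NumPy. This is an illustrative sketch only: the function name and array shapes are our own, and the actual repo implements this rule inside Caffe's GPU/CPU layers, not in Python.

```python
import numpy as np

def eb_backward(p_out, a_in, W):
    """One excitation-backprop step through a linear layer a_out = W @ a_in.

    p_out: winning probabilities assigned to the output neurons
    a_in:  non-negative input activations saved from the forward pass
    W:     weight matrix of shape (n_out, n_in)
    """
    Wp = np.maximum(W, 0.0)        # keep only excitatory (positive) weights
    Z = Wp @ a_in                  # per-output normalizer: sum_j w_ij^+ a_j
    Z = np.where(Z > 0, Z, 1.0)    # guard: outputs with no excitatory input
                                   # simply pass no probability down
    # redistribute each output's probability to its excitatory inputs,
    # in proportion to each input's positive contribution
    return a_in * (Wp.T @ (p_out / Z))
```

Applied layer by layer from the dummy loss down to the input frames, this redistribution conserves the total probability mass (when every output has some excitatory input), which is what makes the resulting per-pixel, per-frame values interpretable as a spatiotemporal saliency map.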

Other comments

  1. We implemented both GPU and CPU versions of Excitation Backprop for RNNs. Change caffe.set_mode_eb_gpu() to caffe.set_mode_eb_cpu() to run the CPU version.
  2. You can download a pre-trained action recognition model at this link. The model must be placed in the folder root_folder/models/VGG16_LSTM/
  3. To use your own CNN-LSTM model, modify root_folder/models/VGG16_LSTM/deploy.prototxt and add a dummy loss layer at the end of the file.
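As a sketch, the dummy loss appended to the end of deploy.prototxt could look like the following. The bottom blob names ("fc8-final", "label") are placeholders; replace them with your model's actual prediction and label blobs.

```protobuf
# Hypothetical tail of deploy.prototxt -- a dummy loss layer so that
# excitation backprop has a top-level node to start from.
layer {
  name: "dummy_loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8-final"   # placeholder: your model's prediction blob
  bottom: "label"       # placeholder: your model's label blob
  top: "loss"
}
```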


Citation

@inproceedings{excitation_backprop_rnns,
author = {Adel Bargal, Sarah and Zunino, Andrea and Kim, Donghyun and Zhang, Jianming and Murino, Vittorio and Sclaroff, Stan},
title = {Excitation Backprop for RNNs},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}