Implementation of Saliency Tubes for 3D convolutions in PyTorch and Keras, to localise the spatio-temporal regions that 3D CNNs focus on.

Saliency Tubes: Visual Explanations for Spatio-Temporal Convolutions



Deep learning approaches have been established as the main methodology for video classification and recognition. Recently, 3-dimensional convolutions have been used to achieve state-of-the-art performance on many challenging video datasets. Because these methods are highly complex, with the convolution operations extended to an additional dimension in order to extract features from it as well, providing a visualisation of the signals that the network interprets as informative is a challenging task. An effective way to understand the network's inner workings is to isolate the spatio-temporal regions of the video that the network finds most informative. We propose a method called Saliency Tubes, which demonstrates the foremost points and regions, both at frame level and over time, that are the main focus points of the network. We demonstrate our findings on widely used datasets for third-person and egocentric action classification, and enhance the set of methods and visualisations that improve the intelligibility of 3D Convolutional Neural Networks (CNNs).

To appear in IEEE International Conference on Image Processing (ICIP) 2019    
[arXiv preprint]     [video presentation]
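At a high level, the method weights the spatio-temporal activations of the last convolutional layer by the weights of the predicted class and sums over channels, producing a coarse saliency volume. The following is a minimal NumPy sketch of that general idea (CAM-style channel weighting extended over time); the function name and shapes are illustrative and this is not the repository's implementation.

```python
import numpy as np

def saliency_tube(activations, class_weights):
    """Weight per-channel spatio-temporal activations by the class weights
    and sum over channels, yielding a coarse (t, h, w) saliency volume.

    activations   : ndarray of shape (channels, t, h, w), last conv layer
    class_weights : ndarray of shape (channels,), weights of predicted class
    """
    # Weighted sum over the channel axis
    tube = np.tensordot(class_weights, activations, axes=([0], [0]))
    # Normalise to [0, 1] for visualisation as a heatmap
    tube = tube - tube.min()
    if tube.max() > 0:
        tube = tube / tube.max()
    return tube

# Toy example: 4 channels over a 2x3x3 spatio-temporal grid
acts = np.random.rand(4, 2, 3, 3)
w = np.random.rand(4)
tube = saliency_tube(acts, w)
print(tube.shape)  # (2, 3, 3)
```

The resulting volume can then be upscaled to the original clip dimensions and overlaid on the frames as heatmaps.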

For videos, these frames can be turned into a video/GIF with tools such as ImageMagick or imageio.
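For example, with imageio the saved heatmap frames can be stitched into an animated GIF. The filename pattern below is an assumption; point it at wherever your overlay frames were written.

```python
import glob

import imageio

# Collect the per-frame heatmap images (pattern is illustrative)
frame_paths = sorted(glob.glob("output/frame_*.png"))

if frame_paths:
    frames = [imageio.imread(p) for p in frame_paths]
    # Stitch the frames into an animated GIF at 10 frames per second
    imageio.mimsave("saliency_tube.gif", frames, duration=0.1)
```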


Please make sure Git is installed on your machine:

$ sudo apt-get update
$ sudo apt-get install git
$ git clone


Currently the repository supports either Keras or PyTorch models. OpenCV is used for frame-level processing. For resizing the saliency maps to the original video dimensions we use scipy.ndimage.

$ pip install opencv-python
$ pip install scipy
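The resizing step can be sketched with scipy.ndimage.zoom, which upsamples the coarse saliency volume back to the clip resolution. The shapes below (a 2x7x7 volume upscaled to a 16-frame 112x112 clip) are illustrative assumptions, not values fixed by the repository.

```python
import numpy as np
from scipy import ndimage

# Coarse saliency volume from the network (illustrative shape)
coarse = np.random.rand(2, 7, 7)

# Target clip dimensions (illustrative): 16 frames of 112x112
t, h, w = 16, 112, 112
factors = (t / coarse.shape[0], h / coarse.shape[1], w / coarse.shape[2])

# Linear interpolation back to the original video resolution
upscaled = ndimage.zoom(coarse, factors, order=1)
print(upscaled.shape)  # (16, 112, 112)
```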



Citing Saliency Tubes

If you use our code in your research, please use the following BibTeX entry:

@article{stergiou2019saliency,
  title={Saliency Tubes: Visual Explanations for Spatio-Temporal Convolutions},
  author={Stergiou, Alexandros and Kapidis, Georgios and Kalliatakis, Grigorios and Chrysoulas, Christos and Veltkamp, Remco and Poppe, Ronald},
  journal={arXiv preprint arXiv:1902.01078},
  year={2019}
}


Alexandros Stergiou

a.g.stergiou at

Any queries or suggestions are much appreciated!
