
TorchRay

The TorchRay package implements several visualization methods for deep convolutional neural networks using PyTorch. In this release, TorchRay focuses on attribution, namely the problem of determining which part of the input, usually an image, is responsible for the value computed by a neural network.

TorchRay is research-oriented: in addition to implementing well-known techniques from the literature, it provides code for reproducing results that appear in several papers, in order to support reproducible research.

TorchRay was initially developed to support the paper:

  • Understanding deep networks via extremal perturbations and smooth masks. Fong, Patrick, Vedaldi. Proceedings of the International Conference on Computer Vision (ICCV), 2019.

Examples

The package contains several usage examples in the examples subdirectory.

Here is a complete example of using Grad-CAM:

from torchray.attribution.grad_cam import grad_cam
from torchray.benchmark import get_example_data, plot_example

# Obtain example data.
model, x, category_id, _ = get_example_data()

# Grad-CAM backprop.
saliency = grad_cam(model, x, category_id, saliency_layer='features.29')

# Plots.
plot_example(x, saliency, 'grad-cam backprop', category_id)
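
Since TorchRay was developed alongside the extremal perturbations paper cited above, that method is also available in the package. The following is a minimal sketch, assuming it is exposed as extremal_perturbation (with a contrastive_reward reward function) in the torchray.attribution.extremal_perturbation module; the script in the examples subdirectory is the reference version.

from torchray.attribution.extremal_perturbation import extremal_perturbation, contrastive_reward
from torchray.benchmark import get_example_data, plot_example

# Obtain example data.
model, x, category_id, _ = get_example_data()

# Extremal perturbation: optimize a smooth mask of a fixed area that
# maximally preserves the score of the target category.
masks, _ = extremal_perturbation(
    model, x, category_id,
    reward_func=contrastive_reward,
    areas=[0.12],  # fraction of the image covered by the mask (assumed argument)
)

# Plots.
plot_example(x, masks, 'extremal perturbation', category_id)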

Requirements

TorchRay requires:

  • Python 3.4 or greater
  • pytorch 1.1.0 or greater
  • matplotlib

For benchmarking, it also requires:

  • torchvision 0.3.0 or greater
  • pycocotools
  • mongodb (suggested)
  • pymongo (suggested)

On Linux/macOS, using conda, you can install these as follows:

while read requirement; do conda install \
-c defaults -c pytorch -c conda-forge --yes $requirement; done <<EOF
pytorch>=1.1.0
pycocotools
torchvision>=0.3.0
mongodb
pymongo
EOF

Installing TorchRay

Using pip:

pip install torchray

From source:

python setup.py install

or

pip install .
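
To check that the installation succeeded, one option is to import the modules used in the Grad-CAM example above from a Python interpreter:

# Verify the installation by importing the modules used in the example above.
from torchray.attribution.grad_cam import grad_cam
from torchray.benchmark import get_example_data, plot_example

print("TorchRay imported successfully")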

Full documentation

The full documentation can be found here.

Changes

See the CHANGELOG.

Join the TorchRay community

See the CONTRIBUTING file for how to help out.

The team

TorchRay has been primarily developed by Ruth C. Fong and Andrea Vedaldi.

License

TorchRay is CC-BY-NC licensed, as found in the LICENSE file.
