Pretrained bag-of-local-features neural networks



This repository contains the model specifications and pretrained weights for the bag-of-local-features models (BagNets) published in "Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet" (Brendel & Bethge, ICLR 2019).


Installation

pip install git+


The code provides a simple way to initialize the models in either PyTorch or Keras. After installation, use the following code snippets to load the models:

import bagnets.pytorch
pytorch_model = bagnets.pytorch.bagnet17(pretrained=True)

import bagnets.keras
keras_model = bagnets.keras.bagnet17()

Replace bagnet17 with whichever size you want (available are bagnet9, bagnet17 and bagnet33). The number refers to the maximum size of the local patches that the network can integrate over.

Image Preprocessing

The models expect inputs with the standard torchvision preprocessing, i.e.

  • with RGB channels
  • in the format [channel, x, y]
  • with pixel values scaled to the range 0 to 1, which are then
  • normalized by mean and standard deviation: given mean (M1, ..., Mn) and std (S1, ..., Sn) for n channels, the normalization transforms each channel of the input as input[channel] = (input[channel] - mean[channel]) / std[channel]

The mean and standard deviation are:

  • mean = [0.485, 0.456, 0.406]
  • std = [0.229, 0.224, 0.225]
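As a self-contained illustration of the normalization described above, here is a minimal sketch in plain Python (no torchvision dependency; the mean/std constants are the ones listed in this README):

```python
# Per-channel normalization for an image in [channel][x][y] format
# with pixel values already scaled to [0, 1].

MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225]

def normalize(image):
    """Return the normalized image: (pixel - mean) / std per channel."""
    return [
        [[(px - MEAN[c]) / STD[c] for px in row] for row in channel]
        for c, channel in enumerate(image)
    ]

# Example: a 1x1 image whose three channels are all mid-grey (0.5).
img = [[[0.5]], [[0.5]], [[0.5]]]
out = normalize(img)
# out[0][0][0] equals (0.5 - 0.485) / 0.229
```

In practice, this is exactly the transform that torchvision's transforms.Normalize(MEAN, STD), applied after transforms.ToTensor(), performs on each channel.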


If you find BagNets useful for your scientific work, please consider citing it in resulting publications:

@article{brendel2019approximating,
  title={Approximating CNNs with Bag-of-local-Features models works surprisingly well on ImageNet},
  author={Brendel, Wieland and Bethge, Matthias},
  journal={International Conference on Learning Representations},
  year={2019}
}
You can find the paper on OpenReview:

