This repository contains the code for our papers:

  • IJCNN 2019, "Structured Pruning for Efficient ConvNets via Incremental Regularization"
  • BMVC 2018, "Structured Probabilistic Pruning for Convolutional Neural Network Acceleration"
Pruned models

| Model    | Baseline accuracy (%) | Speedup ratio | Pruned accuracy (%) |
|----------|-----------------------|---------------|---------------------|
| vgg16    | 70.62/89.56           | 5x            | 67.62/88.04         |
| resnet50 | 72.92/91.18           | 2x            | 72.47/91.05         |

Note:

  • The speedup ratio is the theoretical value, measured by the FLOPs reduction in the conv layers only.
  • The baseline accuracies were obtained by evaluating the downloaded models on our prepared ImageNet dataset, without finetuning them.
  • The provided pruned caffemodels are only zero-masked; the zeroed weight filters or columns are not removed, so the files are literally the same size as their baseline counterparts.
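To make the notes above concrete, here is a minimal NumPy sketch (the layer shape and mask are hypothetical, not taken from the repo) that counts the surviving columns in a zero-masked conv weight tensor and derives the theoretical speedup from the conv-layer FLOPs reduction:

```python
import numpy as np

def conv_flops(c_in, c_out, k, h_out, w_out):
    """Multiply-accumulates of one conv layer (bias ignored)."""
    return c_out * c_in * k * k * h_out * w_out

# Hypothetical zero-masked conv weight of shape (c_out, c_in, kh, kw).
# Column pruning zeroes the same (c_in, kh, kw) position across all filters.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32, 3, 3))
cols = w.reshape(64, -1)             # view: (c_out, c_in*k*k) columns
cols[:, : cols.shape[1] // 2] = 0    # zero-mask half of the columns

kept_cols = np.count_nonzero(np.any(cols != 0, axis=0))
total_cols = cols.shape[1]

# Theoretical speedup: only surviving columns cost FLOPs at inference.
base = conv_flops(32, 64, 3, 224, 224)
pruned = base * kept_cols / total_cols
speedup = base / pruned
print(f"kept {kept_cols}/{total_cols} columns, "
      f"theoretical speedup = {speedup:.1f}x")
```

Note that the masked tensor still stores all 64x32x3x3 values, which is why the zero-masked caffemodel is no smaller on disk than the baseline one.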

Environment

  • Ubuntu 14.04
  • Caffe
  • Python 2.7
  • cuDNN

How to run the code

  1. Download this repo and compile it with make -j24 (see Caffe's official installation guide). Make sure the compilation succeeds before proceeding.
  2. Here we show how to run the code, taking lenet5 as an example:
    • Preparation:
      • Data: create your MNIST training and testing LMDBs (or download ours), and put them in data/mnist/mnist_train_lmdb and data/mnist/mnist_test_lmdb.
      • Pretrained model: we provide a pretrained lenet5 model at compression_experiments/mnist/weights/baseline_lenet5.caffemodel (test accuracy = 0.991).
    • We have set up an experiment folder in compression_experiments/lenet5 containing three files: train.sh, solver.prototxt, and train_val.prototxt. They hold the path settings and, in solver.prototxt, the pruning configs; these are already filled in for you, but you are free to change them.
    • From your Caffe root path, run nohup sh compression_experiments/lenet5/train.sh <gpu_id> > /dev/null &, and you are all set! Check your log in compression_experiments/lenet5/weights.

For vgg16 and resnet50, we also provide experiment folders in compression_experiments; check them out and have a try!

Check the log

Two logs are generated during pruning: log_<TimeID>_acc.txt and log_<TimeID>_prune.txt. The former saves the logs printed by the original Caffe; the latter saves the logs printed by our added code.

Go to the project folder (e.g., compression_experiments/lenet5 for lenet5), then run cat weights/*prune.txt | grep app to see the pruning and retraining progress.
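If you prefer to inspect the logs programmatically, the grep command above can be sketched in Python as follows. This assumes only what the README states (prune logs named *prune.txt living in the experiment's weights folder, with the lines of interest containing "app"); the function name and default path are illustrative:

```python
from pathlib import Path

def pruning_progress(weights_dir="compression_experiments/lenet5/weights"):
    """Collect the 'app'-tagged lines from every prune log in weights_dir,
    mirroring `cat weights/*prune.txt | grep app`."""
    lines = []
    for log in sorted(Path(weights_dir).glob("*prune.txt")):
        with open(log) as f:
            lines += [line.rstrip("\n") for line in f if "app" in line]
    return lines
```

From here you can filter further in Python, e.g. keep only the lines for a particular layer before plotting the pruning course.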

Detailed explanation of the options in solver.prototxt

  • target_reg:
  • IF_eswpf:

License and Citation

Caffe is released under the BSD 2-Clause license. The BVLC reference models are released for unrestricted use.

Please cite these in your publications if this code helps your research:

@inproceedings{wang2019increg,
  Author = {Wang, Huan and Zhang, Qiming and Wang, Yuehai and Yu, Lu and Hu, Haoji},
  Title = {Structured Pruning for Efficient ConvNets via Incremental Regularization},
  Booktitle = {IJCNN},
  Year = {2019}
}
@inproceedings{wang2018spp,
  Author = {Wang, Huan and Zhang, Qiming and Wang, Yuehai and Hu, Haoji},
  Title = {Structured probabilistic pruning for convolutional neural network acceleration},
  Booktitle = {BMVC},
  Year = {2018}
}
@article{jia2014caffe,
  Author = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  Journal = {arXiv preprint arXiv:1408.5093},
  Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  Year = {2014}
}