# Caffe for Sparse and Low-rank Deep Neural Networks

## Lower-rank Deep Neural Networks (ICCV 2017)

Paper: Coordinating Filters for Faster Deep Neural Networks.

The poster is available.

The source code is in this `master` branch.

## Sparse Deep Neural Networks (NIPS 2016)

See the source code in the `scnn` branch.

## Ternary Gradients to Reduce Communication in Distributed Deep Learning (NIPS 2017 Oral)

A work to accelerate training; the code is released in a separate repository.

## Direct Sparse Convolution and Guided Pruning (ICLR 2017)

Originally in the `intel` branch, now merged into IntelLabs/SkimCaffe with major contributions by @jspark1105.

## Caffe Version

The `master` branch is based on Caffe @ commit `eb4ba30`.

## Lower-rank Deep Neural Networks (ICCV 2017)

A tutorial on using Python to decompose trained DNNs into a low-rank space is available.
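
For intuition, decomposition replaces one layer with a pair of thinner layers whose composition approximates the original weights. Below is a minimal prototxt sketch of that structure, assuming a hypothetical `conv1` with 64 outputs decomposed at rank 16; the layer names and numbers are illustrative only, not taken from the tutorial:

```
# Original layer: conv1, num_output: 64, kernel_size: 5.
# Low-rank pair: 16 "basis" filters, then a 1x1 conv that
# linearly recombines the 16 basis responses into 64 outputs.
layer {
  name: "conv1_lowrank"
  type: "Convolution"
  bottom: "data"
  top: "conv1_lowrank"
  convolution_param { num_output: 16 kernel_size: 5 }
}
layer {
  name: "conv1_recover"
  type: "Convolution"
  bottom: "conv1_lowrank"
  top: "conv1"
  convolution_param { num_output: 64 kernel_size: 1 }
}
```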

If you hit any problems, bugs, or questions, you are welcome to open an issue and we will respond as soon as possible.

Details of Force Regularization are in the paper: Coordinating Filters for Faster Deep Neural Networks.

### Training with Force Regularization for Lower-rank DNNs

It is easy to use the code to train DNNs toward lower-rank versions. Only three additional protobuf configurations are required:

1. `force_decay` in `SolverParameter`: specified in the solver. The coefficient that trades off accuracy against ranks: the larger `force_decay`, the smaller the ranks and usually the lower the accuracy.
2. `force_type` in `SolverParameter`: specified in the solver. The kind of force used to coordinate filters. `Degradation`: the strength of the pairwise attractive force decreases as the distance between filters decreases; this is the L2-norm force in the paper. `Constant`: the strength of the pairwise attractive force stays constant regardless of the distance; this is the L1-norm force in the paper.
3. `force_mult` in `ParamSpec`: specified for the weight `param` of each layer. The local multiplier of `force_decay` for the filters in that layer, i.e., `force_mult * force_decay` is the final coefficient for that layer. You can set `force_mult: 0.0` to eliminate force regularization in any layer (see the sketch after this list).
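
As a sketch of item 3, here is a LeNet-style convolution layer with `force_mult` set on its `param` fields; everything except the `force_mult` entries is plain Caffe, and the values are illustrative:

```
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  # Weights: the effective coefficient is force_mult * force_decay.
  param { lr_mult: 1 force_mult: 1.0 }
  # Biases: force_mult: 0.0 disables force regularization for this blob.
  param { lr_mult: 2 force_mult: 0.0 }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
  }
}
```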

See details and implementations in `caffe.proto` and `SGDSolver`.
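
For intuition only, the two force types can be read as the following regularizers over unit-normalized filters $\hat{w}_i$; this is a simplified reconstruction from the descriptions above, not the paper's exact notation:

$$
R_{\text{Degradation}}(W) = \sum_{i<j} \left\lVert \hat{w}_i - \hat{w}_j \right\rVert_2^2,
\qquad
R_{\text{Constant}}(W) = \sum_{i<j} \left\lVert \hat{w}_i - \hat{w}_j \right\rVert_1 .
$$

The gradient of the L2 form shrinks as filters approach each other (the attraction "degrades"), while the per-coordinate gradient of the L1 form keeps a constant magnitude, matching the two `force_type` behaviors.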

### Examples

An example of training LeNet with L1-norm force regularization:

```
##############################################################\
# The train/test net with local force decay multipliers
net: "examples/mnist/lenet_train_test_force.prototxt"
##############################################################/

test_iter: 100
test_interval: 500
# The base learning rate. For large-scale DNNs, you might try a base_lr
# 0.1x smaller than the one used to train the original DNN from scratch.
base_lr: 0.01
momentum: 0.9
weight_decay: 0.0005

##############################################################\
# The coefficient of force regularization:
# the hyper-parameter to tune for the accuracy/rank trade-off.
force_decay: 0.001
# The type of force - L1-norm force
force_type: "Constant"
##############################################################/

# The learning rate policy
lr_policy: "multistep"
gamma: 0.9
stepvalue: 5000
stepvalue: 7000
stepvalue: 8000
stepvalue: 9000
stepvalue: 9500
# Display every 100 iterations
display: 100
# The maximum number of iterations
max_iter: 10000
# Snapshot intermediate results
snapshot: 5000
snapshot_prefix: "examples/mnist/lower_rank_lenet"
snapshot_format: HDF5
solver_mode: GPU
```
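
With the solver saved, training runs through the standard Caffe tool, e.g. `./build/tools/caffe train --solver=examples/mnist/lenet_solver_force.prototxt` (the solver filename here is hypothetical); no other change to the usual Caffe workflow is needed.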

Retraining an already-trained DNN with force regularization might give better results than training from scratch.

### Hyperparameter

We include the hyperparameter `lambda_s` used for AlexNet in Figure 6 of the paper.

### Some Open Research Topics

Force Regularization can squeeze/coordinate weight information into a much lower-rank space, but after low-rank decomposition with the same precision of approximation, it is more challenging to recover the accuracy of the resulting, much more lightweight DNNs.

## License and Citation

Please cite our ICCV paper and Caffe if they are useful for your research:

```
@InProceedings{Wen_2017_ICCV,
  author    = {Wen, Wei and Xu, Cong and Wu, Chunpeng and Wang, Yandan and Chen, Yiran and Li, Hai},
  title     = {Coordinating Filters for Faster Deep Neural Networks},
  booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2017}
}
```

Caffe is released under the BSD 2-Clause license. The BVLC reference models are released for unrestricted use.

Please cite Caffe in your publications if it helps your research:

```
@article{jia2014caffe,
  author  = {Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
  journal = {arXiv preprint arXiv:1408.5093},
  title   = {Caffe: Convolutional Architecture for Fast Feature Embedding},
  year    = {2014}
}
```