[ECCV18] Constraint-Aware Deep Neural Network Compression
Constraint-Aware Deep Neural Network Compression

Given a real-time operational constraint, this library can automatically compress a network to satisfy the constraint while preserving accuracy. It is built on top of SkimCaffe, which implements direct sparse convolution operations and delivers an effective speedup given a sparse input network. The framework can be applied to different constraint types (latency, memory size), different networks (AlexNet, ResNet, GoogLeNet), and different datasets (ImageNet, DTD).

For more technical details, please refer to the paper.
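At a high level, the framework searches for per-layer sparsity levels that satisfy the given constraint while keeping the accuracy loss small. The sketch below illustrates that idea only; the function names, the toy latency and accuracy models, and the greedy search are illustrative assumptions (the actual library relies on SkimCaffe's sparse kernels and a cooling-based sampling procedure, not this code).

```python
# Illustrative sketch (NOT the library's actual algorithm): choose
# per-layer sparsity levels so that total latency meets a budget,
# greedily pruning where the latency saved per unit of accuracy
# penalty is highest.

def layer_latency(base_ms, sparsity):
    # Toy model: direct sparse convolution time scales with density.
    return base_ms * (1.0 - sparsity)

def accuracy_penalty(sparsity):
    # Toy proxy: pruning more weights costs more accuracy.
    return sparsity ** 2

def compress(base_latencies_ms, budget_ms, candidates=(0.0, 0.5, 0.7, 0.9)):
    """Greedily raise sparsity until the latency budget is met."""
    chosen = [0.0] * len(base_latencies_ms)

    def total(sparsities):
        return sum(layer_latency(b, s)
                   for b, s in zip(base_latencies_ms, sparsities))

    while total(chosen) > budget_ms:
        best = None
        for i, s in enumerate(chosen):
            nxt = [c for c in candidates if c > s]
            if not nxt:
                continue  # layer already at maximum candidate sparsity
            gain = (layer_latency(base_latencies_ms[i], s)
                    - layer_latency(base_latencies_ms[i], nxt[0]))
            cost = accuracy_penalty(nxt[0]) - accuracy_penalty(s)
            score = gain / cost
            if best is None or score > best[0]:
                best = (score, i, nxt[0])
        if best is None:
            break  # budget infeasible even at maximum sparsity
        chosen[best[1]] = best[2]
    return chosen

# Three layers with baseline latencies 10, 20, and 5 ms; 20 ms budget.
sparsities = compress([10.0, 20.0, 5.0], budget_ms=20.0)
```

Here the search sparsifies the two most expensive layers to 50% each, which exactly meets the 20 ms budget without touching the cheapest layer.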

Framework Overview

Usage

  1. First, follow the build instructions in SkimCaffe to build SkimCaffe.
  2. Modify the cfp.config file to suit your needs (e.g., constraint values).
  3. Run training from the root directory:
python2 main.py
  4. To visualize the training results, specify the output directory:
python2 pruning/visualize_cbo_results.py [OUTPUT_DIR]

Results

Visualization of sampled data points in the first exponential cooling step


Citation

If you use this code or ideas from the paper for your research, please cite our paper:

@InProceedings{Chen_2018_ECCV,
  author    = {Chen, Changan and Tung, Frederick and Vedula, Naveen and Mori, Greg},
  title     = {Constraint-Aware Deep Neural Network Compression},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018}
}