PyTorch Code for 'Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions'

Introduction

PyTorch Implementation of our ICML 2018 paper "Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions".

[Poster]
[PPT]

In our paper, we proposed a simple yet effective scheme for compressing convolutions by applying k-means clustering to the weights: compression is achieved through weight sharing, recording only the K cluster centers and the per-weight assignment indexes.
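To make the weight-sharing idea concrete, here is a minimal sketch, not the repository's code: the function name and the choice to cluster scalar weights with plain Lloyd iterations are illustrative assumptions.

```python
import torch

def kmeans_share_weights(conv_weight: torch.Tensor, K: int, n_iter: int = 20):
    """Cluster conv weights into K centers (Lloyd's algorithm) and rebuild
    the layer from the centers plus per-weight assignment indexes."""
    flat = conv_weight.reshape(-1, 1)                         # each scalar weight as a 1-D point
    centers = flat[torch.randperm(flat.size(0))[:K]].clone()  # initialize from K sampled weights
    for _ in range(n_iter):
        assign = torch.cdist(flat, centers).argmin(dim=1)     # nearest-center assignment
        for k in range(K):
            members = flat[assign == k]
            if members.numel() > 0:
                centers[k] = members.mean()
    # Compression: store only `centers` (K floats) and `assign` (log2(K) bits
    # per weight) instead of one full-precision float per weight.
    shared = centers[assign].reshape(conv_weight.shape)
    return shared, centers, assign
```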

We then introduced a novel spectrally relaxed k-means regularization, which encourages hard assignments of convolutional layer weights to K learned cluster centers during re-training.
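For intuition, below is a minimal sketch of such a regularizer under the standard spectral relaxation of k-means, where the discrete cluster-indicator matrix is relaxed to an orthonormal F spanned by the top-K eigenvectors of the Gram matrix of the reshaped layer weights; the reshaping convention and names here are assumptions, not the repository's implementation.

```python
import torch

def spectral_kmeans_reg(W: torch.Tensor, K: int) -> torch.Tensor:
    """Spectrally relaxed k-means penalty Tr(W^T W) - Tr(F^T W^T W F),
    with F the top-K eigenvectors of the Gram matrix W^T W. Adding
    lambda * this term to the task loss pulls the columns of W toward
    K shared centers during re-training."""
    gram = W.t() @ W                      # Gram matrix of the N weight (column) vectors
    _, eigvecs = torch.linalg.eigh(gram)  # eigenvectors in ascending eigenvalue order
    F = eigvecs[:, -K:].detach()          # hold F fixed while W takes a gradient step
    return torch.trace(gram) - torch.trace(F.t() @ gram @ F)
```

In practice F would be recomputed only periodically (it is detached here), so SGD updates on W alternate with refreshing the relaxed assignments.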

We additionally proposed an improved set of metrics for estimating the energy consumption of CNN hardware implementations, whose estimates are verified to be consistent with those of a previously proposed energy estimation tool extrapolated from actual hardware measurements.
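The paper defines the exact access counts and unit costs; as a purely illustrative stand-in, a metric of this kind weights arithmetic against memory traffic roughly as follows (the constants below are placeholders, not the paper's numbers).

```python
def estimate_energy(n_macs: int, n_sram_acc: int, n_dram_acc: int,
                    e_mac: float = 1.0, e_sram: float = 6.0, e_dram: float = 200.0) -> float:
    """Toy energy model: computation plus on-chip and off-chip memory traffic,
    normalized to the cost of one MAC. Off-chip (DRAM) accesses dominate, which
    is why shrinking the weight footprint via weight sharing reduces the
    estimated energy."""
    return n_macs * e_mac + n_sram_acc * e_sram + n_dram_acc * e_dram
```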

Finally, we evaluated Deep k-Means across several CNN models in terms of both compression ratio and energy reduction, observing promising results without incurring accuracy loss.

PyTorch Models

  • Wide ResNet
  • LeNet-Caffe-5

Dependencies

  • Python 3.5

Testing Deep k-Means

  • Wide ResNet
python WideResNet_Deploy.py

Filters Visualization

Sample visualization of Wide ResNet (Conv2) filters, shown as a 2×2 grid of images (images omitted here):

  • Pre-Trained Model (Before Comp.)
  • Pre-Trained Model (After Comp.)
  • Deep k-Means Re-Trained Model (Before Comp.)
  • Deep k-Means Re-Trained Model (After Comp.)

Citation

If you find this code useful, please cite the following paper:

@inproceedings{deepkmeans,
    title={Deep k-Means: Re-Training and Parameter Sharing with Harder Cluster Assignments for Compressing Deep Convolutions},
    author={Wu, Junru and Wang, Yue and Wu, Zhenyu and Wang, Zhangyang and Veeraraghavan, Ashok and Lin, Yingyan},
    booktitle={International Conference on Machine Learning (ICML)},
    year={2018}
}

Acknowledgment

We would like to thank the author of libKMCUDA, a CUDA-based k-means library, without which we would not have been able to run large-scale k-means efficiently.
