PyTorch implementation of [1611.06440] Pruning Convolutional Neural Networks for Resource Efficient Inference

This demonstrates pruning a VGG16-based classifier that classifies a small dogs/cats dataset.

Pruning reduced the CPU inference time by a factor of 3 and the model size by a factor of 4.

For more details you can read the blog post.

At each pruning step, 512 filters are removed from the network.
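
For reference, the paper ranks filters with a first-order Taylor criterion: a filter's importance is the absolute mean of its activation times its gradient, normalized per layer. Below is a minimal sketch of that criterion, not the repository's exact code; `activation` and `gradient` are assumed to have been captured with hooks during a forward/backward pass:

```python
import torch

def taylor_rank(activation, gradient):
    # activation, gradient: (batch, channels, height, width) tensors
    # captured for one conv layer during a forward/backward pass.
    # Importance per filter: |mean over batch and spatial dims of a * g|,
    # then L2-normalized across the layer, as suggested in the paper.
    values = (activation * gradient).mean(dim=(0, 2, 3)).abs()
    return values / (values.norm() + 1e-8)
```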

Usage

This repository uses the PyTorch ImageFolder loader, so it assumes the images of each category are in a separate directory:

```
Train/
    dogs/
    cats/
Test/
    dogs/
    cats/
```

The images were taken from here, but you should try training this on your own data and see if it works!
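
A minimal sketch of loading this layout with torchvision's ImageFolder (the directory name, batch size, and transform settings here are assumptions, not the repository's exact configuration):

```python
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # VGG16 expects 224x224 inputs
    transforms.ToTensor(),
])

# ImageFolder assigns one class label per subdirectory (dogs/, cats/)
train_set = datasets.ImageFolder("Train", transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
```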

Training: `python finetune.py --train`

Pruning: `python finetune.py --prune`
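
At a high level, `--prune` alternates between ranking filters, removing a batch of them, and fine-tuning to recover accuracy. A rough sketch of that loop follows; `rank_filters` and `finetune` are hypothetical stand-ins rather than functions from this repository, while `prune_vgg16_conv_layer` is the function from prune.py:

```python
filters_per_step = 512   # matches the step size stated above
num_pruning_steps = 5    # hypothetical; choose based on the target model size

for step in range(num_pruning_steps):
    # Rank all filters and take the least important ones (hypothetical helper)
    prune_targets = rank_filters(model, train_loader)[:filters_per_step]
    for layer_index, filter_index in prune_targets:
        model = prune_vgg16_conv_layer(model, layer_index, filter_index)
    # Recover accuracy before the next pruning step (hypothetical helper)
    finetune(model, train_loader, epochs=10)
```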

TBD

  • Change the pruning to be done in one pass (see the sketch after this list). Currently each of the 512 filters is pruned sequentially:

    ```python
    for layer_index, filter_index in prune_targets:
        model = prune_vgg16_conv_layer(model, layer_index, filter_index)
    ```

    This is inefficient since allocating new layers, especially fully connected layers with lots of parameters, is slow.

    In principle this can be done in a single pass.

  • Change `prune_vgg16_conv_layer` to support additional architectures. The most immediate one would be VGG with batch norm.
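
For the first TBD item, here is a sketch of the single-pass idea: group the pruning targets by layer, then rebuild each affected layer once. `prune_vgg16_conv_layers` is a hypothetical batched variant of `prune_vgg16_conv_layer`, not a function in this repository:

```python
from collections import defaultdict

# Group the filter indices by layer so each layer is reallocated only once.
filters_by_layer = defaultdict(list)
for layer_index, filter_index in prune_targets:
    filters_by_layer[layer_index].append(filter_index)

for layer_index, filter_indices in filters_by_layer.items():
    # Note: these indices must refer to the original layer. Sequential
    # pruning shifts indices after each removal; batched pruning must not.
    model = prune_vgg16_conv_layers(model, layer_index, sorted(filter_indices))
```

For the second item, supporting VGG with batch norm would additionally require dropping the matching entries from the following BatchNorm2d layer's `weight`, `bias`, `running_mean`, and `running_var` whenever a conv filter is removed.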
