
IIGROUP/CNN-FCF

This is a PyTorch implementation of our paper "Compressing Convolutional Neural Networks via Factorized Convolutional Filters", published in CVPR 2019.

Above is an overview of the workflow of filter pruning on the l-th layer, where the dotted green cubes indicate the pruned filters. (Top): Traditional pruning consists of three sequential stages: pre-training, selecting filters according to a ranking criterion, and fine-tuning. (Bottom): Our method conducts filter learning and filter selection jointly, by training factorized convolutional filters.
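
To make the idea concrete, below is a minimal PyTorch sketch of a factorized convolutional layer: each filter is paired with a per-filter selection variable, and the convolution uses their product, so the filters and the selection variables are learned in the same training run. This is only an illustration of the concept, not the code in this repository; the class name `FactorizedConv2d` and the continuous (relaxed) selection vector `v` are our own, the paper's binary constraints and the associated constrained optimization are not shown, and the sketch targets a recent PyTorch API rather than the PyTorch 0.3.1 pinned in the requirements below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorizedConv2d(nn.Module):
    """Conceptual sketch: filters factorized as (kernel) * (per-filter selection variable)."""

    def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_channels, in_channels, kernel_size, kernel_size))
        nn.init.kaiming_normal_(self.weight)
        # One selection variable per output filter; a filter whose variable
        # reaches zero is effectively pruned.
        self.v = nn.Parameter(torch.ones(out_channels))
        self.stride, self.padding = stride, padding

    def forward(self, x):
        # Factorized filters: scale each filter by its selection variable, so
        # filter weights and filter selection are optimized jointly.
        w = self.weight * self.v.view(-1, 1, 1, 1)
        return F.conv2d(x, w, stride=self.stride, padding=self.padding)
```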

Table of Contents

  • Requirements
  • Inference checkpoint files
  • Training FCF models
  • Finetuning
  • Inference
  • Running time analysis
  • Citation

Requirements

  • Anaconda
  • Python 3.6
  • PyTorch 0.3.1
  • TorchVision 0.2.0
  • OSQP

Inference checkpoint files

The inference model files can be found on Google Drive and can be used to reproduce the results of our paper.
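
If you want to inspect a checkpoint directly rather than run the inference scripts below, a loading sketch might look like the following. The import path and file name are hypothetical placeholders (use the model definition and file names matching the downloaded checkpoints).

```python
import torch
from models import resnet56  # hypothetical import; use the model definition matching the checkpoint

model = resnet56()
# The file name is a placeholder for one of the downloaded checkpoint files.
state = torch.load("cifar_fcf_resnet56.pth", map_location=lambda storage, loc: storage)
# Some checkpoints wrap the weights, e.g. {"state_dict": ...}; unwrap if needed.
model.load_state_dict(state)
model.eval()
```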

Training FCF models

Training CIFAR-10

sh ./scripts/train_cifar_fcf_resnet*.sh

Training ImageNet

sh ./scripts/train_imagenet_fcf_resnet*.sh

Note: for CIFAR-10, * indicates 20, 32, 56, or 110; for ImageNet, * indicates 34 or 50.

Finetuning

Due to numerical issues, the model still changes slightly after optimization, so we usually finetune it to recover the model performance.

Finetuning CIFAR-10

sh ./scripts/finetune_cifar_resnet*.sh

Finetuning ImageNet

sh ./scripts/finetune_imagenet_resnet*.sh

Inference

Reproduce the CIFAR-10 results in our paper

sh ./scripts/inference_cifar_resnet*.sh

Reproduce the ImageNet results in our paper

sh ./scripts/inference_imagenet_resnet*.sh

Running time analysis

We now analyze the running time reduction rate of our method. Since the convolution of each filter runs independently on the GPU, with dozens of processes executing in parallel, we cannot obtain a meaningful real-time reduction rate on the GPU. The following experiments are therefore conducted on the CPU with ResNet-34.

Single layer

We first present the single-layer running time reduction rate. Our customized convolution is composed of squeeze, conv, and expand operations; the last three columns of the table (Squeeze, Conv, Expand) give the proportion of each of these operations within the customized convolution.

| Theoretical FLOPs ↓ | Standard running time ↓ | Customized running time ↓ | Squeeze | Conv | Expand |
|---|---|---|---|---|---|
| 26.04% | 17.63% | 13.42% | 2.76% | 92.52% | 4.72% |
| 43.75% | 34.71% | 30.64% | 2.91% | 91.74% | 5.35% |
| 57.75% | 42.19% | 40.88% | 3.01% | 91.16% | 5.82% |
| 75.00% | 65.70% | 59.20% | 2.27% | 92.04% | 5.69% |

Note:

  1. Theoretical FLOPs ↓ denotes the theoretical FLOPs reduction rate.
  2. Standard running time ↓ denotes the standard convolution running time reduction rate.
  3. Customized running time ↓ denotes the customized convolution running time reduction rate.

As shown in the table, the real-time reduction rate is always lower than the theoretical FLOPs reduction rate, which may be due to I/O delay and buffer transfers on the hardware. Our customized convolution spends additional running time on the tensor squeeze and expand operations, so its real-time ↓ is slightly lower than the standard convolution real-time ↓.
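
For reference, a minimal sketch of how such a squeeze-conv-expand layer can be built in PyTorch is shown below. It is not the customized convolution from this repository, just an illustration of why the extra squeeze and expand steps add overhead: the dense convolution only runs over the kept channels, and its output is scattered back to the full channel dimension. The class and argument names (`PrunedConv2d`, `in_idx`, `out_idx`) are our own, and the code targets a recent PyTorch.

```python
import torch
import torch.nn as nn


class PrunedConv2d(nn.Module):
    """Dense convolution over the kept channels only, padded back to full width."""

    def __init__(self, in_channels, out_channels, kernel_size, in_idx, out_idx, stride=1, padding=0):
        super().__init__()
        # Indices of the input/output channels that survive pruning.
        self.register_buffer("in_idx", torch.as_tensor(in_idx, dtype=torch.long))
        self.register_buffer("out_idx", torch.as_tensor(out_idx, dtype=torch.long))
        self.out_channels = out_channels
        # The dense kernel only covers the kept channels.
        self.conv = nn.Conv2d(len(in_idx), len(out_idx), kernel_size,
                              stride=stride, padding=padding, bias=False)

    def forward(self, x):
        # Squeeze: keep only the un-pruned input channels.
        x = x.index_select(1, self.in_idx)
        # Conv: dense convolution on the reduced tensor.
        y = self.conv(x)
        # Expand: scatter the outputs back to their original channel slots,
        # filling pruned channels with zeros so later layers see the full shape.
        out = y.new_zeros(y.size(0), self.out_channels, y.size(2), y.size(3))
        out.index_copy_(1, self.out_idx, y)
        return out


# Hypothetical usage: keep 32 of 64 input channels and 48 of 128 output filters.
# layer = PrunedConv2d(64, 128, 3, in_idx=list(range(32)), out_idx=list(range(48)), padding=1)
```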

The model

We present the running time of each pruned model relative to its reference model; the reduction rates are shown below. In addition to the whole model, we also report the FLOPs ↓ and real-time ↓ of the pruned convolution layers alone, because we only prune the convolution layers in the ResNet structure to obtain a sparse pruned model.

| Model FLOPs ↓ | Model running time ↓ | Convolution layers FLOPs ↓ | Convolution layers running time ↓ |
|---|---|---|---|
| 26.83% | 10.90% | 27.95% | 16.13% |
| 41.37% | 16.86% | 43.10% | 23.77% |
| 54.87% | 31.06% | 57.16% | 41.12% |
| 66.05% | 42.59% | 68.80% | 55.09% |

As shown in the table, the convolution layers running time ↓ is lower than the theoretical convolution layers FLOPs ↓; the reason is similar to the single-layer results. Moreover, due to the time spent in the BN, ReLU, and fully-connected layers, the model running time ↓ is lower than the convolution layers running time ↓. In general, the running time reduction of the pruned convolution layers is consistent with the theoretical FLOPs ↓.
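
The reduction rates above are simply one minus the ratio of pruned cost to reference cost. A minimal sketch of how the CPU running time side of that ratio can be measured is given below; the function name, warmup/iteration counts, and input shape are our own choices rather than the repository's benchmarking code, and the sketch assumes a recent PyTorch.

```python
import time
import torch


def cpu_time(model, x, warmup=5, iters=50):
    """Average forward time in seconds on CPU."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
    return (time.perf_counter() - start) / iters


# Hypothetical usage: `reference` is the unpruned ResNet-34 and `pruned` is the
# model with customized convolutions; both are evaluated on the same CPU input.
# x = torch.randn(1, 3, 224, 224)
# reduction = 1.0 - cpu_time(pruned, x) / cpu_time(reference, x)
# print("running time reduction: {:.2%}".format(reduction))
```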

Citation

@InProceedings{Li_2019_CVPR,
author = {Li, Tuanhui and Wu, Baoyuan and Yang, Yujiu and Fan, Yanbo and Zhang, Yong and Liu, Wei},
title = {Compressing Convolutional Neural Networks via Factorized Convolutional Filters},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}
