# AutoSlim

[AutoSlim: Towards One-Shot Architecture Search for Channel Numbers](https://arxiv.org/abs/1903.11728)

## Abstract

We study how to set channel numbers in a neural network to achieve better accuracy under constrained resources (e.g., FLOPs, latency, memory footprint or model size). A simple and one-shot solution, named AutoSlim, is presented. Instead of training many network samples and searching with reinforcement learning, we train a single slimmable network to approximate the network accuracy of different channel configurations. We then iteratively evaluate the trained slimmable model and greedily slim the layer with minimal accuracy drop. By this single pass, we can obtain the optimized channel configurations under different resource constraints. We present experiments with MobileNet v1, MobileNet v2, ResNet-50 and RL-searched MNasNet on ImageNet classification. We show significant improvements over their default channel configurations. We also achieve better accuracy than recent channel pruning methods and neural architecture search methods. Notably, by setting optimized channel numbers, our AutoSlim-MobileNet-v2 at 305M FLOPs achieves 74.2% top-1 accuracy, 2.4% better than default MobileNet-v2 (301M FLOPs), and even 0.2% better than RL-searched MNasNet (317M FLOPs). Our AutoSlim-ResNet-50 at 570M FLOPs, without depthwise convolutions, achieves 1.3% better accuracy than MobileNet-v1 (569M FLOPs).

*(Figure: AutoSlim pipeline overview)*
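The greedy slimming loop from the abstract can be sketched as follows. This is a self-contained toy that only mirrors the control flow: in the real method, `estimate_accuracy` would be the validation accuracy of the trained slimmable model and `estimate_flops` a real FLOPs counter; both names here are stand-ins.

```python
# Self-contained toy of AutoSlim's greedy slimming loop (illustrative only).
# A real run evaluates a trained slimmable network on validation data; the
# two estimator functions below are stand-ins so the loop can be executed.

def estimate_flops(channels):
    # Toy proxy: FLOPs grow with the product of adjacent layer widths.
    return sum(a * b for a, b in zip(channels, channels[1:]))

def estimate_accuracy(channels):
    # Toy proxy: wider layers help, with diminishing returns.
    return sum(c ** 0.5 for c in channels)

def greedy_slim(channels, flops_target, step=8, min_channels=8):
    """Iteratively slim the layer whose shrinkage costs the least accuracy."""
    channels = list(channels)
    while estimate_flops(channels) > flops_target:
        best_idx, best_acc = None, float("-inf")
        for i, c in enumerate(channels):
            if c - step < min_channels:
                continue
            trial = channels[:i] + [c - step] + channels[i + 1:]
            acc = estimate_accuracy(trial)  # val-set accuracy in practice
            if acc > best_acc:
                best_idx, best_acc = i, acc
        if best_idx is None:
            break  # every layer is already at its minimum width
        channels[best_idx] -= step
    return channels

print(greedy_slim([64, 128, 256, 512], flops_target=150_000))
```

Running the loop once per FLOPs target (530M, 320M, 220M in the configs below) yields one optimized channel configuration per resource constraint.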

## Introduction

### Supernet pre-training on ImageNet

```bash
python ./tools/mmcls/train_mmcls.py \
  configs/pruning/autoslim/autoslim_mbv2_supernet_8xb256_in1k.py \
  --work-dir your_work_dir
```
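This README does not describe the training recipe itself, but the slimmable-network line of work that AutoSlim builds on trains each batch at several widths at once (the "sandwich rule": largest width, smallest width, plus a few random ones). A rough sketch, where `model.set_width(w)` is a hypothetical hook for switching the active channel configuration; MMRazor wires this differently internally:

```python
import random

# Sketch of one slimmable-training step following the "sandwich rule":
# accumulate gradients at max width, min width, and a few random widths,
# then take a single optimizer step.

def train_step(model, images, labels, optimizer, criterion,
               min_w=0.35, max_w=1.5, n_random=2):
    optimizer.zero_grad()
    widths = [max_w, min_w] + [random.uniform(min_w, max_w)
                               for _ in range(n_random)]
    for w in widths:
        model.set_width(w)   # hypothetical width switch
        loss = criterion(model(images), labels)
        loss.backward()      # gradients accumulate across widths
    optimizer.step()
```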

### Search for a subnet on the trained supernet

```bash
python ./tools/mmcls/search_mmcls.py \
  configs/pruning/autoslim/autoslim_mbv2_search_8xb1024_in1k.py \
  your_pre-training_checkpoint_path \
  --work-dir your_work_dir
```

### Subnet retraining on ImageNet

```bash
python ./tools/mmcls/train_mmcls.py \
  configs/pruning/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py \
  --work-dir your_work_dir \
  --cfg-options algorithm.channel_cfg=configs/pruning/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml,configs/pruning/autoslim/AUTOSLIM_MBV2_320M_OFFICIAL.yaml,configs/pruning/autoslim/AUTOSLIM_MBV2_220M_OFFICIAL.yaml
```
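Passing three comma-separated YAML files retrains all three searched subnets (530M, 320M and 220M FLOPs) in a single job. The `--cfg-options` override is equivalent to setting the key in the config file itself; a sketch of that in-config form, assuming the usual OpenMMLab dict-style layout of the `algorithm` field:

```python
# Equivalent in-config form of the --cfg-options override above.
# OpenMMLab configs are Python files; the `algorithm` dict is assumed to
# follow the layout implied by the `algorithm.channel_cfg` key path.
algorithm = dict(
    channel_cfg=[
        'configs/pruning/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml',
        'configs/pruning/autoslim/AUTOSLIM_MBV2_320M_OFFICIAL.yaml',
        'configs/pruning/autoslim/AUTOSLIM_MBV2_220M_OFFICIAL.yaml',
    ],
)
```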

### Split checkpoint

```bash
python ./tools/model_converters/split_checkpoint.py \
  configs/pruning/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py \
  your_retraining_checkpoint_path \
  --channel-cfgs configs/pruning/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml configs/pruning/autoslim/AUTOSLIM_MBV2_320M_OFFICIAL.yaml configs/pruning/autoslim/AUTOSLIM_MBV2_220M_OFFICIAL.yaml
```
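After splitting, each resulting checkpoint should hold the weights of exactly one subnet. A quick sanity check with plain PyTorch (the path is a placeholder; the actual file names depend on the script's output naming):

```python
import torch

# Load one of the split checkpoints (placeholder path) and peek at its
# contents; a standard MM*-style checkpoint stores weights under
# 'state_dict', so fall back to the object itself otherwise.
ckpt = torch.load('your_split_checkpoint_path', map_location='cpu')
state = ckpt.get('state_dict', ckpt)
print(f'{len(state)} tensors')
for name, tensor in list(state.items())[:5]:
    print(name, tuple(tensor.shape))
```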

### Test a subnet

```bash
python ./tools/mmcls/test_mmcls.py \
  configs/pruning/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py \
  your_split_checkpoint_path --metrics accuracy \
  --cfg-options algorithm.channel_cfg=configs/pruning/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml  # or modify the config directly
```
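As the trailing comment suggests, the subnet choice can also be written into the config instead of passed on the command line. A sketch using mmcv's standard `Config` API; the `algorithm.channel_cfg` key path is taken from the command above:

```python
from mmcv import Config

# Load the subnet config and pin it to the 530M-FLOPs channel cfg,
# mirroring the --cfg-options override from the command above.
cfg = Config.fromfile(
    'configs/pruning/autoslim/autoslim_mbv2_subnet_8xb256_in1k.py')
cfg.algorithm.channel_cfg = (
    'configs/pruning/autoslim/AUTOSLIM_MBV2_530M_OFFICIAL.yaml')
print(cfg.algorithm.channel_cfg)
```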

## Results and models

### Subnet retrain

| Supernet | Params (M) | FLOPs (G) | Top-1 (%) | Top-5 (%) | Config | Download | Subnet | Remark |
| :------------------ | :--: | :--: | :---: | :---: | :----: | :----------: | :-----: | :------------------- |
| MobileNet v2 (x1.5) | 6.5  | 0.53 | 74.23 | 91.74 | config | model \| log | channel | official channel cfg |
| MobileNet v2 (x1.5) | 5.77 | 0.32 | 72.73 | 90.83 | config | model \| log | channel | official channel cfg |
| MobileNet v2 (x1.5) | 4.13 | 0.22 | 71.39 | 90.08 | config | model \| log | channel | official channel cfg |

Note that we ran the official code, and the Top-1 accuracies of the models with the official channel cfgs are 73.8%, 72.5% and 71.1%, respectively. There are three differences between our implementation and the official one:

1. Our implementation of label smoothing differs slightly from the official one.
2. Lighting is not used in our data pipeline. (Lighting is a data augmentation that adjusts image colors with AlexNet-style PCA jitter.)
3. We do not recalibrate BN statistics after training (see the sketch after this list).
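For reference, BN recalibration (point 3) usually means resetting each BatchNorm layer's running statistics and re-estimating them with a few forward passes through the chosen subnet. A generic PyTorch sketch of that step, not the official implementation:

```python
import torch

@torch.no_grad()
def recalibrate_bn(model, loader, num_batches=100):
    """Re-estimate BN running stats for the currently active subnet."""
    for m in model.modules():
        if isinstance(m, torch.nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()  # zero running mean/var, reset counter
            m.momentum = None        # use a cumulative moving average
    model.train()                    # BN updates its stats only in train mode
    for i, (images, _) in enumerate(loader):
        if i >= num_batches:
            break
        model(images)                # forward pass updates the BN buffers
    model.eval()
```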

## Citation

```bibtex
@article{yu2019autoslim,
  title={Autoslim: Towards one-shot architecture search for channel numbers},
  author={Yu, Jiahui and Huang, Thomas},
  journal={arXiv preprint arXiv:1903.11728},
  year={2019}
}
```