This repository contains the official PyTorch code and models for the paper Any-Precision Deep Neural Networks. It requires:
- Python 3.7
- PyTorch 1.1.0
- torchvision 0.2.1
- tensorboardX
- gpustat
Run the script below; the CIFAR-10 dataset will be downloaded automatically.
./train_cifar10.sh
Run the script below; the SVHN dataset will be downloaded automatically.
./train_svhn.sh
Before running the script below, you need to manually download ImageNet and arrange it according to data_paths in dataset/data.py.
./train_imagenet.sh
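For orientation, data_paths might look something like the following. This is a hypothetical sketch only; the actual keys and layout are defined in dataset/data.py, which you should edit to point at your local copy:

```python
# Hypothetical illustration -- the real mapping lives in dataset/data.py
# and its keys may differ. Edit that file to point at your ImageNet copy.
data_paths = {
    'imagenet': '/path/to/imagenet',  # directory with train/ and val/ subfolders
}
```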
To test a trained model, run the corresponding training script for a single epoch with the pretrained model loaded and the training step skipped.
Because of the following changes to the training hyperparameters, the numbers below may differ from those in the paper.
- Initial learning rate for any-precision models: 0.1 -> 0.5.
- We use ReLU for the 32-bit model instead of Clamp (check here).
- We use the tanh nonlinearity for the 32-bit model for consistency with the other precisions (check here).
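The tanh change above concerns how weights are squashed before quantization. As an illustrative sketch only (assuming DoReFa-style uniform weight quantization, which this line of work builds on; the repository's actual quantizer may differ), k-bit quantization with a tanh reparameterization can look like:

```python
import torch

def quantize_weights(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Illustrative k-bit weight quantization using a tanh reparameterization
    (DoReFa-Net style). This is a sketch, not the repo's exact implementation."""
    if bits >= 32:
        return w  # full precision: pass weights through unchanged
    # Squash weights with tanh, then normalize into [0, 1].
    t = torch.tanh(w)
    w01 = t / (2 * t.abs().max()) + 0.5
    # Uniform quantization to 2^bits levels in [0, 1].
    levels = float(2 ** bits - 1)
    w01_q = torch.round(w01 * levels) / levels
    # Map back to [-1, 1].
    return 2 * w01_q - 1
```

In training, the non-differentiable rounding is typically bypassed with a straight-through estimator, e.g. `w01 + (w01_q - w01).detach()`, so gradients flow to the underlying full-precision weights.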
Top-1 accuracy (%) on CIFAR-10:

Models | 1 bit | 2 bit | 4 bit | 8 bit | FP32 |
---|---|---|---|---|---|
Resnet-20 | 91.50 | 93.26 | 93.62 | 93.42 | 93.58 |
Resnet-20-Any (hard<sup>1</sup>) | 91.48 | 93.74 | 93.87 | 93.92 | 93.71 |
Resnet-20-Any (soft<sup>2</sup>) | 91.18 | 93.51 | 93.21 | 93.13 | 93.63 |
Resnet-20-Any (recursive<sup>3</sup>) | 91.89 | 93.90 | 93.86 | 93.75 | 94.11 |
<sup>1</sup>: Softmax cross-entropy loss.

<sup>2</sup>: Softmax cross-entropy loss, with the FP32 model's prediction as supervision.

<sup>3</sup>: Softmax cross-entropy loss, with the higher-precision model as supervision for the lower-precision model.
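The three objectives above can be sketched as follows. This is an illustrative reimplementation under our own naming (`soft_cross_entropy` and `any_precision_loss` are helper names assumed here, not the repo's), not the repository's exact code:

```python
import torch
import torch.nn.functional as F

def soft_cross_entropy(student_logits, teacher_logits):
    """Cross entropy against the teacher's softmax output (standard
    distillation-style supervision; helper name is ours, not the repo's)."""
    teacher_probs = F.softmax(teacher_logits, dim=1)
    log_student = F.log_softmax(student_logits, dim=1)
    return -(teacher_probs * log_student).sum(dim=1).mean()

def any_precision_loss(logits_by_bits, labels, mode="recursive"):
    """logits_by_bits: dict mapping bit-width (e.g. 1, 2, 8, 32) to logits.

    hard:      every precision is trained on the ground-truth labels.
    soft:      highest precision uses labels; lower precisions mimic its prediction.
    recursive: highest precision uses labels; each lower precision mimics the
               next-higher precision.
    """
    bits = sorted(logits_by_bits)  # ascending bit-widths
    # The highest precision is always supervised by the ground truth.
    loss = F.cross_entropy(logits_by_bits[bits[-1]], labels)
    for lo, hi in zip(bits[:-1], bits[1:]):
        if mode == "hard":
            loss = loss + F.cross_entropy(logits_by_bits[lo], labels)
        elif mode == "soft":
            teacher = logits_by_bits[bits[-1]].detach()
            loss = loss + soft_cross_entropy(logits_by_bits[lo], teacher)
        else:  # recursive
            teacher = logits_by_bits[hi].detach()
            loss = loss + soft_cross_entropy(logits_by_bits[lo], teacher)
    return loss
```

The `detach()` on the teacher logits stops the distillation terms from pushing gradients back into the higher-precision branch, which is a common design choice for this kind of supervision.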
Top-1 accuracy (%) on SVHN:

Models | 1 bit | 2 bit | 4 bit | 8 bit | FP32 |
---|---|---|---|---|---|
SVHN | 90.94 | 96.45 | 97.04 | 97.04 | 97.10 |
SVHN-Any (hard) | 88.98 | 95.54 | 96.71 | 96.72 | 96.60 |
SVHN-Any (soft) | 88.49 | 94.62 | 96.13 | 96.20 | 96.17 |
SVHN-Any (recursive) | 88.21 | 94.94 | 96.19 | 96.22 | 96.29 |
Top-1 accuracy (%) on ImageNet:

Models | 1 bit | 2 bit | 4 bit | 8 bit | FP32 |
---|---|---|---|---|---|
Resnet-50 | 57.83<sup>4</sup> | 68.74<sup>4</sup> | 74.12<sup>5</sup> | 74.96<sup>5</sup> | 75.95<sup>5</sup> |
Resnet-50-Any (recursive) | 58.77 | 71.66 | 73.84 | 74.07 | 74.63 |
<sup>4</sup>: Weight decay 1e-5.

<sup>5</sup>: Weight decay 1e-4.
If you find this repository helpful, please consider citing our paper:
@article{yu2019any,
  title={Any-Precision Deep Neural Networks},
  author={Yu, Haichao and Li, Haoxiang and Shi, Honghui and Huang, Thomas S and Hua, Gang},
  journal={arXiv preprint arXiv:1911.07346},
  year={2019}
}
Please feel free to contact Haichao Yu at haichao.yu@outlook.com with any issues.