# DoReFa & PACT

- DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients
- PACT: Parameterized Clipping Activation for Quantized Neural Networks
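For reference, DoReFa's k-bit weight quantization can be sketched as follows. This is a minimal illustration of the paper's scheme (tanh squashing, rescale to [0, 1], uniform quantization with a straight-through estimator, map back to [-1, 1]), not this repository's exact code:

```python
import torch

def quantize_k(x, k):
    """Uniform k-bit quantizer on [0, 1] with a straight-through estimator."""
    n = float(2 ** k - 1)
    xq = torch.round(x * n) / n
    return x + (xq - x).detach()  # forward: xq, backward: identity gradient

def dorefa_weight(w, w_bit):
    """Sketch of DoReFa k-bit weight quantization:
    squash with tanh, rescale to [0, 1], quantize, map back to [-1, 1]."""
    t = torch.tanh(w)
    t = t / (2 * t.abs().max()) + 0.5   # into [0, 1]
    return 2 * quantize_k(t, w_bit) - 1  # back to [-1, 1]

wq = dorefa_weight(torch.randn(8, 8), w_bit=8)
assert wq.abs().max() <= 1
```

Because the rounding step has zero gradient almost everywhere, the `detach()` trick passes gradients straight through the quantizer during backpropagation.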

## About PACT

PACT replaces the ReLU activation function; it is not used inside the convolution itself.
If PACT is applied inside QuantConv2d, accuracy drops sharply.
From the paper: "The loaded network is trained with the proposed quantization scheme in which ReLU is replaced with the proposed parameterized clipping ActFn for each of its seven convolution layers."
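A minimal sketch of the PACT activation as a drop-in ReLU replacement, assuming the paper's formulation: a ReLU clipped at a learnable upper bound alpha, followed by k-bit uniform quantization with a straight-through estimator (the initial alpha value here is an illustrative choice, not the repo's setting):

```python
import torch
import torch.nn as nn

class PACT(nn.Module):
    """Sketch of the PACT activation: clip(x, 0, alpha) with a learnable
    alpha, then k-bit uniform quantization with a straight-through estimator."""
    def __init__(self, a_bit=8, alpha_init=10.0):
        super().__init__()
        self.a_bit = a_bit
        # alpha is learned jointly with the network weights
        self.alpha = nn.Parameter(torch.tensor(alpha_init))

    def forward(self, x):
        # clip(x, 0, alpha) written so that gradients also flow to alpha
        y = 0.5 * (x.abs() - (x - self.alpha).abs() + self.alpha)
        # uniform quantization to 2^a_bit - 1 levels on [0, alpha];
        # detach() passes gradients straight through the rounding
        scale = (2 ** self.a_bit - 1) / self.alpha
        yq = torch.round(y * scale) / scale
        return y + (yq - y).detach()

act = PACT(a_bit=8)
out = act(torch.randn(4, 16))
assert out.min() >= 0 and out.max() <= act.alpha
```

Since activations are clipped to [0, alpha] before quantization, learning alpha lets the network trade clipping error against quantization resolution.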

## Commit log

2023-01-08: upload DoReFa and PACT.

I am not the author of the papers; this is an unofficial implementation of DoReFa and PACT.

Tested with pytorch==1.11.0+cu113.

Train a 32-bit float model first, then fine-tune a low bit-width quantized (QAT) model by loading the trained 32-bit float checkpoint as initialization.

The training dataset is CIFAR10 and the model is a modified ResNet-18.
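The two-stage workflow above can be sketched as follows. The toy modules here are placeholders, not the repo's actual ResNet-18: the point is that the QAT model reuses the float model's layer names, so the float checkpoint loads directly, while quantizer-only parameters (e.g. PACT's learnable alpha) are skipped via `strict=False`:

```python
import io
import torch
import torch.nn as nn

class FloatNet(nn.Module):
    """Stand-in for the full-precision model."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)

class QATNet(nn.Module):
    """Stand-in for the quantized model: same layer names plus quantizer state."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, 3)                 # same name as in FloatNet
        self.alpha = nn.Parameter(torch.tensor(10.0))  # PACT clipping bound

# stage 1: train the float model, then save its checkpoint
fnet = FloatNet()
buf = io.BytesIO()
torch.save(fnet.state_dict(), buf)
buf.seek(0)

# stage 2: initialize the QAT model from the float checkpoint, then fine-tune
qnet = QATNet()
missing, unexpected = qnet.load_state_dict(torch.load(buf), strict=False)
assert missing == ["alpha"]  # only the new quantizer parameter has no match
assert torch.equal(qnet.conv.weight, fnet.conv.weight)
```

`load_state_dict(..., strict=False)` returns the lists of missing and unexpected keys, which is a convenient sanity check that only quantizer parameters were left uninitialized.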

## Training results

All runs below use a_bit=8, w_bit=8.

| version | learning rate schedule (epoch: lr) | batchsize | Accuracy | model |
| --- | --- | --- | --- | --- |
| Float 32-bit | <=66: 0.1, <=86: 0.01, <=99: 0.001, <=112: 0.0001 | 128 | 92.6 | https://www.aliyundrive.com/s/6B2AZ45fFjx |
| dorefa | <=31: 0.1, <=51: 0.01, <=71: 0.001 | 128*7+30 | 95 | https://www.alipan.com/s/WhZUqDUh4UB |
| pact | <=31: 0.1, <=51: 0.01, <=71: 0.001 | 128*7+30 | 95 | https://www.alipan.com/s/F7ocFVSZwMb |

## References

- https://github.com/ZouJiu1/LSQplus
- https://github.com/666DZY666/micronet
- https://github.com/hustzxd/LSQuantization
- https://github.com/zhutmost/lsq-net
- https://github.com/Zhen-Dong/HAWQ
- https://github.com/KwangHoonAn/PACT
- https://github.com/Jermmy/pytorch-quantization-demo
