FAB: FPGA-Accelerated Fully-Pipelined Bottleneck Architecture with Batching for High-Performance MobileNetv2 Inference
You can download the pretrained weights and the extracted data from the following URL:
https://drive.google.com/drive/folders/1cTyFCJMDTP-DIKNH75waVvRhpdTzzI-m?usp=sharing
If the training dataset root is not set, the ImageNet task is not recognized, which causes an FC-layer size-mismatch error.
python main_eval.py --common.config-file {config_url} --model.classification.pretrained ./base_weight/mobilenetv2-1.00.pt

Available config files for {config_url}:
./config/classification/imagenet/mobilenetv2_ptq.yaml
./config/classification/imagenet/mobilenetv2.yaml

quant.quant : Whether to perform quantization
quant.quant_method : Unused argument (has no effect)
quant.weight_bit : Weight quantization bit-width
quant.activation_bit : Activation quantization bit-width
quant.calibration_a : Layer-wise calibration (activations)
quant.calibration_w : Layer-wise calibration (weights)
quant.calibration_c : Channel-wise calibration
quant.calib_iter : Number of calibration iterations

/cvnets/modules/mobilenetv2.py
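The quant.* arguments above would typically live in the PTQ config file; a minimal sketch of what that section might look like, assuming the keys nest directly under a quant: block (the exact nesting and default values are assumptions, not taken from the repo):

```yaml
# Hypothetical excerpt of mobilenetv2_ptq.yaml; key names follow the
# arguments documented above, values are illustrative only.
quant:
  quant: true            # enable post-training quantization
  weight_bit: 8          # weight quantization bit-width
  activation_bit: 8      # activation quantization bit-width
  calibration_a: true    # layer-wise calibration (activations)
  calibration_w: true    # layer-wise calibration (weights)
  calibration_c: true    # channel-wise calibration
  calib_iter: 10         # number of calibration iterations
```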
/cvnets/ptq
main.c : Entry point of the inference flow
qact.c : Quantized activation functions
bottleneck.c : MobileNetv2 bottleneck (inverted residual) blocks
quantizer.c : Quantization/dequantization routines
conv.c : Convolution layers
utils.c : Common utility functions
fc.c : Fully-connected (classifier) layer
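As a rough illustration of the kind of routine quantizer.c provides, here is a minimal sketch of per-tensor symmetric int8 quantization; the function names and signatures are assumptions for illustration, not the repo's actual API:

```c
#include <stdint.h>

/* Hypothetical sketch: map a float to int8 given a precomputed scale.
 * Rounds to nearest and clamps to the signed 8-bit range. */
int8_t quantize(float x, float scale) {
    /* Round half away from zero without libm. */
    long q = (long)(x / scale + (x >= 0.0f ? 0.5f : -0.5f));
    if (q > 127)  q = 127;
    if (q < -128) q = -128;
    return (int8_t)q;
}

/* Recover the approximate real value from its quantized form. */
float dequantize(int8_t q, float scale) {
    return (float)q * scale;
}
```

With weight_bit/activation_bit fixed at 8, conv.c and fc.c would consume the int8 values directly and dequantize only where full precision is needed.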