BEVFusion's SparseConvolution module uses the spconv implementation from mmdet3d. The overall quantization workflow is:
- Insert Q&DQ nodes to obtain a fake-quantized PyTorch model
- Run PTQ calibration
- Run QAT training
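The Q&DQ step can be illustrated with a minimal per-tensor fake-quantization sketch. This is plain Python for illustration only, not the actual pytorch-quantization API that the tool uses:

```python
def fake_quantize(x, amax, num_bits=8):
    """Quantize-dequantize a value: round onto an INT8 grid, then map
    back to float. The network keeps running in floating point, but
    now "sees" the quantization error (hence fake quantization)."""
    qmax = 2 ** (num_bits - 1) - 1               # 127 for INT8
    scale = amax / qmax                          # symmetric quantization scale
    q = max(-qmax, min(qmax, round(x / scale)))  # quantize and clamp
    return q * scale                             # dequantize

# With a calibrated amax of 2.0, the quantization step is 2/127;
# values beyond amax are clamped to the range edge.
print(fake_quantize(0.731, amax=2.0))
print(fake_quantize(10.0, amax=2.0))   # clamped to 2.0
```

During calibration, `amax` is estimated per tensor (e.g. from activation histograms); QAT then fine-tunes the weights with these Q&DQ nodes in place.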
- Fusing BatchNorm (FuseBn) into the preceding convolution improves forward-pass performance, so this fusion must be completed before model calibration.
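BatchNorm folding rewrites the convolution's weight and bias so the BN layer becomes a no-op. A one-channel scalar sketch of the standard folding formulas (illustrative, not the repo's implementation):

```python
import math

def fuse_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm(y) = gamma * (y - mean) / sqrt(var + eps) + beta
    into a preceding conv's weight w and bias b (scalars, one channel)."""
    k = gamma / math.sqrt(var + eps)   # per-channel scale factor
    return w * k, (b - mean) * k + beta

# The fused conv matches conv followed by BN exactly:
w2, b2 = fuse_bn(w=0.5, b=0.1, gamma=1.2, beta=0.3, mean=0.4, var=0.25)
x = 2.0
conv_bn = 1.2 * ((0.5 * x + 0.1) - 0.4) / math.sqrt(0.25 + 1e-5) + 0.3
assert abs((w2 * x + b2) - conv_bn) < 1e-9
```

Calibrating the fused weights (rather than the original ones) ensures the quantization ranges match what actually runs at inference time.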
- Add and Concat layers have multiple inputs; all inputs must share the same quantizer, which reduces the number of Reformat operations.
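Sharing one quantizer across a multi-input op amounts to giving every input the same calibrated range, and therefore the same INT8 scale. A toy sketch (the `Quantizer` class and its `amax` field are illustrative, not the tool's API):

```python
class Quantizer:
    """Toy per-tensor quantizer holding only a calibrated dynamic range."""
    def __init__(self, amax):
        self.amax = amax

def share_quantizer(quantizers):
    """Replace each input's quantizer with one shared quantizer covering
    the widest calibrated range, so all inputs use the same INT8 scale
    and the engine can skip Reformat ops between them."""
    shared = Quantizer(max(q.amax for q in quantizers))
    return [shared] * len(quantizers)

# Two Add inputs calibrated to different ranges end up with one scale.
a, b = share_quantizer([Quantizer(1.5), Quantizer(3.0)])
print(a is b, a.amax)  # → True 3.0
```

Taking the maximum range is the conservative choice: neither input is clipped more aggressively than its own calibration would allow.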
- Quantizing certain layers causes a significant drop in mAP, so quantization of these layers must be disabled after calibration.
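The layer-disabling step can be sketched as a loop over per-layer sensitivity results; the layer names and threshold below are made up for illustration:

```python
def disable_sensitive_layers(layer_map_drop, threshold=1.0):
    """Return the layers whose quantized-vs-float mAP drop exceeds the
    threshold; these layers are kept in higher precision (e.g. FP16)
    after calibration instead of being quantized to INT8."""
    return [name for name, drop in layer_map_drop.items() if drop > threshold]

# Hypothetical per-layer sensitivity measured after PTQ calibration.
drops = {"camera.backbone.conv1": 0.2, "fuser.conv0": 2.7, "head.reg": 1.4}
print(disable_sensitive_layers(drops))  # → ['fuser.conv0', 'head.reg']
```

In practice the sensitivity is measured by quantizing one layer at a time and evaluating mAP on a validation subset.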
- Here is the official configuration guide.
- Set up the BEVFusion runtime environment:
# build image from dockerfile
cd CUDA-BEVFusion/bevfusion/docker
docker build . -t bevfusion
# creating containers and mapping volumes
nvidia-docker run -it -v `pwd`/../../../:/Lidar_AI_Solution \
-v /path/to/nuScenes:/data \
--shm-size 16g bevfusion
# install python dependency libraries
cd /Lidar_AI_Solution/CUDA-BEVFusion
pip install -r tool/requirements.txt
# install bevfusion
cd bevfusion
python setup.py develop
- Download model.zip from (Google Drive) or (Baidu Drive)
- Download nuScenes-example-data.zip from (Google Drive) or (Baidu Drive)
# download the model and data archives into CUDA-BEVFusion
cd CUDA-BEVFusion
# unzip the model and data archives
apt install unzip
unzip model.zip
unzip nuScenes-example-data.zip
# copy yaml to bevfusion
cp -r configs bevfusion
python qat/export-camera.py --ckpt=model/resnet50int8/bevfusion_ptq.pth
python qat/export-transfuser.py --ckpt=model/resnet50int8/bevfusion_ptq.pth
python qat/export-scn.py --ckpt=model/resnet50int8/bevfusion_ptq.pth --save=qat/onnx_int8/lidar.backbone.onnx
# export the fp16 onnx models
python qat/export-camera.py --ckpt=model/resnet50int8/bevfusion_ptq.pth --fp16
python qat/export-transfuser.py --ckpt=model/resnet50int8/bevfusion_ptq.pth --fp16
python qat/export-scn.py --ckpt=model/resnet50int8/bevfusion_ptq.pth --save=qat/onnx_fp16/lidar.backbone.onnx
- This code uses the nuScenes dataset; you need to download it in order to run PTQ.
- You can follow the tips here to prepare the data.
python qat/ptq.py --config=bevfusion/configs/nuscenes/det/transfusion/secfpn/camera+lidar/resnet50/convfuser.yaml --ckpt=model/resnet50/bevfusion-det.pth --calibrate_batch 300
cd CUDA-BEVFusion
cp qat/test-mAP-for-cuda.py bevfusion/tools
cd bevfusion
mkdir data
ln -s /path/to/nuScenes data/nuscenes
python tools/test-mAP-for-cuda.py
The accuracy of the PTQ INT8 model is already very close to FP16; work on the QAT part will follow.