UniDistill: A Universal Cross-Modality Knowledge Distillation Framework for 3D Object Detection in Bird's-Eye View
This is the official implementation of UniDistill (CVPR2023 highlight✨, 10% of accepted papers). UniDistill offers a universal cross-modality knowledge distillation framework for different teacher and student modality combinations. The core idea is aligning the intermediate BEV features and response features that are produced by all BEV detectors.
Installation
Step 0. Install PyTorch (v1.9.0).
Step 1. Install MMCV-full==1.4.2, MMDetection==2.20.2, and MMDetection3D.
Step 2. Install the remaining requirements:
pip install -r requirements.txt
Step 3. Install UniDistill (a GPU is required):
python setup.py develop
Data Preparation
Step 0. Download the official nuScenes dataset.
Step 1. Create a folder /data/dataset/ and put the dataset in it. The directory layout should be as follows:
├── data
│   ├── dataset
│   │   ├── maps
│   │   ├── samples
│   │   ├── sweeps
│   │   ├── v1.0-test
│   │   ├── v1.0-trainval
Step 2. Download the info files and put them in /data/dataset/. The directory layout should then be as follows:
├── data
│   ├── dataset
│   │   ├── maps
│   │   ├── samples
│   │   ├── sweeps
│   │   ├── v1.0-test
│   │   ├── v1.0-trainval
│   │   ├── nuscenes_test_meta.pkl
│   │   ├── nuscenes_v1.0-trainval_meta.pkl
│   │   ├── test_info.pkl
│   │   ├── train_info.pkl
│   │   ├── val_info.pkl
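Before moving on, the layout above can be sanity-checked with a short script. This is a sketch, not part of the repo; the expected entry names are taken directly from the tree shown:

```python
import os

# Entries expected under /data/dataset/ per the directory tree above.
EXPECTED = [
    "maps", "samples", "sweeps", "v1.0-test", "v1.0-trainval",
    "nuscenes_test_meta.pkl", "nuscenes_v1.0-trainval_meta.pkl",
    "test_info.pkl", "train_info.pkl", "val_info.pkl",
]

def missing_entries(root="/data/dataset"):
    """Return the expected entries that are absent from `root`."""
    return [n for n in EXPECTED if not os.path.exists(os.path.join(root, n))]
```

An empty return value means the layout matches; otherwise the list names what still needs to be downloaded or moved.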
Testing
Step 0. Download the checkpoint models.
Step 1. Generate the result. If the modality of the checkpoint is camera, run:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_centerhead_camera_exp.py -d 0-3 -b 1 -e 20 --sync_bn 1 --no-clearml --infer --ckpt <PATH_TO_CHECKPOINT>
If the modality of the checkpoint is LiDAR, use the lidar experiment file instead:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_centerhead_lidar_exp.py -d 0-3 -b 1 -e 20 --sync_bn 1 --no-clearml --infer --ckpt <PATH_TO_CHECKPOINT>
Step 2. Upload the result to the evaluation server. The result file "nuscenes_results.json" is written to a folder named "nuscenes" inside the parent folder of the tested checkpoint.
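Before uploading, it can help to verify that the result file parses and has the shape the nuScenes server expects. The top-level "meta" and "results" keys below follow the nuScenes devkit submission format, not anything specific to this repo; treat this as a hedged pre-upload check:

```python
import json

def check_submission(path):
    """Lightweight sanity check of a nuScenes detection submission file.

    Returns the number of samples with results if the file looks valid.
    """
    with open(path) as f:
        sub = json.load(f)
    # The nuScenes devkit expects top-level "meta" (modality flags) and
    # "results" (a dict mapping sample_token -> list of detection boxes).
    assert "meta" in sub and "results" in sub, "missing top-level keys"
    assert isinstance(sub["results"], dict), "'results' must map tokens to boxes"
    return len(sub["results"])
```

Run it on the generated file (e.g. `check_submission(".../nuscenes/nuscenes_results.json")`) before submitting.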
Evaluation
Step 0. Download the checkpoint models as in "Testing".
Step 1. Generate the result. If the modality of the checkpoint is camera, run:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_centerhead_camera_exp.py -d 0-3 -b 1 -e 20 --sync_bn 1 --no-clearml --eval --ckpt <PATH_TO_CHECKPOINT>
If the modality of the checkpoint is LiDAR, use the lidar experiment file instead:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_centerhead_lidar_exp.py -d 0-3 -b 1 -e 20 --sync_bn 1 --no-clearml --eval --ckpt <PATH_TO_CHECKPOINT>
Training
Step 0. Train the teacher. To train a detector of a single modality <MODALITY>, run:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_centerhead_<MODALITY>_exp.py -d 0-3 -b 1 -e 20 --sync_bn 1 --no-clearml
Step 1. Train the student. Put the checkpoints of the teachers in unidistill/exps/multisensor_fusion/BEVFusion/tmp/. To distill a teacher of <MODALITY_1> into a student of <MODALITY_2>, run:
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/BEVFusion_nuscenes_centerhead_<MODALITY_2>_exp_distill_<MODALITY_1>.py -d 0-3 -b 1 -e 20 --sync_bn 1 --no-clearml
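The distillation experiment files follow the naming pattern shown above (`..._centerhead_<MODALITY_2>_exp_distill_<MODALITY_1>.py`). A small helper to assemble the launch command for a given teacher/student pair can make sweeping over modality combinations less error-prone; this is a sketch built only from the commands shown in this README:

```python
def distill_cmd(teacher, student, gpus=4):
    """Build the distillation launch command for `teacher` -> `student`.

    The experiment path and flags mirror the example commands above.
    """
    exp = ("unidistill/exps/multisensor_fusion/nuscenes/BEVFusion/"
           f"BEVFusion_nuscenes_centerhead_{student}_exp_distill_{teacher}.py")
    return (f"python -m torch.distributed.launch --nproc_per_node={gpus} "
            f"{exp} -d 0-{gpus - 1} -b 1 -e 20 --sync_bn 1 --no-clearml")
```

For example, `distill_cmd("lidar", "camera")` yields the command for a LiDAR teacher distilling into a camera student on 4 GPUs.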
Citation
If you find this project useful in your research, please consider citing:
@inproceedings{zhou2023unidistill,
  title={UniDistill: A Universal Cross-Modality Knowledge Distillation Framework for 3D Object Detection in Bird's-Eye View},
  author={Shengchao Zhou and Weizhou Liu and Chen Hu and Shuchang Zhou and Chao Ma},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2023}
}