This is the official PyTorch implementation of the paper Li3DeTr: A LiDAR based 3D Detection Transformer, by Gopi Krishna Erabati and Helder Araujo, presented at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023.
Inspired by recent advances in vision transformers for object detection, we propose Li3DeTr, an end-to-end LiDAR-based 3D Detection Transformer for autonomous driving that takes LiDAR point clouds as input and regresses 3D bounding boxes. Local and global LiDAR features are encoded using sparse convolution and multi-scale deformable attention, respectively. In the decoder head, first, the novel Li3DeTr cross-attention block links the LiDAR global features to 3D predictions, leveraging a sparse set of object queries learnt from the data. Second, the object query interactions are formulated using multi-head self-attention. Finally, the decoder layer is repeated L_dec times to refine the object queries. Inspired by DETR, we employ a set-to-set loss to train the Li3DeTr network. Without bells and whistles, Li3DeTr achieves 61.3% mAP and 67.6% NDS on the nuScenes dataset, surpassing state-of-the-art methods that use non-maximum suppression (NMS), and it also achieves competitive performance on the KITTI dataset. We also employ knowledge distillation (KD) with a teacher and a student model, which slightly improves the performance of our network.
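The decoder described above can be sketched as follows. This is an illustrative, simplified sketch using standard PyTorch attention modules, not the actual implementation: the real Li3DeTr cross-attention block and the multi-scale deformable attention encoder are more involved, and all class and function names below are hypothetical.

```python
import torch
import torch.nn as nn


class DecoderLayerSketch(nn.Module):
    """Illustrative decoder layer: self-attention over object queries,
    cross-attention from queries to LiDAR features, then an FFN.
    (Simplified stand-in for the Li3DeTr cross-attention block.)"""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norms = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(3)])

    def forward(self, queries: torch.Tensor, lidar_feats: torch.Tensor) -> torch.Tensor:
        # Object query interactions via multi-head self-attention.
        q = self.norms[0](queries + self.self_attn(queries, queries, queries)[0])
        # Cross-attention links the LiDAR global features to the queries.
        q = self.norms[1](q + self.cross_attn(q, lidar_feats, lidar_feats)[0])
        return self.norms[2](q + self.ffn(q))


def refine(queries, lidar_feats, layers):
    # The decoder layer is repeated L_dec times to refine the object queries.
    for layer in layers:
        queries = layer(queries, lidar_feats)
    return queries
```

Each refined set of queries is then decoded into class scores and box parameters by prediction heads (omitted here).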
Results on the nuScenes dataset:

| LiDAR Backbone | mAP | NDS | Weights |
|---|---|---|---|
| VoxelNet | 61.3 | 67.6 | Model |
| PointPillars | 53.8 | 63.0 | Model |
Results on the KITTI dataset:

| LiDAR Backbone | Easy | Mod. | Hard | Weights |
|---|---|---|---|---|
| VoxelNet | 87.6 | 76.8 | 73.9 | Model |
The code is tested on the following configuration:
Follow MMDetection3D to prepare the nuScenes dataset and symlink the data directory to the data/ folder of this repository.
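For example, the symlink can be created as below; `/path/to/nuscenes` is only a placeholder for wherever your prepared nuScenes data lives.

```shell
# Create the data/ folder and symlink the prepared nuScenes dataset into it.
# /path/to/nuscenes is a placeholder; adjust it to your local dataset location.
mkdir -p data
ln -s /path/to/nuscenes data/nuscenes
```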
Note: Please use mmdet3d==0.18.0 version for data processing!
git clone https://github.com/gopi231091/Li3DeTr.git
cd Li3DeTr
- Download the backbone pretrained weights to ckpts/
- Add the present working directory to PYTHONPATH
export PYTHONPATH=$(pwd):$PYTHONPATH
- To train the Li3DeTr on 2 GPUs, please run
tools/dist_train.sh configs/li3detr_voxel_adam_nus-3d.py 2 --work-dir {WORK_DIR}
- Download the weights of the models accordingly.
- Add the present working directory to PYTHONPATH
export PYTHONPATH=$(pwd):$PYTHONPATH
- To evaluate the model using 2 GPUs, please run
tools/dist_test.sh configs/li3detr_voxel_adam_nus-3d.py /path/to/ckpt 2 --eval=bbox
We sincerely thank the contributors for their open-source code: MMCV, MMDetection and MMDetection3D.
Feel free to cite our article if you find our method useful.
@InProceedings{Erabati_2023_WACV,
author = {Erabati, Gopi Krishna and Araujo, Helder},
title = {Li3DeTr: A LiDAR Based 3D Detection Transformer},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {January},
year = {2023},
pages = {4250-4259}
}