
Li3DeTr: A LiDAR based 3D Detection Transformer

This is the official PyTorch implementation of the paper Li3DeTr: A LiDAR based 3D Detection Transformer by Gopi Krishna Erabati and Helder Araujo, published at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023.

Contents

  1. Abstract
  2. Results
  3. Usage
  4. Reference

Abstract

Inspired by recent advances in vision transformers for object detection, we propose Li3DeTr, an end-to-end LiDAR-based 3D detection transformer for autonomous driving that takes LiDAR point clouds as input and regresses 3D bounding boxes. The LiDAR local and global features are encoded using sparse convolution and multi-scale deformable attention, respectively. In the decoder head, firstly, the novel Li3DeTr cross-attention block links the LiDAR global features to 3D predictions, leveraging a sparse set of object queries learnt from the data. Secondly, the object query interactions are formulated using multi-head self-attention. Finally, the decoder layer is repeated L_dec times to refine the object queries. Inspired by DETR, we employ a set-to-set loss to train the Li3DeTr network. Without bells and whistles, Li3DeTr achieves 61.3% mAP and 67.6% NDS on the nuScenes dataset, surpassing state-of-the-art methods that use non-maximum suppression (NMS), and it also achieves competitive performance on the KITTI dataset. We also employ knowledge distillation (KD) with a teacher and student model, which slightly improves the performance of our network.

(Figure: Li3DeTr network architecture)

Results

Predictions on nuScenes dataset

(Figure: qualitative predictions of Li3DeTr on the nuScenes dataset)

nuScenes Dataset

| LiDAR Backbone | mAP | NDS | Weights |
| -------------- | ---- | ---- | ------- |
| VoxelNet       | 61.3 | 67.6 | Model   |
| PointPillars   | 53.8 | 63.0 | Model   |

KITTI Dataset (AP3D)

| LiDAR Backbone | Easy | Mod. | Hard | Weights |
| -------------- | ---- | ---- | ---- | ------- |
| VoxelNet       | 87.6 | 76.8 | 73.9 | Model   |

Usage

Prerequisites

The code is tested on the following configuration:

Data

Follow MMDetection3D to prepare the nuScenes dataset and symlink the data directory to the data/ folder of this repository (a minimal sketch follows the note below).

Note: please use mmdet3d==0.18.0 for data processing!
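
As a reference, a minimal preparation sketch using the standard MMDetection3D tooling might look as follows; the raw dataset location /path/to/nuscenes is a placeholder, and tools/create_data.py is run from an mmdetection3d (v0.18.0) checkout:

mkdir -p data
ln -s /path/to/nuscenes ./data/nuscenes
# generate the annotation .pkl info files expected by the training pipeline
python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes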

Clone the repository

git clone https://github.com/gopi231091/Li3DeTr.git
cd Li3DeTr

Training

  1. Download the pretrained backbone weights to ckpts/.
  2. Add the present working directory to PYTHONPATH: export PYTHONPATH=$(pwd):$PYTHONPATH
  3. To train Li3DeTr on 2 GPUs, run

tools/dist_train.sh configs/li3detr_voxel_adam_nus-3d.py 2 --work-dir {WORK_DIR}
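
For a single GPU, repositories built on MMDetection3D typically also ship a tools/train.py entry point; assuming this repository follows that convention, the equivalent single-GPU call would be

python tools/train.py configs/li3detr_voxel_adam_nus-3d.py --work-dir {WORK_DIR}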

Testing

  1. Download the weights of the models listed in Results above.
  2. Add the present working directory to PYTHONPATH: export PYTHONPATH=$(pwd):$PYTHONPATH
  3. To evaluate the model using 2 GPUs, run

tools/dist_test.sh configs/li3detr_voxel_adam_nus-3d.py /path/to/ckpt 2 --eval=bbox
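
The underlying MMDetection3D test script also accepts an --out flag to dump raw detections to a pickle file for offline analysis; assuming dist_test.sh passes the standard arguments through, this would look like

tools/dist_test.sh configs/li3detr_voxel_adam_nus-3d.py /path/to/ckpt 2 --out results.pkl --eval=bbox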

Acknowledgement

We sincerely thank the contributors for their open-source code: MMCV, MMDetection, and MMDetection3D.

Reference

Feel free to cite our article if you find our method useful.

@InProceedings{Erabati_2023_WACV,
    author    = {Erabati, Gopi Krishna and Araujo, Helder},
    title     = {Li3DeTr: A LiDAR Based 3D Detection Transformer},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2023},
    pages     = {4250-4259}
}
