TransCAR is a Transformer-based Camera-And-Radar fusion solution for 3D object detection. The cross-attention layers in the transformer decoder adaptively learn a soft association between radar features and vision-updated queries, instead of a hard association based only on sensor calibration. Our model estimates one bounding box per query and is trained with a set-to-set Hungarian loss, which removes the need for non-maximum suppression. TransCAR also improves velocity estimation using radar scans, without requiring temporal information.
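As a rough illustration of the soft-association idea (not the actual TransCAR module; the class name and tensor shapes below are ours), a decoder layer in which vision-updated queries cross-attend to radar features could look like this:

```python
import torch
import torch.nn as nn

class RadarCrossAttention(nn.Module):
    """Illustrative sketch: queries softly attend to radar features."""

    def __init__(self, embed_dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, queries, radar_feats):
        # queries:     (B, num_queries, C) vision-updated object queries
        # radar_feats: (B, num_radar, C)   encoded radar features
        attended, weights = self.attn(queries, radar_feats, radar_feats)
        # `weights` holds the learned soft association between every query
        # and every radar feature, rather than a hard calibration-based match.
        return self.norm(queries + attended), weights
```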
Our implementation is built on top of MMDetection3D.
Please follow detr3d to prepare the prerequisites and data. This project is developed based on the detr3d codebase; thanks for their excellent work!
We recommend using conda to set up the environment; the list of packages installed in our conda environment is provided for reference.
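For example, if the package list is exported as a conda spec file (the filename below is a placeholder, not the actual file in this repo), the environment can be recreated directly:

```
# Placeholder filename: substitute the package list provided with this repo
conda create --name transcar --file conda_packages.txt
conda activate transcar
```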
After preparing the data following mmdet3d and installing the environment, please download the pre-trained detr3d weights to initialize the camera network. Then update `load_from` in projects/configs/detr3d/detr3d_res101_gridmask.py to point to your downloaded pre-trained detr3d weights, as in the sketch below.
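The config file is a Python file in MMDetection3D's config format, so the relevant line would look like this; the checkpoint path is a placeholder for wherever you saved the weights:

```python
# projects/configs/detr3d/detr3d_res101_gridmask.py
# Placeholder path: point this at your downloaded detr3d checkpoint.
load_from = '/path/to/detr3d_resnet101_gridmask.pth'
```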
There are three different detr3d models; the one mentioned above is the smallest and is suitable for fast development and debugging. If you have a high-end GPU system with sufficient memory and compute power, you can use the other two bigger detr3d models (model1 pre-trained weights and model2 pre-trained weights), and then update `load_from` in the corresponding config file (detr3d_res101_gridmask_cbgs.py or detr3d_res101_gridmask_det_final_trainval_cbgs.py, depending on the model you choose).
For standard training/evaluation, please use the following line at the top of projects/mmdet3d_plugin/models/dense_heads/detr3d_head.py; note that you need to change `dataroot` to point to your nuScenes data directory:

```python
nusc = NuScenes(version='v1.0-trainval', dataroot='/home/xxx/nuscene_data/NUSCENES_DATASET_ROOT', verbose=True)
```
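To verify that `dataroot` is set correctly before training, a quick standalone check with the nuScenes devkit looks like this (the dataroot below is a placeholder):

```python
from nuscenes.nuscenes import NuScenes

# Placeholder dataroot: point this at your local nuScenes directory.
nusc = NuScenes(version='v1.0-trainval', dataroot='/home/xxx/nuscene_data/NUSCENES_DATASET_ROOT', verbose=True)
print(len(nusc.sample))  # v1.0-trainval contains 34149 samples
```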
For fast testing and debugging with the nuScenes mini dataset, use the line below in projects/mmdet3d_plugin/models/dense_heads/detr3d_head.py instead; again, change `dataroot` to point to your nuScenes data directory:

```python
nusc = NuScenes(version='v1.0-mini', dataroot='/home/xxx/nuscene_data/NUSCENES_DATASET_ROOT', verbose=True)
```
Run the command below to launch training:

```
python tools/train.py /TransCAR/projects/configs/detr3d/detr3d_res101_gridmask.py
```
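If your MMDetection3D checkout includes the standard distributed launcher, multi-GPU training should follow the usual pattern (a hedged example; the GPU count is a placeholder):

```
# Example: train on 8 GPUs with MMDetection3D's standard launcher
./tools/dist_train.sh /TransCAR/projects/configs/detr3d/detr3d_res101_gridmask.py 8
```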
Follow the directions in the Train section to set up the NuScenes data object.
To evaluate our trained model, please download the weights (model_weights.pth) from here, then run the command below for evaluation:

```
python tools/test.py /TransCAR/projects/configs/detr3d/detr3d_res101_gridmask.py /path/to/trained/weights --eval=bbox
```
Example command:

```
python tools/test.py /TransCAR/projects/configs/detr3d/detr3d_res101_gridmask.py /path/to/model_weights.pth --eval=bbox
```
Download the weights from the table below as needed.
| Backbone | mAP | NDS | Download |
|---|---|---|---|
| DETR3D (baseline) | 34.7 | 42.2 | model |
| TransCAR | 35.5 | 47.1 | model |
For best performance, we recommend using the detr3d_vovnet_trainval version of detr3d as the camera network (download the pre-trained weights here). Then use the line below in projects/mmdet3d_plugin/models/dense_heads/detr3d_head.py:

```python
nusc = NuScenes(version='v1.0-test', dataroot='/home/xxx/nuscene_data/NUSCENES_DATASET_ROOT', verbose=True)
```
Run the following command to generate the detection files (you can download the pre-trained TransCAR model weights here):

```
python tools/test.py /TransCAR/projects/configs/detr3d/detr3d_vovnet_gridmask_det_final_trainval_cbgs.py /dir/to/trained/weights/weights_final_test.pth --format-only --eval-options 'jsonfile_prefix=/dir/to/save/the/results'
```
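To take a quick look at the generated submission, something like the following works; note that the output filename (results_nusc.json under the jsonfile_prefix directory) is MMDetection3D's usual convention for nuScenes and is an assumption here:

```python
import json

# Assumed filename: MMDetection3D typically writes results_nusc.json
# under the jsonfile_prefix directory for nuScenes submissions.
with open('/dir/to/save/the/results/results_nusc.json') as f:
    submission = json.load(f)

print(submission['meta'])          # sensor modalities used
print(len(submission['results']))  # number of sample tokens with detections
```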
Evaluation results on the nuScenes test set: mAP: 42.2; NDS: 52.2
Error: torch has no attribute `nan_to_num` (torch.nan_to_num was added in PyTorch 1.8, so older versions raise this error).

Solution: replace `nan_to_num` with item assignment. For example,

```python
loss_cls = torch.nan_to_num(loss_cls)  # this line can raise the error above
```

change the above code into:

```python
loss_cls[torch.isnan(loss_cls)] = 0
```
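If you want a single code path across PyTorch versions, a small compatibility helper is one option (a sketch; the helper name is ours, not part of the codebase):

```python
import torch

def nan_to_num_compat(t, val=0.0):
    """Hypothetical helper: replace NaNs on any PyTorch version."""
    if hasattr(torch, 'nan_to_num'):  # available since PyTorch 1.8
        return torch.nan_to_num(t, nan=val)
    t = t.clone()  # avoid mutating the input in place
    t[torch.isnan(t)] = val
    return t

loss = torch.tensor([1.0, float('nan'), 2.0])
print(nan_to_num_compat(loss))  # tensor([1., 0., 2.])
```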
If you find this work useful in your research, please consider citing:
```
@article{pang2023transcar,
  title={TransCAR: Transformer-based Camera-And-Radar Fusion for 3D Object Detection},
  author={Pang, Su and Morris, Daniel and Radha, Hayder},
  journal={arXiv preprint arXiv:2305.00397},
  year={2023}
}
```
Again, this work is developed based on detr3d; thanks for their good work!