Visualization of predicted radar offsets (cyan arrows) to object centers, with detections shown on the image and in the bird's-eye view (BEV). RADIANT's radar association corrects the localization error of the monocular method, improving detection performance. Orange, magenta, and cyan boxes are ground-truth bounding boxes, monocular detections, and RADIANT detections, respectively.
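The association step shown in the figure can be sketched as follows (an illustrative sketch, not the repository's actual code): each radar point predicts a 2D offset to an object center, the shifted point is matched to the nearest monocular detection, and that detection's depth is then corrected with the radar measurement. The function name, the nearest-neighbor matching, and the `match_radius` threshold are assumptions for illustration.

```python
import numpy as np

def associate_and_correct(radar_uv, radar_depth, pred_offsets,
                          mono_centers, mono_depths, match_radius=20.0):
    """Shift each radar pixel by its predicted offset (the cyan arrows),
    match it to the nearest monocular detection center, and replace that
    detection's depth with the radar depth.  Illustrative only."""
    corrected = mono_depths.copy()
    shifted = radar_uv + pred_offsets              # predicted object centers
    for pt, d in zip(shifted, radar_depth):
        dists = np.linalg.norm(mono_centers - pt, axis=1)
        j = np.argmin(dists)
        if dists[j] < match_radius:                # associate only nearby detections
            corrected[j] = d
    return corrected
```

Under this sketch, a monocular detection whose center lies close to a shifted radar point inherits the radar depth, while unmatched detections keep their monocular depth.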
```
radiant/
├── data/
│   └── nuscenes/
│       ├── maps/
│       ├── samples/
│       ├── sweeps/
│       ├── v1.0-trainval/
│       ├── v1.0-test/
│       └── fusion_data/
├── lib/
├── scripts/
└── checkpoints/
```
- Create a conda environment:
  `conda create -n radiant python=3.6`
- Install the required packages (see `requirements.txt`).
- Download the nuScenes dataset (full dataset, v1.0) into `data/nuscenes/`.
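After downloading, it may help to verify that the dataset landed in the expected layout. A minimal sketch (directory names taken from the tree above; the helper itself is not part of the repository):

```python
from pathlib import Path

# v1.0-test is only needed for test-set evaluation, so it is checked separately.
EXPECTED = ["maps", "samples", "sweeps", "v1.0-trainval"]

def missing_dirs(nuscenes_root):
    """Return the expected nuScenes subdirectories that are absent."""
    root = Path(nuscenes_root)
    return [d for d in EXPECTED if not (root / d).is_dir()]
```

For example, `missing_dirs("data/nuscenes")` should return an empty list once the download is complete.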
The code for the monocular components, i.e., FCOS3D and PGD, is adapted from OpenMMLab.
cd radiant
python scripts/prepare_data_trainval.py
python scripts/prepare_data_test.py
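The preparation scripts associate radar returns with camera images; at the core of such an association is a pinhole projection of 3D points in the camera frame onto the image plane. A simplified sketch with a hypothetical intrinsic matrix (the actual scripts use the nuScenes calibration and sensor-to-sensor transforms):

```python
import numpy as np

def project_to_image(points_cam, K):
    """Project (N, 3) points in the camera frame to (M, 2) pixel coordinates,
    dropping points behind the camera.

    points_cam: camera-frame coordinates (x right, y down, z forward)
    K:          (3, 3) camera intrinsic matrix
    """
    pts = points_cam[points_cam[:, 2] > 0]     # keep points in front of the camera
    uvw = pts @ K.T                            # homogeneous pixel coordinates
    return uvw[:, :2] / uvw[:, 2:3]            # perspective divide

# Hypothetical intrinsics (focal length and principal point are made up).
K = np.array([[1266.0,    0.0, 816.0],
              [   0.0, 1266.0, 491.0],
              [   0.0,    0.0,   1.0]])
```

A point on the optical axis, e.g. `(0, 0, 10)`, projects to the principal point `(816, 491)` under these intrinsics.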
Download the weights of FCOS3D pretrained on the nuScenes training set or the training + validation set
CUDA_VISIBLE_DEVICES=0,1,2,3 python scripts/train_radiant_fcos3d.py --resume --num_gpus 4 --samples_per_gpu 4 --epochs 12 --lr 0.001 --workers_per_gpu 2
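The radar branch trained here regresses per-radar-pixel offsets to object centers. A loss of the smooth-L1 (Huber) family is a common choice for this kind of regression; the sketch below assumes that choice and is not necessarily the exact loss used in the repository:

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    """Smooth L1 loss: quadratic for small errors, linear for large ones,
    averaged over all offset components."""
    diff = np.abs(pred - target)
    loss = np.where(diff < beta,
                    0.5 * diff ** 2 / beta,   # quadratic region near zero
                    diff - 0.5 * beta)        # linear region for outliers
    return loss.mean()
```

The linear tail keeps gradients bounded for radar points whose offset targets are far off, which matters when many returns are noisy.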
# 1) Generate training data
CUDA_VISIBLE_DEVICES=0 python scripts/prepare_data_dwn_fcos3d.py
# 2) Train
CUDA_VISIBLE_DEVICES=0,1,2,3 python scripts/train_dwn.py --resume --workers_per_gpu 2 --samples_per_gpu 256 --num_gpus 4 --epochs 200
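The Depth Weighting Network (DWN) trained above decides, per detection, how much to trust the radar-derived depth versus the monocular depth. One way to sketch the fusion is a convex combination with a sigmoid-squashed weight; the sigmoid blend is an assumption for illustration, not necessarily the exact formulation used here:

```python
import numpy as np

def fuse_depths(depth_mono, depth_radar, logit_w):
    """Blend monocular and radar depths with a learned per-detection weight.

    logit_w: raw network output; the sigmoid maps it to a weight in (0, 1),
             where values near 1 mean 'trust the radar depth'.
    """
    w = 1.0 / (1.0 + np.exp(-logit_w))        # sigmoid
    return w * depth_radar + (1.0 - w) * depth_mono
```

With `logit_w = 0` the two depths are averaged; a large positive logit recovers the radar depth almost exactly.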
Download the pretrained weights of the camera/radar branch and the DWN
# Evaluate fusion method FCOS3D + RADIANT
CUDA_VISIBLE_DEVICES=0 python scripts/train_radiant_fcos3d.py --do_eval
# Evaluate monocular method FCOS3D
CUDA_VISIBLE_DEVICES=0 python scripts/train_radiant_fcos3d.py --do_eval --eval_mono
Download the pretrained weights of the camera/radar branch and the DWN
# Evaluate fusion method FCOS3D + RADIANT
CUDA_VISIBLE_DEVICES=0 python scripts/train_radiant_fcos3d.py \
--do_eval --eval_set test \
--dir_result data/nuscenes/fusion_data/train_result/radiant_fcos3d_test \
--path_checkpoint_dwn data/nuscenes/fusion_data/dwn_radiant_fcos3d_test/train_result/checkpoint.tar
# Evaluate monocular method FCOS3D
CUDA_VISIBLE_DEVICES=0 python scripts/train_radiant_fcos3d.py \
--do_eval --eval_mono --eval_set test \
--dir_result data/nuscenes/fusion_data/train_result/radiant_fcos3d_test \
--path_checkpoint_dwn data/nuscenes/fusion_data/dwn_radiant_fcos3d_test/train_result/checkpoint.tar
Download the weights of PGD pretrained on the nuScenes training set
CUDA_VISIBLE_DEVICES=0,1,2,3 python scripts/train_radiant_pgd.py --resume --num_gpus 4 --samples_per_gpu 4 --epochs 10 --lr 0.001 --workers_per_gpu 2
# 1) Generate training data
CUDA_VISIBLE_DEVICES=0 python scripts/prepare_data_dwn_pgd.py
# 2) Train
CUDA_VISIBLE_DEVICES=0,1,2,3 python scripts/train_dwn.py --resume --workers_per_gpu 2 --samples_per_gpu 256 --num_gpus 4 --epochs 200 \
    --dir_data data/nuscenes/fusion_data/dwn_radiant_pgd
Download the pretrained weights of the camera/radar branch and the DWN
# Evaluate fusion method PGD + RADIANT
CUDA_VISIBLE_DEVICES=0 python scripts/train_radiant_pgd.py --do_eval
# Evaluate monocular method PGD
CUDA_VISIBLE_DEVICES=0 python scripts/train_radiant_pgd.py --do_eval --eval_mono
# Evaluate fusion method PGD + RADIANT
CUDA_VISIBLE_DEVICES=0 python scripts/train_radiant_pgd.py --do_eval --eval_set test
# Evaluate monocular method PGD
CUDA_VISIBLE_DEVICES=0 python scripts/train_radiant_pgd.py --do_eval --eval_mono --eval_set test
@InProceedings{long2023radiant,
author = {Long, Yunfei and Kumar, Abhinav and Morris, Daniel and Liu, Xiaoming and Castro, Marcos and Chakravarty, Punarjay},
title = {RADIANT: Radar-Image Association Network for 3D Object Detection},
booktitle = {AAAI},
year = {2023}
}