If you use the code, please cite our paper:
@article{deng20243d,
title={3D Extended Object Tracking by Fusing Roadside Sparse Radar Point Clouds and Pixel Keypoints},
author={Jiayin Deng and Zhiqun Hu and Yuxuan Xia and Zhaoming Lu and Xiangming Wen},
journal={arXiv preprint arXiv:2404.17903},
year={2024},
eprint={2404.17903},
archivePrefix={arXiv}
}
- Clone the repository and cd into it:
  git clone https://github.com/RadarCameraFusionTeam-BUPT/ES-EOT-real-nuscenes.git
  cd ES-EOT-real-nuscenes
- Install the dependencies as described in the Installation section of the GitHub page.
  Note: The detections of CRN and HVDetFusion are saved in JSON files; the real-time detection code is not released here.
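The saved detection files can be inspected with Python's standard library. A minimal sketch, assuming a hypothetical per-frame schema; the actual CRN / HVDetFusion JSON field names may differ, so check the files themselves:

```python
import json

# Hypothetical example of what one detection entry might look like;
# the real JSON schema in this repo may use different field names.
sample = [
    {"frame": 0, "center": [12.3, -4.1, 0.9], "size": [4.5, 1.9, 1.6], "yaw": 0.12}
]

# Round-trip through JSON, exactly as when loading the saved files from disk.
text = json.dumps(sample)
detections = json.loads(text)

for det in detections:
    print(det["frame"], det["center"])
```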
- Run the main file (the result is written to a .npy file in the same folder as main.py):
  cd ES-EOT
  python main.py
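The resulting .npy file can be loaded back with NumPy for further analysis. A minimal sketch; the filename and array layout below are assumptions for illustration, so check main.py for what is actually saved:

```python
import numpy as np

# Simulate a saved tracking result so the snippet is self-contained;
# main.py writes a real file of this kind next to itself.
dummy = np.zeros((5, 3))          # e.g. 5 frames of (x, y, z) state estimates
np.save("res.npy", dummy)         # hypothetical filename

# allow_pickle=True is needed if the result contains Python objects.
res = np.load("res.npy", allow_pickle=True)
print(res.shape)
```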
- Calculate the ATE and ASE metrics:
  python calculate_matrics.py
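The actual computation lives in calculate_matrics.py. As a rough illustration of what ATE means under the nuScenes convention (mean Euclidean distance between matched estimated and ground-truth centers), with toy numbers of my own:

```python
import numpy as np

# Toy ground-truth and estimated object centers over 3 frames (x, y).
gt  = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
est = np.array([[0.3, 0.4], [1.0, 0.0], [2.3, 0.4]])

# ATE: mean Euclidean distance between matched centers.
# Here two frames are off by 0.5 m and one is exact.
ate = np.linalg.norm(est - gt, axis=1).mean()
print(ate)
```

ASE, by contrast, measures scale error of the aligned boxes (1 minus the IoU after centering and orienting them identically), so it depends on the estimated extent rather than the position.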
- Show the tracking results in an animation:
  python ShowRes.py
- Show BEV detections in a .jpg picture:
  cd show
  python show_BEV_track.py
  Note: Keypoint detections are stored in data/turn_left/kps_json/. However, if you wish to reuse the pre-trained model, follow these steps:
- Obtain the pre-trained model weights from the model link, and move them into the assets folder.
- Once you have the pre-trained model weights and the dependencies installed, run the keypoint detection script:
  python predict.py data/turn_left/images --model assets/best.pt --render
  The results are written into the output folder.