
SuperFusion

This repository contains the implementation of the ICRA 2024 paper:

SuperFusion: Multilevel LiDAR-Camera Fusion for Long-Range HD Map Generation

Hao Dong, Xianjing Zhang, Jintao Xu, Rui Ai, Weihao Gu, Huimin Lu, Juho Kannala and Xieyuanli Chen

The arXiv version of the paper is available at https://arxiv.org/abs/2211.15656.

Pipeline overview of SuperFusion. Our method fuses camera and LiDAR data at three levels: data-level fusion injects LiDAR depth information to improve the accuracy of image depth estimation, feature-level fusion uses cross-attention for long-range LiDAR BEV feature prediction guided by image features, and BEV-level fusion aligns the two branches to generate high-quality fused BEV features. The fused BEV features then support different heads, including semantic segmentation, instance embedding, and direction prediction, whose outputs are finally post-processed to generate the HD map prediction.
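For orientation, the feature-level and BEV-level stages above can be read as the following minimal PyTorch-style sketch. It assumes both branches already produce same-sized BEV feature maps (the data-level depth fusion happens inside the camera branch and is omitted here); all module names, channel sizes, and head dimensions are illustrative assumptions, not the classes used in this repository.

# Minimal sketch of the feature-level and BEV-level fusion described above.
# Everything here (names, channels, head sizes) is an illustrative assumption,
# not the actual implementation in this repository.
import torch
import torch.nn as nn

class FusionSketch(nn.Module):
    def __init__(self, c=128, num_classes=4, embed_dim=16, num_directions=37):
        super().__init__()
        # Feature-level fusion: cross-attention predicts long-range LiDAR BEV
        # features with guidance from the camera features.
        self.cross_attention = nn.MultiheadAttention(c, num_heads=8, batch_first=True)
        # BEV-level fusion: align and merge the two branches (a plain conv here).
        self.bev_fuse = nn.Conv2d(2 * c, c, kernel_size=3, padding=1)
        # Output heads named in the overview.
        self.seg_head = nn.Conv2d(c, num_classes, 1)        # semantic segmentation
        self.embed_head = nn.Conv2d(c, embed_dim, 1)        # instance embedding
        self.dir_head = nn.Conv2d(c, num_directions, 1)     # direction prediction

    def forward(self, cam_bev, lidar_bev):
        b, c, h, w = lidar_bev.shape
        q = lidar_bev.flatten(2).transpose(1, 2)             # (B, H*W, C) queries from LiDAR BEV
        kv = cam_bev.flatten(2).transpose(1, 2)              # keys/values from camera BEV
        lidar_long, _ = self.cross_attention(q, kv, kv)      # feature-level fusion
        lidar_long = lidar_long.transpose(1, 2).reshape(b, c, h, w)
        fused = self.bev_fuse(torch.cat([cam_bev, lidar_long], dim=1))  # BEV-level fusion
        return self.seg_head(fused), self.embed_head(fused), self.dir_head(fused)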


Abstract

High-definition (HD) semantic map generation of the environment is an essential component of autonomous driving. Existing methods have achieved good performance in this task by fusing different sensor modalities, such as LiDAR and camera. However, current works are based on raw data or network feature-level fusion and only consider short-range HD map generation, limiting their deployment to realistic autonomous driving applications. In this paper, we focus on the task of building the HD maps in both short ranges, i.e., within 30 m, and also predicting long-range HD maps up to 90 m, which is required by downstream path planning and control tasks to improve the smoothness and safety of autonomous driving. To this end, we propose a novel network named SuperFusion, exploiting the fusion of LiDAR and camera data at multiple levels. We use LiDAR depth to improve image depth estimation and use image features to guide long-range LiDAR feature prediction. We benchmark our SuperFusion on the nuScenes dataset and a self-recorded dataset and show that it outperforms the state-of-the-art baseline methods with large margins on all intervals. Additionally, we apply the generated HD map to a downstream path planning task, demonstrating that the long-range HD maps predicted by our method can lead to better path planning for autonomous vehicles.

Code

Prepare

  1. Download the nuScenes dataset (Full dataset and Map expansion) and unzip the files. The folder should be organized as in the sketch after this list.
  2. Install dependencies by running
pip install -r requirement.txt
  3. Download the pretrained DeepLabV3 model and place it in the checkpoints directory.
  4. Download the pretrained SuperFusion model and place it in the runs directory.
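A typical nuScenes layout for step 1 looks like the sketch below; the exact contents depend on which parts of the dataset you download, so treat the subfolder list as an assumption rather than a requirement enforced by this repository:

/path/to/nuScenes/
├── maps/            # from the Map expansion pack
├── samples/         # keyframe camera images and LiDAR scans
├── sweeps/          # intermediate (non-keyframe) sensor sweeps
└── v1.0-trainval/   # metadata and annotation tables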

Training

python train.py --instance_seg --direction_pred --depth_sup --dataroot /path/to/nuScenes/ --pretrained --add_depth_channel
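The flags presumably map to the components described in the pipeline overview: --instance_seg and --direction_pred enable the instance embedding and direction prediction heads, --depth_sup and --add_depth_channel enable the LiDAR depth supervision and depth input used for data-level fusion, and --pretrained loads the pretrained backbone from the Prepare steps; consult the argument parser in train.py for the authoritative definitions.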

Evaluation

  1. Evaluate IoU (see the per-interval sketch after this list)
python evaluate_iou_split.py --dataroot /path/to/nuScenes/ --modelf runs/model.pt --instance_seg --direction_pred --depth_sup --add_depth_channel --pretrained
  2. Evaluate CD (Chamfer distance) and AP (average precision)
python export_pred_to_json.py --dataroot /path/to/nuScenes/ --modelf runs/model.pt --depth_sup --add_depth_channel --pretrained
python evaluate_json_split.py --result_path output.json --dataroot /path/to/nuScenes/
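The _split suffix in the evaluation scripts presumably refers to splitting the metrics over distance intervals (the paper reports results on all intervals up to 90 m). Below is a minimal sketch of per-interval IoU on an ego-centered BEV grid; the interval boundaries and grid resolution are chosen purely for illustration and are not necessarily what evaluate_iou_split.py uses.

# Minimal sketch of per-interval IoU over an ego-centered BEV grid.
# The 0-30/30-60/60-90 m boundaries and the 0.5 m cell size are assumptions
# for illustration, not necessarily what evaluate_iou_split.py uses.
import numpy as np

def interval_iou(pred, gt, x_coords, intervals=((0, 30), (30, 60), (60, 90))):
    """pred, gt: boolean BEV masks (H, W); x_coords: forward distance in meters per column."""
    ious = {}
    for lo, hi in intervals:
        cols = (x_coords >= lo) & (x_coords < hi)
        inter = np.logical_and(pred[:, cols], gt[:, cols]).sum()
        union = np.logical_or(pred[:, cols], gt[:, cols]).sum()
        ious[f"{lo}-{hi} m"] = inter / union if union > 0 else float("nan")
    return ious

# Toy example: a 100 x 180 grid covering 0-90 m ahead at 0.5 m per cell.
x_coords = np.arange(180) * 0.5
pred = np.zeros((100, 180), dtype=bool); pred[40:60, :120] = True
gt = np.zeros((100, 180), dtype=bool);   gt[42:62, :150] = True
print(interval_iou(pred, gt, x_coords))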

Visualization

python vis_prediction_gt.py --instance_seg --direction_pred --dataroot /path/to/nuScenes/
python vis_prediction.py --modelf runs/model.pt --instance_seg --direction_pred --depth_sup --pretrained --add_depth_channel --version v1.0-trainval --dataroot /path/to/nuScenes/

Long-range HD map generation on the nuScenes dataset

Instance detection results on the nuScenes dataset

More examples on the self-recorded dataset

Long-range HD map generation on the self-recorded dataset

Instance detection results on the self-recorded dataset

Citation

If you use our implementation in your academic work, please cite the corresponding paper:

@article{dong2022SuperFusion,
	author   = {Hao Dong and Xianjing Zhang and Jintao Xu and Rui Ai and Weihao Gu and Huimin Lu and Juho Kannala and Xieyuanli Chen},
	title    = {{SuperFusion: Multilevel LiDAR-Camera Fusion for Long-Range HD Map Generation}},
	journal  = {arXiv preprint arXiv:2211.15656},
	year     = {2022},
}

Acknowledgement

Many thanks to the excellent open-source projects HDMapNet, LSS, AlignSeg, and CaDDN.
