
Where2comm

[Paper] [Project]

The CoPerception-UAV dataset is available here.

This repository contains the official PyTorch implementation of

Where2comm: Communication-Efficient Collaborative Perception via Spatial Confidence Maps
Yue Hu, Shaoheng Fang, Zixing Lei, Yiqi Zhong, Siheng Chen
Presented at NeurIPS 2022


Single-agent detection vs. collaborative perception

Main idea

Abstract: Multi-agent collaborative perception could significantly upgrade the perception performance by enabling agents to share complementary information with each other through communication. It inevitably results in a fundamental trade-off between perception performance and communication bandwidth. To tackle this bottleneck issue, we propose a spatial confidence map, which reflects the spatial heterogeneity of perceptual information. It empowers agents to only share spatially sparse, yet perceptually critical information, contributing to where to communicate.

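To make the idea concrete, below is a minimal PyTorch sketch of confidence-guided sparse feature sharing. It assumes a per-agent BEV feature map and detection-head logits, and uses a made-up threshold; it illustrates the mechanism only and is not the code in this repository.

# Minimal sketch (not the repository's implementation): threshold a detection-
# derived spatial confidence map and share only the selected BEV locations.
import torch

def select_features_to_share(bev_features, confidence_logits, threshold=0.5):
    """bev_features:      (C, H, W) intermediate BEV features of one agent
    confidence_logits: (1, H, W) detection-head logits used as spatial confidence
    threshold:         hypothetical cut-off above which a location is shared
    """
    confidence = torch.sigmoid(confidence_logits)   # (1, H, W), values in [0, 1]
    mask = (confidence > threshold).float()          # binary spatial selection mask
    sparse_features = bev_features * mask            # keep only confident locations
    comm_ratio = mask.mean().item()                  # fraction of the map transmitted
    return sparse_features, mask, comm_ratio

# Toy usage with random tensors: 64 channels on a 100 x 100 BEV grid.
features = torch.randn(64, 100, 100)
logits = torch.randn(1, 100, 100)
shared, mask, ratio = select_features_to_share(features, logits)
print(f"sharing {ratio:.1%} of spatial locations")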

Features

Citation

If you find this code useful in your research, please cite:

@inproceedings{Where2comm:22,
  author    = {Hu, Yue and Fang, Shaoheng and Lei, Zixing and Zhong, Yiqi and Chen, Siheng},
  title     = {Where2comm: Communication-Efficient Collaborative Perception via Spatial Confidence Maps},
  booktitle = {Thirty-sixth Conference on Neural Information Processing Systems (NeurIPS)},
  month     = {November},
  year      = {2022}
}

Quick Start

Install

Please refer to INSTALL.md for detailed documentation.

Download dataset DAIR-V2X

  1. Download the raw data of DAIR-V2X.
  2. Download the complemented annotations from Yifan Lu.

Train your model

We adopt the same setting as OpenCOOD, which uses yaml files to configure all training parameters. To train your own model from scratch or continue from a checkpoint, run the following command (a concrete example is given after the argument list):

python opencood/tools/train.py --hypes_yaml ${CONFIG_FILE} [--model_dir  ${CHECKPOINT_FOLDER}]

Arguments Explanation:

  • hypes_yaml: the path of the training configuration file, e.g. opencood/hypes_yaml/second_early_fusion.yaml, meaning you want to train an early-fusion model that uses SECOND as the backbone. See Tutorial 1: Config System to learn more about the yaml configuration rules.
  • model_dir (optional): the path to a checkpoint folder. This is used to fine-tune trained models. When model_dir is given, the trainer ignores hypes_yaml and loads the config.yaml from the checkpoint folder instead.
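For example, to train the early-fusion model mentioned above from scratch, and then to continue it from a saved checkpoint (the checkpoint folder name below is only a placeholder):

python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/second_early_fusion.yaml

python opencood/tools/train.py --hypes_yaml opencood/hypes_yaml/second_early_fusion.yaml --model_dir opencood/logs/second_early_fusion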

Test the model

Before you run the following command, first make sure that validation_dir in the config.yaml under your checkpoint folder points to the testing dataset path, e.g. opv2v_data_dumping/test.
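That is, the relevant line in config.yaml should look roughly like this (the path is just the example test split mentioned above):

validation_dir: 'opv2v_data_dumping/test'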

python opencood/tools/inference.py --model_dir ${CHECKPOINT_FOLDER} --fusion_method ${FUSION_STRATEGY} --save_vis_n ${amount}

Arguments Explanation:

  • model_dir: the path to your saved model.
  • fusion_method: the fusion strategy; currently supports 'early', 'late', 'intermediate', 'no' (no fusion, i.e. single-agent detection), and 'intermediate_with_comm' (intermediate fusion that also outputs the communication cost).
  • save_vis_n: the number of visualization results to save (default: 10).
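For example, to evaluate a trained checkpoint with intermediate fusion while also reporting the communication cost (the checkpoint folder below is only a placeholder):

python opencood/tools/inference.py --model_dir opencood/logs/where2comm --fusion_method intermediate_with_comm --save_vis_n 10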

The evaluation results will be dumped in the model directory.

Acknowledgements

Thanks to the excellent cooperative perception codebases OpenCOOD and CoPerception.

Thanks to the excellent cooperative perception datasets DAIR-V2X, OPV2V and V2X-SIM.

Thanks to Yifan Lu for the dataset and code support.

Relevant Projects

Thanks to the insightful previous works in the cooperative perception field.

V2VNet: Vehicle-to-Vehicle Communication for Joint Perception and Prediction, ECCV 2020 [Paper]

When2com: Multi-Agent Perception via Communication Graph Grouping, CVPR 2020 [Paper] [Code]

Who2com: Collaborative Perception via Learnable Handshake Communication, ICRA 2020 [Paper]

Learning Distilled Collaboration Graph for Multi-Agent Perception, NeurIPS 2021 [Paper] [Code]

V2X-Sim: A Virtual Collaborative Perception Dataset and Benchmark for Autonomous Driving, RA-L 2021 [Paper] [Website] [Code]

OPV2V: An Open Benchmark Dataset and Fusion Pipeline for Perception with Vehicle-to-Vehicle Communication, ICRA 2022 [Paper] [Website] [Code]

V2X-ViT: Vehicle-to-Everything Cooperative Perception with Vision Transformer, ECCV 2022 [Paper] [Code] [Talk]

Self-Supervised Collaborative Scene Completion: Towards Task-Agnostic Multi-Robot Perception, CoRL 2022 [Paper]

CoBEVT: Cooperative Bird's Eye View Semantic Segmentation with Sparse Transformers, CoRL 2022 [Paper] [Code]

DAIR-V2X: A Large-Scale Dataset for Vehicle-Infrastructure Cooperative 3D Object Detection, CVPR 2022 [Paper] [Website] [Code]

Contact

If you have any problems with this code, please feel free to contact 18671129361@sjtu.edu.cn.
