
QUEST: Query Stream for Practical Cooperative Perception (ICRA 2024)

(Figure: the query cooperation concept)

Aiming at interpretable and flexible cooperative perception, we propose the concept of query cooperation, which enables instance-level feature interaction among agents via a query stream. To instantiate query cooperation, we propose a representative cooperative perception framework, QUEST. It performs cross-agent query interaction through fusion and complementation, designed for co-aware objects and unaware objects respectively. Taking camera-based vehicle-infrastructure cooperative perception as a typical scenario, we generate camera-centric cooperation labels for DAIR-V2X-Seq and evaluate the proposed framework on them. The experimental results not only demonstrate the effectiveness of QUEST but also show its advantages in transmission flexibility and robustness to packet dropout. In addition, we discuss the pros and cons of the query cooperation paradigm in terms of possible extensions and foreseeable limitations.
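To make the fusion/complementation idea concrete, here is a minimal, hypothetical sketch (not the official implementation, which is not open-sourced): ego and infrastructure object queries are matched by the distance between their reference points; matched pairs (co-aware objects) are fused, and unmatched infrastructure queries (objects the ego agent is unaware of) are appended to the ego query stream. The function name, averaging-based fusion, and distance threshold are illustrative assumptions.

```python
import numpy as np

def fuse_query_streams(ego_queries, ego_refs, infra_queries, infra_refs,
                       match_thresh=2.0):
    """Hypothetical sketch of cross-agent query interaction.

    ego_queries / infra_queries: (N, D) query feature vectors.
    ego_refs / infra_refs: (N, 2) BEV reference points used for matching.
    Co-aware objects (reference points within match_thresh) are fused by
    averaging (a stand-in for a learned fusion module); unmatched
    infrastructure queries complement the ego stream.
    """
    fused = ego_queries.copy()
    matched_infra = set()
    for i, ref in enumerate(ego_refs):
        # nearest infrastructure query by reference-point distance
        d = np.linalg.norm(infra_refs - ref, axis=1)
        j = int(np.argmin(d))
        if d[j] < match_thresh and j not in matched_infra:
            fused[i] = 0.5 * (ego_queries[i] + infra_queries[j])  # fusion
            matched_infra.add(j)
    # complementation: append infra queries with no ego counterpart
    extra = [q for j, q in enumerate(infra_queries) if j not in matched_infra]
    if extra:
        fused = np.vstack([fused, np.stack(extra)])
    return fused
```

In the actual framework, fusion and complementation are learned modules operating on transformer object queries; the sketch only illustrates the instance-level interaction pattern.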

(Figure: the QUEST architecture)

For technical details, please refer to:

QUEST: Query Stream for Practical Cooperative Perception

Q&A

Q1: Will the official code be open-sourced?

We are really glad that our work is valuable to you. We are not planning to open-source QUEST due to requirements from our enterprise partners, but it is applied in a unified framework for End-to-End Cooperative Autonomous Driving called UniV2X.

The official code of UniV2X is open-sourced at UniV2X-Github, and you can find more implementation details of query cooperation there.

For technical details of UniV2X, please refer to UniV2X-Paper.

Q2: How can I use the generated camera-centric cooperative labels of DAIR-V2X-Seq?

Citation

If you find our work useful in your research, please consider citing:

@InProceedings{fan2023quest,
    author    = {Fan, Siqi and Yu, Haibao and Yang, Wenxian and Yuan, Jirui and Nie, Zaiqing},
    title     = {QUEST: Query Stream for Practical Cooperative Perception},
    booktitle = {IEEE International Conference on Robotics and Automation (ICRA)},
    month     = {May},
    year      = {2024},
    pages     = {}
}

Our Related Resources

Cooperative Autonomous Driving

  • [Dataset CVPR2022] DAIR-V2X: A Large-Scale Dataset for Vehicle-Infrastructure Cooperative 3D Object Detection
  • [Dataset CVPR2023] DAIR-V2X-Seq: A Large-Scale Sequential Dataset for Vehicle-Infrastructure Cooperative Perception and Forecasting
  • [Method ICRA2024] EMIFF: Enhanced Multi-scale Image Feature Fusion for Vehicle-Infrastructure Cooperative 3D Object Detection
  • [Method arxiv] UniV2X: End-to-End Autonomous Driving through V2X Cooperation

Roadside Perception

  • [Dataset CVPR2024] RCooper: A Real-world Large-scale Dataset for Roadside Cooperative Perception
  • [Method IROS2023] CBR: Calibration-free BEV Representation for Infrastructure Perception

Vehicle-side Perception

  • [Method CVPR2021] SCF-Net: Learning Spatial Contextual Features for Large-Scale Point Cloud Segmentation
  • [Method TIP2023] CPCL: Conservative-Progressive Collaborative Learning for Semi-supervised Semantic Segmentation
  • [Method TVT2021] FII-CenterNet: An Anchor-free Detector with Foreground Attention for Traffic Object Detection
  • [Method arxiv] SpiderMesh: Spatial-aware Demand-guided Recursive Meshing for RGB-T Semantic Segmentation
