
DriveLM: Driving with Graph Visual Question Answering

The Driving-with-Language track of the Autonomous Driving Challenge 2024 is now live!

License: Apache 2.0 | arXiv | Hugging Face

Demo video: drivelm_nus_demo_v2_1.mp4

Highlights

🔥 We instantiate datasets (DriveLM-Data) built upon nuScenes and CARLA, and propose a VLM-based baseline approach (DriveLM-Agent) for jointly performing Graph VQA and end-to-end driving.

🏁 DriveLM serves as a main track in the CVPR 2024 Autonomous Driving Challenge. Everything you need for the challenge is HERE, including the baseline, test data, submission format, and evaluation pipeline!

Table of Contents

  1. Highlights
  2. Getting Started
  3. Current Endeavors and Future Directions
  4. News and TODO List
  5. DriveLM-Data
  6. License and Citation
  7. Other Resources

Getting Started

To get started with DriveLM, see the documentation in this repository.

(back to top)

Current Endeavors and Future Directions

  • The advent of GPT-style multimodal models in real-world applications motivates the study of the role of language in driving.
  • Dates below reflect arXiv submission dates.
  • If there is any missing work, please reach out to us!

DriveLM attempts to address some of the challenges faced by the community.

  • Lack of data: DriveLM-Data serves as a comprehensive benchmark for driving with language.
  • Embodiment: GVQA provides a potential direction for embodied applications of LLMs / VLMs.
  • Closed-loop: DriveLM-CARLA attempts to explore closed-loop planning with language.

(back to top)

News and TODO List

News

  • [2024/03/25] Challenge test server is online and the test questions are released. Check it out!
  • [2024/02/29] Challenge repo released: baseline, data and submission format, and evaluation pipeline. Have a look!
  • [2023/12/22] DriveLM-nuScenes full v1.0 and the paper released.
  • [2023/08/25] DriveLM-nuScenes demo released.
  • [Early 2024] DriveLM-Agent inference code.
  • Note: We plan to release simple, flexible training code that supports multi-view inputs as a starter kit for the AD challenge (stay tuned for details).

TODO List

  • DriveLM-Data
    • DriveLM-nuScenes
    • DriveLM-CARLA
  • DriveLM-Metrics
    • GPT-score
  • DriveLM-Agent
    • Inference code on DriveLM-nuScenes
    • Inference code on DriveLM-CARLA

(back to top)

DriveLM-Data

DriveLM-Data covers the Perception, Prediction, Planning, Behavior, and Motion tasks, with human-written reasoning logic connecting them. On top of this data, we propose the task of GVQA (Graph Visual Question Answering).
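To make the graph structure concrete, here is a minimal Python sketch of how QA pairs with logical dependencies could be represented; the class and field names are hypothetical illustrations, not the repository's actual schema.

```python
# Minimal sketch of graph-structured VQA data (GVQA). All names here are
# hypothetical illustrations, not the repository's actual schema.
from dataclasses import dataclass, field

@dataclass
class QANode:
    stage: str      # e.g. perception, prediction, planning, behavior, motion
    question: str
    answer: str
    parents: list["QANode"] = field(default_factory=list)  # logical dependencies

# Reasoning flows through the graph: perception grounds prediction,
# which in turn grounds planning.
perceive = QANode("perception",
                  "What objects are ahead of the ego vehicle?",
                  "A pedestrian is crossing at the intersection.")
predict = QANode("prediction",
                 "What will the pedestrian do next?",
                 "Continue crossing toward the far curb.",
                 parents=[perceive])
plan = QANode("planning",
              "What should the ego vehicle do?",
              "Slow down and yield until the crosswalk is clear.",
              parents=[predict])
```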

📊 Comparison and Stats

DriveLM-Data is the first language-driving dataset facilitating the full stack of driving tasks with graph-structured logical dependencies.

See the documentation for details on the GVQA task, dataset features, and annotation.

(back to top)

License and Citation

All assets and code in this repository are under the Apache 2.0 license unless specified otherwise. The language data is under CC BY-NC-SA 4.0. Other datasets (including nuScenes) inherit their own distribution licenses. Please consider citing our paper and project if they help your research.

```bibtex
@article{sima2023drivelm,
  title={DriveLM: Driving with Graph Visual Question Answering},
  author={Sima, Chonghao and Renz, Katrin and Chitta, Kashyap and Chen, Li and Zhang, Hanxue and Xie, Chengen and Luo, Ping and Geiger, Andreas and Li, Hongyang},
  journal={arXiv preprint arXiv:2312.14150},
  year={2023}
}

@misc{contributors2023drivelmrepo,
  title={DriveLM: Driving with Graph Visual Question Answering},
  author={DriveLM contributors},
  howpublished={\url{https://github.com/OpenDriveLab/DriveLM}},
  year={2023}
}
```

(back to top)

Other Resources

  • OpenDriveLab
  • Autonomous Vision Group

(back to top)