OpenLane-V2

The World's First Perception and Reasoning Benchmark for Scene Structure in Autonomous Driving.


CVPR 2024 AGC Track Mapless Driving

🔥 Welcome to the Mapless Driving track of the Autonomous Grand Challenge at CVPR 2024!

Autonomous driving without HD maps demands a higher level of active scene understanding. This challenge aims to explore the boundaries of scene reasoning capabilities. Neural networks take multi-view images and a Standard-definition (SD) Map as input, and must deliver not only perception results of lanes and traffic elements but also the topology relationships among lanes and between lanes and traffic elements, simultaneously.

Introducing OpenLane-V2 Update

We are happy to announce an important update to the OpenLane family, featuring two sets of additional data and annotations.

  • Map Element Bucket. We provide a diverse collection of road elements (as a bucket) for building the driving scene, on par with the elements of an HD map. With the newly introduced lane segment representation, we unify various map elements to cover comprehensive aspects of the captured static scenes and to empower DriveAGI.
    🔔 The proposed lane segment representation was published with LaneSegNet at ICLR 2024!

  • Standard-definition (SD) Map. As a new sensor input, the SD map supplements multi-view images with topological and positional priors, strengthening the structural knowledge available to neural networks.
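To illustrate, an SD map prior can be thought of as a set of coarse, road-level polylines tagged with categories, consumed alongside the camera images. The sketch below is purely illustrative; the field names and categories are assumptions, not the devkit's actual format:

```python
# Illustrative sketch of an SD map prior: coarse road-level polylines
# tagged with a category. Field names and categories are hypothetical,
# not the OpenLane-V2 devkit schema.
sd_map = [
    {"category": "road",      "polyline": [(0.0, 0.0), (50.0, 0.0)]},
    {"category": "crosswalk", "polyline": [(25.0, -3.0), (25.0, 3.0)]},
]

def polylines_of(sd_map, category):
    """Collect all polylines of a given category, e.g. to rasterize
    them into an extra input channel for the network."""
    return [e["polyline"] for e in sd_map if e["category"] == category]
```

A model could rasterize such polylines into a bird's-eye-view channel or encode them as vectorized tokens; either way, the map acts as a prior, not as ground truth.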


News

Note

The difference between v1.x and v2.x is that we updated APIs and materials on lane segment and SD map in v2.x.

❗️Update on evaluation metrics led to differences in TOP scores between vx.1 (v1.1, v2.1) and vx.0 (v1.0, v2.0). We encourage the use of vx.1 metrics. For more details please see issue #76.

  • 2024/03/01 We are hosting CVPR 2024 Autonomous Grand Challenge.
  • 2023/11/01 Devkit v2.1.0 and v1.1.0 released.
  • 2023/08/28 Dataset subset_B released.
  • 2023/07/21 Dataset v2.0 and Devkit v2.0.0 released.
  • 2023/07/05 The test server of OpenLane Topology is re-opened.
  • 2023/06/01 The Challenge at the CVPR 2023 Workshop wraps up.
  • 2023/04/21 A baseline based on InternImage released. Check out here.
  • 2023/04/20 OpenLane-V2 paper is available on arXiv.
  • 2023/02/15 Dataset v1.0, Devkit v1.0.0, and baseline model released.
  • 2023/01/15 Initial OpenLane-V2 dataset sample v0.1 released.

(back to top)

Task and Evaluation

Driving Scene Topology

🔥 For CVPR 2024 AGC Track Mapless Driving!

Given sensor inputs, models are required to perceive lane segments instead of the lane centerlines used in the OpenLane Topology task. In addition, pedestrian crossings and road boundaries are required to build a comprehensive understanding of the driving scene. The OpenLane-V2 UniScore (OLUS) summarizes model performance across all aspects.

OpenLane Topology

Given sensor inputs, participants are required to deliver not only perception results of lanes and traffic elements but also topology relationships among lanes and between lanes and traffic elements simultaneously. In this task, we use OpenLane-V2 Score (OLS) to evaluate model performance.
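In the OpenLane-V2 paper, OLS is defined as the average of the two detection scores and the square roots of the two topology scores. A minimal sketch of that formula (the function and argument names are ours):

```python
import math

def ols(det_l, det_t, top_ll, top_lt):
    """OpenLane-V2 Score: average of the lane and traffic-element detection
    scores with square-rooted lane-lane and lane-traffic topology scores.
    All inputs are expected in [0, 1]."""
    return 0.25 * (det_l + det_t + math.sqrt(top_ll) + math.sqrt(top_lt))
```

The square root rescales the topology scores, which tend to be numerically smaller, into a range roughly comparable with the detection scores.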

(back to top)

Leaderboard

CVPR 2024 AGC Mapless Driving

This is the ongoing challenge at CVPR 2024; the leaderboard and test server will open in late March!

OpenLane Topology Challenge at CVPR 2023 (Server remains active)

We maintain a leaderboard and test server for the task of scene structure perception and reasoning. If you wish to add new results or modify existing ones on the leaderboard, please drop us an email following the instructions here.


(back to top)

Highlights of OpenLane-V2

Unifying Map Representations

One of the superior formulations in the bucket is Lane Segment. It serves as a unified and versatile representation of lanes, paving the way for multiple downstream applications. With the introduction of the SD map, an autonomous driving system can exploit these informative priors to achieve satisfactory performance in perception and reasoning.

The following table sums up a detailed comparison of different lane formulations to achieve various functionalities.

| Lane Formulation | 3D Space | Laneline Category | Lane Direction | Drivable Area | Lane-level Drivable Area | Lane-lane Topology | Bind to Traffic Element | Laneline-less |
|---|---|---|---|---|---|---|---|---|
| 2D Laneline |  | ✅ |  |  |  |  |  |  |
| 3D Laneline | ✅ | ✅ |  |  |  |  |  |  |
| Online (pseudo) HD Map | ✅ |  |  | ✅ |  |  |  |  |
| Lane Centerline | ✅ |  | ✅ |  |  | ✅ | ✅ | ✅ |
| Lane Segment (newly released) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
  • 3D Space: whether the perceived entities are represented in the 3D space.
  • Laneline Category: categories of the visible laneline, such as solid and dash.
  • Lane Direction: the driving direction that vehicles need to follow in a particular lane.
  • Drivable Area: the entire area where vehicles are allowed to drive.
  • Lane-level Drivable Area: drivable area of a single lane, which restricts vehicles from trespassing neighboring lanes.
  • Lane-lane Topology: connectivity of lanes that builds the lane network to provide routing information.
  • Bind to Traffic Element: correspondence to traffic elements, which provide regulations according to traffic rules.
  • Laneline-less: the ability to provide guidance in areas where no visible laneline is available, such as intersections.
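As a concrete illustration of the functionalities listed above, a lane segment can be modeled as a centerline plus typed left and right lanelines together with its topology links. The field names below are hypothetical, not the devkit's actual annotation schema:

```python
from dataclasses import dataclass
from typing import List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class LaneSegment:
    """Hypothetical, simplified lane segment record (illustrative only)."""
    centerline: List[Point3D]      # driving direction runs first -> last point
    left_laneline: List[Point3D]   # may be empty where no laneline is visible
    right_laneline: List[Point3D]
    left_type: str                 # e.g. "solid", "dashed", "none"
    right_type: str
    successors: List[int]          # ids of following segments (lane-lane topology)
    traffic_elements: List[int]    # ids of governing traffic elements
```

In such a scheme, the area between the two lanelines would give the lane-level drivable area, while a valid centerline with empty laneline lists would cover the laneline-less case at intersections.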

Introducing 3D Laneline

Previous datasets annotate lanes on images in the perspective view. Such 2D annotations are insufficient to meet real-world requirements. Following the OpenLane-V1 practice, we annotate lanes in 3D space to reflect their geometric properties in the real 3D world.

Recognizing Extremely Small Traffic Elements

Not only preventing collisions but also facilitating efficiency is essential. Vehicles follow predefined traffic rules to discipline themselves and to cooperate with others, ensuring a safe and efficient traffic system. Traffic elements on the road, such as traffic lights and road signs, provide practical and real-time information.

Topology Reasoning between Lane and Road Elements

A traffic element is only valid for its corresponding lanes, and following the wrong signal would be catastrophic. Moreover, lanes have predecessors and successors that build up the map. Autonomous vehicles are required to reason about these topology relationships to drive correctly.
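The two relationship types described here, lane-lane and lane-traffic-element, can be modeled as binary incidence matrices. A toy sketch (the names are ours, not the devkit API):

```python
# Toy topology example. lane_lane[i][j] == 1 means lane j directly follows
# lane i; lane_te[i][k] == 1 means traffic element k governs lane i.
lane_lane = [
    [0, 1, 1],  # lane 0 forks into lanes 1 and 2
    [0, 0, 0],
    [0, 0, 0],
]
lane_te = [
    [1, 0],     # traffic element 0 (e.g. a light) binds to lane 0
    [0, 1],     # element 1 binds to lanes 1 and 2
    [0, 1],
]

def successors(lane_lane, i):
    """Lanes directly reachable from lane i."""
    return [j for j, v in enumerate(lane_lane[i]) if v]

def governing_elements(lane_te, i):
    """Traffic elements whose signal applies to lane i."""
    return [k for k, v in enumerate(lane_te[i]) if v]
```

A model predicts these matrices from sensor inputs, and the TOP metrics score how well the predicted edges match the annotated graph.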

(back to top)

Getting Started

(back to top)

License & Citation

Prior to using the OpenLane-V2 dataset, you should agree to the terms of use of the nuScenes and Argoverse 2 datasets respectively. OpenLane-V2 is distributed under CC BY-NC-SA 4.0 license. All code within this repository is under Apache License 2.0.

Please use the following citation when referencing OpenLane-V2:

@inproceedings{wang2023openlanev2,
  title={OpenLane-V2: A Topology Reasoning Benchmark for Unified 3D HD Mapping}, 
  author={Wang, Huijie and Li, Tianyu and Li, Yang and Chen, Li and Sima, Chonghao and Liu, Zhenbo and Wang, Bangjun and Jia, Peijin and Wang, Yuting and Jiang, Shengyin and Wen, Feng and Xu, Hang and Luo, Ping and Yan, Junchi and Zhang, Wei and Li, Hongyang},
  booktitle={NeurIPS},
  year={2023}
}

@article{li2023toponet,
  title={Graph-based Topology Reasoning for Driving Scenes},
  author={Li, Tianyu and Chen, Li and Wang, Huijie and Li, Yang and Yang, Jiazhi and Geng, Xiangwei and Jiang, Shengyin and Wang, Yuting and Xu, Hang and Xu, Chunjing and Yan, Junchi and Luo, Ping and Li, Hongyang},
  journal={arXiv preprint arXiv:2304.05277},
  year={2023}
}

@inproceedings{li2023lanesegnet,
  title={LaneSegNet: Map Learning with Lane Segment Perception for Autonomous Driving},
  author={Li, Tianyu and Jia, Peijin and Wang, Bangjun and Chen, Li and Jiang, Kun and Yan, Junchi and Li, Hongyang},
  booktitle={ICLR},
  year={2024}
}

(back to top)

Related Resources

Awesome

(back to top)