
Bézier Everywhere All at Once: Learning Drivable Lanes as Bézier Graphs

Downloading Data

To run our preprocessing steps, you will first need to download the raw Urban Lane Graph dataset. This can be done by following the download instructions here. Our experiments used the v1.1 version of the dataset.
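As a quick sanity check that the download landed where you expect, something like the following can be run. The urbanlanegraph-dataset-pub-v1.1 directory name is taken from the Succ-LGP preprocessing step below; adjust the path to wherever you placed the download.

from pathlib import Path

# The downloaded dataset directory; the name is assumed from the preprocessing
# section below, so substitute your own location if it differs.
root = Path("urbanlanegraph-dataset-pub-v1.1")
print(sorted(p.name for p in root.iterdir() if p.is_dir()))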

Preprocessing Data

Preprocessing data for the Full-LGP is straightforward using preprocess.py. To generate both the train and eval data, run this script, specifying the raw_dataset_root argument:

python preprocess.py --raw_dataset_root <path/to/raw/dataset>

Note that the eval data generated here is only for internal evaluation; the full evaluation reported in our paper is computed on the complete tiles. This step also produces an eval_full_lgp directory, which is used for the aggregated evaluation later.
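Once preprocessing has finished, a quick way to confirm it produced something is to count the generated files. The directory below is the default output location mentioned in the regeneration note further down; the file naming inside it is not specified here.

from pathlib import Path

# Count whatever preprocess.py wrote to its default output directory
# (aerial_lane_bezier/dataset/processed_files, per the regeneration note below).
processed = Path("aerial_lane_bezier/dataset/processed_files")
n_files = sum(1 for p in processed.rglob("*") if p.is_file())
print(f"{n_files} files under {processed}")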

Preprocessing for Succ-LGP is more involved. To generate the "raw" train data, you must first clone the LaneGNN Urban Lane Graph repo and use their code to generate the raw data, which should contain *-graph.gpickle and *-rgb.gpickle files (alongside many other files, but these are the only two we need per sample).
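If you want to sanity check the LaneGNN output before preprocessing it, the graph files can be opened with pickle directly. We assume here that the .gpickle files are pickled networkx graphs, which is the usual convention for that extension; the path placeholder is illustrative.

import pickle
from pathlib import Path

# Inspect one raw LaneGNN sample; the path and the networkx assumption are
# illustrative rather than documented by this repo.
graph_path = next(Path("<path/to/lanegnn/output>").rglob("*-graph.gpickle"))
with open(graph_path, "rb") as f:
    graph = pickle.load(f)
print(graph_path.name, type(graph))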

Both the train and eval processed datasets are then generated by running successor_preprocess_data.py. This uses the raw data preprocessed by LaneGNN for the train data, and the files provided in the urbanlanegraph-dataset-pub-v1.1/{city}/successor-lgp/eval/ directories for eval.

python successor_preprocess_data.py --raw_dataset_root <path/to/raw/dataset> --ulg_dataset_root <path/to/ulg/raw/dataset>

If you want to regenerate data, first delete the contents of the aerial_lane_bezier/dataset/processed_files directory (or aerial_lane_bezier/dataset/successor_processed_files for Succ-LGP) to avoid errors.
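For convenience, a small snippet that clears both directories (the paths are taken from the note above; this helper is ours, not part of the repo):

import shutil
from pathlib import Path

# Remove any previously processed data so the preprocessing scripts start fresh.
for name in ("processed_files", "successor_processed_files"):
    target = Path("aerial_lane_bezier/dataset") / name
    if target.exists():
        shutil.rmtree(target)
        target.mkdir()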

Training

Start training by running the train.py script. To train Succ-LGP, run with --experiment_type set to successor. To log to Weights and Biases, run with --log_to_wandb. See the args help for more details and options.

python train.py --help

The argparse defaults are set such that, to reproduce the training reported in our paper (and log to Weights and Biases), you should only need to run:

python train.py --log_to_wandb --wandb_run_name "Full-LGP"

for Full-LGP and:

python train.py --experiment_type "successor" --epochs 150 --log_to_wandb --wandb_run_name "Succ-LGP"

for Succ-LGP.

For training across multiple GPUs, we used Hugging Face Accelerate. This should work as-is; start training with:

accelerate launch --config_file <your/config/file.yaml> train.py <your-args>
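If you do not already have an Accelerate config file, one can be generated interactively first (this is standard Accelerate usage rather than anything specific to this repo):

accelerate config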

Inference

We provide a basic script to run inference on a random selection of eval images.

To run inference on 8 randomly selected eval images for Full-LGP, run:

python inference.py --wandb_run_name "Full-LGP"

and for Succ-LGP, run:

python inference.py --experiment_type "successor" --wandb_run_name "Succ-LGP"

Both of these will output to an image called output.png.

Note these commands assume you used the --wandb_run_name values from the training steps.
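The quickest check of the result is to open output.png directly; on a headless machine something like the following works (PIL is assumed to be available here, but any image library will do):

from PIL import Image

# Print the size and mode of the visualisation written by inference.py.
img = Image.open("output.png")
print(img.size, img.mode)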

Evaluation

Full-LGP

Per-Tile Evaluation

Quick evaluation of the Full-LGP model can be run on a per-tile basis, i.e. on 512x512 crops containing complete graphs rather than the full 5000x5000 images. To do this, run:

python evaluate.py --wandb_run_name "Full-LGP"

This will produce a results.json file in the root directory. To get aggregated results:

import json

import pandas as pd

# Load the per-sample metrics and average them over all evaluated samples.
with open("results.json", "r") as f:
    df = pd.DataFrame(json.load(f))
print(df.mean())

Aggregated Evaluation

To run the aggregation, use the aggregate.py script:

python aggregate.py --wandb_run_name "Full-LGP"

This produces a series of pickle files in the aggregated_outputs directory. To get the metrics and aggregated results, run:

python evaluate_aggregated.py --raw_dataset_root <path/to/raw/dataset>
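To take a quick look at what aggregate.py produced before running the evaluation, the files can be loaded directly. We assume here that they are standard Python pickles; the exact file names inside aggregated_outputs are not documented above.

import pickle
from pathlib import Path

# Load the first file found in the aggregation output directory and report its type.
first = next(p for p in sorted(Path("aggregated_outputs").iterdir()) if p.is_file())
with open(first, "rb") as f:
    data = pickle.load(f)
print(first.name, type(data))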

Succ-LGP

Evaluation on Succ-LGP is very similar to the per-tile version of Full-LGP. Run:

python evaluate.py --wandb_run_name "Succ-LGP" --split "eval_succ_lgp"

This will produce a results.json file in the root directory; aggregated results can be obtained in exactly the same way as for the Full-LGP per-tile evaluation above.
