https://sites.google.com/view/contingency-planning/home
- Accompanying code for the ICRA 2021 paper: Contingencies from Observations.
- A framework for running scenarios with PRECOG models in CARLA.
This repository requires CARLA 0.9.8. Please navigate to carla.org to download the correct packages, or do the following:
```bash
# Download the hosted binaries
wget https://carla-releases.s3.eu-west-3.amazonaws.com/Linux/CARLA_0.9.8.tar.gz
# Unpack the CARLA 0.9.8 download
tar -xvzf CARLA_0.9.8.tar.gz -C /path/to/your/desired/carla/install
```
Once downloaded, make sure that `CARLAROOT` is set to point to your copy of CARLA:
```bash
export CARLAROOT=/path/to/your/carla/install
```
`CARLAROOT` should point to the base directory, such that the output of `ls $CARLAROOT` shows the following files:
```
CarlaUE4     CHANGELOG   Engine  Import           LICENSE                        PythonAPI  Tools
CarlaUE4.sh  Dockerfile  HDMaps  ImportAssets.sh  Manifest_DebugFiles_Linux.txt  README     VERSION
```
Next, create a conda environment and install the Python dependencies:
```bash
conda create -n precog python=3.6.6
conda activate precog
# Source this every time after activating, and make sure $CARLAROOT is set beforehand
source precog_env.sh
pip install -r requirements.txt
```
Note that `CARLAROOT` needs to be set and `source precog_env.sh` needs to be run every time you activate the conda env in a new window/shell.
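To confirm the environment is wired up correctly, a quick check like the one below can help. This is a hypothetical helper, not part of the repo; it assumes `precog_env.sh` is what puts CARLA's Python package on the path.

```python
# sanity_check_env.py -- hypothetical helper, not part of this repo.
# Verifies that CARLAROOT is set and that the carla package is importable.
import os
import sys

carla_root = os.environ.get("CARLAROOT")
if not carla_root or not os.path.isdir(carla_root):
    sys.exit("CARLAROOT is unset or does not point to a directory")

try:
    import carla  # made importable by sourcing precog_env.sh
except ImportError:
    sys.exit("carla is not importable -- did you run `source precog_env.sh`?")

print("OK: CARLAROOT=%s, carla=%s" % (carla_root, carla.__file__))
```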
Before running any of the experiments, you need to launch the CARLA server:
```bash
cd $CARLAROOT
./CarlaUE4.sh
```
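You can verify the server is accepting connections before launching an experiment. The snippet below is a minimal sketch that assumes the default host and port (localhost:2000) used by CARLA:

```python
# check_server.py -- hypothetical helper, not part of this repo.
import carla

client = carla.Client("localhost", 2000)  # default CARLA RPC endpoint
client.set_timeout(10.0)                  # fail fast if the server is not up
world = client.get_world()
print("Connected; current map:", world.get_map().name)
```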
The dataset used to train the models in the paper can be downloaded at this link.
Alternatively, data can be generated in CARLA via the `scenario_runner.py` script:
```bash
cd Experiment
python scenario_runner.py \
  --enable-collecting \
  --scenario 0 \
  --location 0
```
Episode data will be stored in the `Experiment/Data` folder.
Then run:
```bash
cd Experiment
python Utils/prepare_data.py
```
This will convert the episode data objects into one JSON file per frame and store them in the `Data/JSON_output` folder.
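To spot-check the conversion, you can open one of the per-frame files and inspect its contents. This is an illustrative snippet; the exact key names come from the PRECOG feed format, so treat the printed output as the source of truth:

```python
# inspect_frame.py -- hypothetical helper, not part of this repo.
import json

with open("Data/JSON_output/feed_Episode_1_frame_90.json") as f:
    frame = json.load(f)

# Print each top-level key with a rough type/size summary of its value.
for key, value in frame.items():
    size = len(value) if hasattr(value, "__len__") else value
    print(key, type(value).__name__, size)
```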
The CfO model/architecture code is contained in the `precog` folder and is based on the PRECOG repository, with several key differences:
- The architecture uses a CNN to process the LiDAR range map as contextual input, instead of a feature map (see `precog/bijection/social_convrnn.py`; an illustrative sketch follows this list).
- The social features also include velocity and acceleration information for the agents (see `precog/bijection/social_convrnn.py`).
- The plotting script visualizes samples in a fixed set of coordinates with the LiDAR overlaid on top (see `precog/plotting/plot.py`).
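For intuition, the sketch below shows the general shape of such a context encoder: a small CNN that maps a LiDAR range map to a flat feature vector that can be fed to the rest of the model. The layer sizes and input shape are invented for illustration; the actual architecture lives in `precog/bijection/social_convrnn.py`:

```python
# Illustrative only: a tiny CNN context encoder for a LiDAR range map.
# All layer sizes and the input shape are made up for this sketch.
import tensorflow as tf

def build_lidar_context_encoder(height=64, width=64, channels=2, feature_dim=128):
    """Map an (H, W, C) LiDAR range map to a flat context feature vector."""
    inputs = tf.keras.Input(shape=(height, width, channels))
    x = tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu")(x)
    x = tf.keras.layers.Conv2D(64, 3, strides=2, activation="relu")(x)
    x = tf.keras.layers.Flatten()(x)
    outputs = tf.keras.layers.Dense(feature_dim, activation="relu")(x)
    return tf.keras.Model(inputs, outputs)

encoder = build_lidar_context_encoder()
encoder.summary()
```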
Organize the JSON files into the following structure:
```
Custom_Dataset
├── train
│   ├── feed_Episode_1_frame_90.json
│   └── ...
├── test
│   └── ...
└── val
    └── ...
```
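If you prefer to build the split programmatically, a simple shuffle-and-copy along the following lines works. This is a hypothetical helper with an assumed 80/10/10 split; adjust the ratios and paths as needed:

```python
# make_splits.py -- hypothetical helper, not part of this repo.
import glob
import os
import random
import shutil

random.seed(0)  # make the split reproducible
files = sorted(glob.glob("Data/JSON_output/*.json"))
random.shuffle(files)

n = len(files)
splits = {
    "train": files[: int(0.8 * n)],
    "val": files[int(0.8 * n) : int(0.9 * n)],
    "test": files[int(0.9 * n) :],
}
for split, split_files in splits.items():
    out_dir = os.path.join("Custom_Dataset", split)
    os.makedirs(out_dir, exist_ok=True)
    for path in split_files:
        shutil.copy(path, out_dir)
```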
Modify the relevant `precog/conf` files to insert the correct absolute paths:
```
Custom_Dataset.yaml
esp_infer_config.yaml
esp_train_config.yaml
shared_gpu.yaml
sgd_optimizer.yaml  # set training hyperparameters here
```
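Since a stale absolute path is the most common failure mode here, a quick scan like the one below can flag any path-like value that no longer exists. This is a hypothetical helper, assuming PyYAML is installed; it simply checks every string value in each config file:

```python
# check_conf_paths.py -- hypothetical helper, not part of this repo.
import glob
import os
import yaml

def iter_strings(node):
    """Yield every string value nested anywhere in a parsed YAML document."""
    if isinstance(node, str):
        yield node
    elif isinstance(node, dict):
        for value in node.values():
            yield from iter_strings(value)
    elif isinstance(node, list):
        for item in node:
            yield from iter_strings(item)

for conf in glob.glob("precog/conf/*.yaml"):
    with open(conf) as f:
        doc = yaml.safe_load(f)
    for value in iter_strings(doc):
        if value.startswith("/") and not os.path.exists(value):
            print("%s: missing path %s" % (conf, value))
```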
Then launch training (`$PRECOGROOT` should point to the root of this repository):
```bash
export CUDA_VISIBLE_DEVICES=0
python $PRECOGROOT/precog/esp_train.py \
  dataset=Custom_Dataset \
  main.eager=False \
  bijection.params.A=2 \
  optimizer.params.plot_before_train=True \
  optimizer.params.save_before_train=True
```
To evaluate a trained model in the CARLA simulator, run:
```bash
cd Experiment
python scenario_runner.py \
  --enable-inference \
  --enable-control \
  --enable-recording \
  --checkpoint_path [absolute path to model checkpoint] \
  --model_path [absolute path to model folder] \
  --replan 4 \
  --planner_type 0 \
  --scenario 0 \
  --location 0
```
A checkpoint of the model used in the paper is provided in `Model/esp_train_results`.
The example script `test.sh` will run the experiments from the paper and generate a video for each one. For reference, on a Titan RTX GPU and an Intel i9-10900K CPU, each episode takes approximately 10 minutes, and the entire script takes several hours to run to completion.
Install the MFP baseline repo, and set `MFPROOT` to point to your copy:
```bash
export MFPROOT=/your/copy/of/mfp
```
Use the `scenario_runner_mfp.py` script to run the MFP model inside the CARLA scenarios:
```bash
# left turn
python scenario_runner_mfp.py \
  --enable-inference \
  --enable-control \
  --enable-recording \
  --replan 4 \
  --scenario 0 \
  --location 0 \
  --mfp_control \
  --mfp_checkpoint CARLA_left_turn_scenario

# right turn
python scenario_runner_mfp.py \
  --enable-inference \
  --enable-control \
  --enable-recording \
  --replan 4 \
  --scenario 2 \
  --location 0 \
  --mfp_control \
  --mfp_checkpoint CARLA_right_turn_scenario

# overtake
python scenario_runner_mfp.py \
  --enable-inference \
  --enable-control \
  --enable-recording \
  --replan 4 \
  --scenario 1 \
  --location 0 \
  --mfp_control \
  --mfp_checkpoint CARLA_overtake_scenario
```
To cite this work, use:
```bibtex
@inproceedings{rhinehart2021contingencies,
  title={Contingencies from Observations: Tractable Contingency Planning with Learned Behavior Models},
  author={Nicholas Rhinehart and Jeff He and Charles Packer and Matthew A. Wright and Rowan McAllister and Joseph E. Gonzalez and Sergey Levine},
  booktitle={International Conference on Robotics and Automation (ICRA)},
  organization={IEEE},
  year={2021}
}
```