Active Perception using Light Curtains for Autonomous Driving

Project Website: https://siddancha.github.io/projects/active-perception-light-curtains


This is the official code for our paper:

Siddharth Ancha, Yaadhav Raaj, Peiyun Hu, Srinivasa G. Narasimhan, and David Held.
Active Perception using Light Curtains for Autonomous Driving.
In European Conference on Computer Vision (ECCV), August 2020.

Installation

  1. Clone the repository.
git clone git@github.com:siddancha/active-perception-light-curtains.git
  2. Install pylc.
cd /path/to/second.pytorch/pylc
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release .. && make
  3. Install spconv (see the build sketch after this list).

  4. Add the required paths to $PYTHONPATH.

export PYTHONPATH=$PYTHONPATH:/path/to/second.pytorch
export PYTHONPATH=$PYTHONPATH:/path/to/second.pytorch/pylc
export PYTHONPATH=$PYTHONPATH:/path/to/spconv
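
For step 3, here is a minimal sketch of building spconv from source (the repository URL and build steps are assumptions based on SECOND-era spconv releases; follow spconv's own README for the authoritative, version-specific instructions):

git clone --recursive https://github.com/traveller59/spconv.git /path/to/spconv
cd /path/to/spconv
# building the wheel requires cmake and a CUDA toolchain
python setup.py bdist_wheel
pip install dist/*.whl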

Data preparation

Download the Virtual KITTI and SYNTHIA-AL datasets into folders named vkitti and synthia. Then create the info files containing their metadata using the following commands:

export DATADIR=/path/to/synthia/and/vkitti/datasets

# create info files for the Virtual KITTI dataset
python ./data/vkitti_dataset.py create_vkitti_info_file \
    --datapath=$DATADIR/vkitti

# create info files for the SYNTHIA dataset
python ./data/synthia_dataset.py create_synthia_info_file \
    --datapath=$DATADIR/synthia
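
The commands above assume both datasets sit side by side under $DATADIR (a layout inferred from the commands, not stated elsewhere in the repo):

ls $DATADIR
# expected output: synthia/  vkitti/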

Training

To train a model, run the following commands:

cd second
python ./pytorch/train.py train \
    --config_path=./configs/{dataset}/second/{experiment}.yaml \
    --model_dir=/path/to/save/model \
    --display_step=100

where {dataset} is either vkitti or synthia. We will be releasing our pre-trained models shortly.
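
For example, a concrete run on Virtual KITTI might look like the following (the config name baseline.yaml and the model directory are hypothetical; pick a config file that actually exists under second/configs/vkitti/second/):

cd second
python ./pytorch/train.py train \
    --config_path=./configs/vkitti/second/baseline.yaml \
    --model_dir=/path/to/models/vkitti_baseline \
    --display_step=100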

Evaluation

To evaluate a model, run the following commands:

cd second
python ./pytorch/train.py evaluate \
    --config_path=./configs/{dataset}/second/{experiment}.yaml \
    --model_dir=/path/to/saved/model \
    --result_path=/path/to/save/evaluation/results \
    --info_path=/info/path/of/dataset/split
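
For example, to evaluate a trained Virtual KITTI model on its validation split (the file names below are hypothetical; point --info_path at the info file generated during data preparation):

cd second
python ./pytorch/train.py evaluate \
    --config_path=./configs/vkitti/second/baseline.yaml \
    --model_dir=/path/to/models/vkitti_baseline \
    --result_path=/path/to/models/vkitti_baseline/eval_results \
    --info_path=$DATADIR/vkitti/vkitti_infos_val.pkl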

Launch all experiments on slurm

To facilitate reproducibility, we have created a script that launches all the experiments included in our paper on a compute cluster managed by slurm. To launch them, run the following:

cd second
python ./launch_all_exp.py

This will automatically schedule training of all experiments using sbatch commands provided in second/sbatch.py.
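
Conceptually, each scheduled job amounts to submitting a batch script along these lines (the resource requests and paths below are hypothetical, not the exact scripts used in the paper; the real commands live in second/sbatch.py):

#!/bin/bash
#SBATCH --job-name=lc-train
#SBATCH --gres=gpu:1
#SBATCH --time=48:00:00
# hypothetical single-experiment job
cd /path/to/second.pytorch/second
python ./pytorch/train.py train \
    --config_path=./configs/{dataset}/second/{experiment}.yaml \
    --model_dir=/path/to/save/model \
    --display_step=100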

Notes

  • The codebase uses Python 3.7.
  • The codebase is built upon the SECOND repository.
