# Semantic Predictive Control for Explainable and Efficient Policy Learning

[paper] / [video demo]

Semantic predictive control (SPC) is a policy learning framework that predicts future semantic segmentation and events by aggregating multi-scale feature maps. It utilizes dense supervision from semantic segmentation for feature learning and greatly improves policy learning efficiency. The learned features are explainable as they depict future scenes with semantic segmentation and explicit events.

This repository contains a PyTorch implementation of SPC, as well as some training scripts to reproduce policy learning results reported in our paper.

## Overview

Our model is composed of four sub-modules (a minimal schematic sketch follows the list):
  1. The feature extraction module extracts multi-scale intermediate features from RGB observations;
  2. The multi-scale prediction module concatenates the extracted features with tiled actions and sequentially predicts future features;
  3. The information prediction module takes the predicted latent features and outputs the corresponding future-frame semantic segmentation along with task-related signals such as collision, off-road, and speed;
  4. The guidance network module predicts an action distribution for efficient sampling-based optimization.
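
A minimal PyTorch sketch of how these four pieces could fit together. Layer sizes, channel counts, and head dimensions here are illustrative assumptions, not the repository's actual architecture, which lives in `models/`:

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for illustration only.
NUM_CLASSES = 13   # semantic segmentation classes (assumed)
NUM_ACTIONS = 2    # e.g. steering and throttle (assumed)

class FeatureExtractor(nn.Module):
    """Sub-module 1: multi-scale features from an RGB observation."""
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, rgb):
        f1 = self.stage1(rgb)   # higher-resolution features
        f2 = self.stage2(f1)    # lower-resolution features
        return [f1, f2]

class FeaturePredictor(nn.Module):
    """Sub-module 2: predict next-step features from features + tiled action."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels + NUM_ACTIONS, channels, 3, padding=1)

    def forward(self, feat, action):
        b, _, h, w = feat.shape
        # Tile the action vector over the spatial dimensions, then fuse.
        tiled = action.view(b, NUM_ACTIONS, 1, 1).expand(b, NUM_ACTIONS, h, w)
        return torch.relu(self.conv(torch.cat([feat, tiled], dim=1)))

class InfoPredictor(nn.Module):
    """Sub-module 3: segmentation + event signals from predicted features."""
    def __init__(self, channels):
        super().__init__()
        self.seg_head = nn.Conv2d(channels, NUM_CLASSES, 1)
        self.event_head = nn.Linear(channels, 3)  # collision, off-road, speed

    def forward(self, feat):
        seg = self.seg_head(feat)
        pooled = feat.mean(dim=(2, 3))  # global average pool for event heads
        return seg, self.event_head(pooled)

class GuidanceNetwork(nn.Module):
    """Sub-module 4: action distribution for sampling-based optimization."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Linear(channels, NUM_ACTIONS * 2)  # mean and log-std

    def forward(self, feat):
        mean, log_std = self.fc(feat.mean(dim=(2, 3))).chunk(2, dim=1)
        return torch.distributions.Normal(mean, log_std.exp())

if __name__ == "__main__":
    rgb = torch.randn(1, 3, 128, 128)
    action = torch.randn(1, NUM_ACTIONS)
    feats = FeatureExtractor()(rgb)
    nxt = FeaturePredictor(64)(feats[-1], action)
    seg, events = InfoPredictor(64)(nxt)
    print(seg.shape, events.shape)  # (1, 13, 32, 32), (1, 3)
```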

## Training

Our results in the paper can be reproduced with the provided scripts by running:

```bash
cd scripts/
bash train_#ENVNAME.sh
```
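
For example, assuming the script in `scripts/` follows this naming, `bash train_carla.sh` would launch training on the CARLA environment; substitute `#ENVNAME` with the environment you want to train on.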

The corresponding simulator environment must be running alongside the training scripts:

### Carla

To train on CARLA, start the CARLA simulator first. Here is an example launch with default settings:

```bash
### On Ubuntu
SDL_VIDEODRIVER=offscreen SDL_HINT_CUDA_DEVICE=0 ./CarlaUE4.sh -carla-settings=Example.CarlaSettings.ini -windowed -ResX=256 -ResY=256 -carla-server -carla-no-hud

### On Windows
CarlaUE4.exe -windowed -ResX=800 -ResY=600 -carla-server -carla-no-hud -carla-settings=Example.CarlaSettings.ini
```

By default, the CARLA simulator listens on port 2000, and the `--port` argument passed to our code should be set to match.
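
Before launching training, it can help to confirm the simulator is actually accepting connections. A minimal standard-library check (the host and port below are assumptions matching the defaults above):

```python
import socket

def carla_is_up(host="localhost", port=2000, timeout=2.0):
    """Return True if something is listening on the given host/port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("CARLA reachable:", carla_is_up())
```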

## Evaluation

To evaluate the latest saved model and produce a demo, run `main.py` with the `--eval` flag. Then, to turn the saved snapshot images into a demo video, run:

```bash
python merge_demo.py
```
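
For reference, the core idea behind stitching snapshot images into a video can be sketched with OpenCV as below; the frame directory, filenames, output name, and frame rate are assumptions for illustration, not the actual defaults of `merge_demo.py`:

```python
import glob
import cv2

# Illustrative sketch only: merge_demo.py in this repo defines its own
# input/output paths and layout. Paths below are assumptions.
frames = sorted(glob.glob("demo/*.png"))
assert frames, "no snapshot images found"

height, width = cv2.imread(frames[0]).shape[:2]
writer = cv2.VideoWriter(
    "demo.mp4",
    cv2.VideoWriter_fourcc(*"mp4v"),
    15,                  # frames per second (assumed)
    (width, height),
)
for path in frames:
    writer.write(cv2.imread(path))
writer.release()
```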