Imitation learning

This repository provides a Tensorflow implementation of the paper End-to-end Driving via Conditional Imitation Learning.

You can find a pre-trained network here. The repository at hand adds Tensorflow training code.

There are only a few changes to the setup in the paper:

  • We train for fewer steps (190k instead of the paper's 450k), but this is configurable.
  • The branches for the controller follow the order of the training data.
  • We use different weight hyperparameters for the outputs (steer, gas, brake, speed), since the hyperparameters suggested in the paper did not work for us.
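The weighted multi-output objective mentioned above can be sketched as follows. This is a minimal illustration of the idea only: the weight values and the plain per-output squared-error form are hypothetical placeholders, not the hyperparameters or exact loss used in this repository.

```python
# Sketch: a weighted sum of per-output squared errors over the four
# network outputs. Weights below are made-up placeholders.

def weighted_loss(preds, targets, weights):
    """Sum of squared errors per output head, scaled by its weight."""
    total = 0.0
    for name, w in weights.items():
        err = preds[name] - targets[name]
        total += w * err * err
    return total

# Hypothetical weights -- NOT the values used in this repository.
WEIGHTS = {"steer": 0.5, "gas": 0.2, "brake": 0.1, "speed": 0.2}

loss = weighted_loss(
    {"steer": 0.1, "gas": 0.6, "brake": 0.0, "speed": 0.4},
    {"steer": 0.0, "gas": 0.5, "brake": 0.0, "speed": 0.5},
    WEIGHTS,
)
```

Tuning these per-output weights matters because the outputs have very different scales and error sensitivities (e.g. steering errors are far more consequential than speed-prediction errors).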


This repository uses Docker images. In order to use it, install Docker. To build the image, use:

docker build --build-arg base_image=tensorflow/tensorflow:1.12.0-gpu -t imit-learn .

If you only need a CPU image, leave out --build-arg base_image=tensorflow/tensorflow:1.12.0-gpu. So far, we have only tested the setup with Python 2, which the tensorflow:1.12.0 image is based on.

To run a container, use:

cd <root of this repository>

docker run -it --rm --name imit_learn \
    -v "$(pwd)/imitation:/imitation" \
    -v "$(pwd)/data:/data" \
    -v "$DOCKER_BASH_HISTORY:/root/.bash_history" \
    imit-learn bash

Download the dataset (24 GB), unpack it, and put the files into data/imitation_learning/h5_files/AgentHuman.

If you don't want to download all the data right away, you can try a very small subset that is contained in this repository. To set it up, run:

cd <root of this repository>
mkdir data/imitation_learning/h5_files/
cp -r imitation/test/mock_data_181018/imitation_learning/h5_files/ data/imitation_learning/h5_files/


The preprocessing converts the downloaded h5 files into tfrecord files so they are easier to use for training with Tensorflow.
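The per-frame serialization step can be sketched like this. The feature names ("image", "targets") and the frame layout are illustrative assumptions, not necessarily the keys the preprocessing code actually writes:

```python
import tensorflow as tf

def frame_to_example(image_bytes, targets):
    """Pack one frame (encoded image + float target vector) into a
    tf.train.Example, the record type stored in tfrecord files.
    Feature names here are illustrative, not the repository's schema."""
    feature = {
        "image": tf.train.Feature(
            bytes_list=tf.train.BytesList(value=[image_bytes])),
        "targets": tf.train.Feature(
            float_list=tf.train.FloatList(value=targets)),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature))

# Serialize one fake frame; a real converter would loop over the frames
# of every h5 file and write each record with a TFRecordWriter.
record = frame_to_example(b"\x00\x01", [0.1, 0.0, 0.0, 5.0]).SerializeToString()
```

The advantage of this format is that tf.data can stream and decode tfrecord files efficiently without loading whole h5 files into memory.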

During preprocessing, the data is shuffled to a certain degree. More specifically, the h5 files themselves are shuffled, but the frames inside each h5 file are not.
Shuffling across files is achieved by using a big shuffle buffer during training.
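The two shuffling stages can be illustrated in pure Python. This is a conceptual sketch only; the real pipeline relies on tf.data's Dataset.shuffle, which keeps a bounded buffer of elements and emits a randomly chosen one as new elements stream in:

```python
import random

def shuffle_buffer(stream, buffer_size, rng):
    """Approximate the behavior of a tf.data-style shuffle buffer:
    keep at most `buffer_size` items and emit a random one whenever
    the buffer overflows, then drain the remainder randomly."""
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) > buffer_size:
            yield buf.pop(rng.randrange(len(buf)))
    while buf:
        yield buf.pop(rng.randrange(len(buf)))

rng = random.Random(0)
# Four toy "h5 files" of three frames each (names are illustrative).
files = [[f"file{i}-frame{j}" for j in range(3)] for i in range(4)]
rng.shuffle(files)                        # stage 1: shuffle file order
frames = [fr for f in files for fr in f]  # frames stay ordered within a file
shuffled = list(shuffle_buffer(frames, buffer_size=5, rng=rng))  # stage 2
```

A buffer much smaller than the dataset gives only local shuffling, which is why a big buffer is needed to mix frames across files.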

Run preprocessing using:

mkdir -p /data/imitation_learning/preprocessed/
python /imitation/ --preproc_config_paths=config-preprocess-production.yaml

This might run for a while and consume a lot of CPU power. To simply check that the preprocessing code runs, set --preproc_config_paths=config-preprocess-debug.yaml.


In order for the training to run, training and validation data need to be in the right place as described above. To run training with the best hyperparameters on the entire dataset, run:

python --config_paths=config-train-production.yaml

To debug training, use:

python --config_paths=config-train-debug.yaml

