INFER: INtermediate representations for FuturE pRediction

Shashank Srikanth, Junaid Ahmed Ansari, R. Karnik Ram, Sarthak Sharma, J. Krishna Murthy, and K. Madhava Krishna


This repository contains the code and data required to reproduce the results of INFER: INtermediate representations for FuturE pRediction (arXiv).

Datasets

[Figure: the five channels of the intermediate representation]

To use this code, you need to download the intermediate-representation datasets used by our network. The intermediate representation has five channels, as shown in the figure above, generated from the semantic and instance segmentation of the image along with depth. Most of the channels are self-explanatory; the other-vehicles channel encodes the positions of the other vehicles in the scene.
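For intuition, here is a minimal sketch of how such a representation might be assembled into the (5, H, W) tensor a network would consume. The per-frame file layout and channel names below are hypothetical; the actual dataset defines its own format.

import os
import numpy as np
import torch
from PIL import Image

# Illustrative channel names only; consult the dataset for the real layout.
CHANNELS = ["lane", "road", "obstacles", "target_vehicle", "other_vehicles"]

def load_intermediate_representation(frame_dir):
    # Read each single-channel map and stack into a (5, H, W) float tensor.
    maps = [np.asarray(Image.open(os.path.join(frame_dir, name + ".png")), dtype=np.float32)
            for name in CHANNELS]
    return torch.from_numpy(np.stack(maps, axis=0))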

The intermediate representations have been generated for all three datasets: KITTI, Cityscapes, and Oxford RobotCar.

The dataset of intermediate representations is available here.

You can find the corresponding semantic segmentation, instance segmentation, and disparity data here.

Installation

The code has been tested with Python 3 and PyTorch 0.4.1.

To install all the required packages, create a virtualenv and install the dependencies listed in requirements.txt:

virtualenv -p python3.5 venv
source venv/bin/activate
pip install -r requirements.txt
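To confirm that the expected PyTorch version was installed, you can run:

python -c "import torch; print(torch.__version__)"   # expect 0.4.1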

Running the demo scripts

To run our evaluation for KITTI, run infer-main.ipynb after setting the repo_dir and data_dir paths in the code. repo_dir is the absolute path to the repository root on your machine; data_dir is the absolute path to the corresponding dataset.
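For example, near the top of the notebook (the paths below are placeholders; substitute your own):

repo_dir = "/home/username/INFER"   # absolute path to this repository
data_dir = "/home/username/kitti"   # absolute path to the dataset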

The scripts for transfer to Cityscapes and Oxford can be run in the same way. Run a different notebook to evaluate the models on each dataset, as shown below:

  • infer-main.ipynb: KITTI results
  • infer-transfer.ipynb: Cityscapes transfer
  • oxford-test.ipynb: Oxford RobotCar transfer
  • baseline.ipynb: Baseline KITTI results
  • baseline-transfer.ipynb: Baseline Cityscapes transfer results
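Each of these is a Jupyter notebook; launch one with, for example:

jupyter notebook infer-main.ipynb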

Training

Training the network on a single split of KITTI takes about 8-10 hours on an NVIDIA GeForce GTX 1080 Ti GPU.

You can run the training code as follows:

python train.py -expID split-0 -nepochs 60 -dataDir /home/username/kitti \
    -optMethod adam -initType default -lr 0.000100 -momentum 0.900000 -beta1 0.90000 \
    -modelType skipLSTM -groundTruth True -imageWidth 256 -imageHeight 256 \
    -scaleFactor False -gradClip 10 -seqLen 1 \
    -csvDir /home/pravin.mali/merged/final-validation/ \
    -trainPath train0.csv -valPath test0.csv -minMaxNorm False

The available parameters are listed in args.py. To try the various ablation studies, set the corresponding channel arguments (such as lane) to False, as in the example below.
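A hypothetical lane-ablation run, assuming args.py exposes a boolean lane flag as described above (the remaining flags follow the training example):

python train.py -expID split-0-no-lane -nepochs 60 -dataDir /home/username/kitti \
    -modelType skipLSTM -trainPath train0.csv -valPath test0.csv -lane False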

Pretrained Models

Pretrained models are available on request.

Project Page

For more plots, tables, and the project video, refer to the project page here.
