Kinematic 3D Object Detection in Monocular Video

Garrick Brazil, Gerard Pons-Moll, Xiaoming Liu, Bernt Schiele


Source code for our ECCV 2020 paper, as detailed in the arXiv report. Please also visit our project page.

Much of the code in this project is derived from M3D-RPN, so the setup and organization are very similar.

Our framework is implemented and tested on Ubuntu 16.04 with Python 3 and an NVIDIA 1080 Ti GPU. Unless otherwise stated, the scripts and instructions below assume the working directory is the project root.

If you utilize this framework, please cite our ECCV 2020 paper:

    @inproceedings{brazil2020kinematic,
        title={Kinematic 3D Object Detection in Monocular Video},
        author={Brazil, Garrick and Pons-Moll, Gerard and Liu, Xiaoming and Schiele, Bernt},
        booktitle={Proceedings of European Conference on Computer Vision},
        year={2020}
    }


  • Cuda & Python

    In this project we use PyTorch with Python 3 and a few Anaconda packages. Please review and follow the installation guide, though feel free to try alternative versions or modes of installation.

  • KITTI Data

    Download the full KITTI detection dataset including:

    • left color images of object data set (12 GB)
    • 3 temporally preceding frames (left color) (36 GB)
    • camera calibration matrices of object data set (16 MB)
    • training labels of object data set (5 MB)

    Then place a softlink (or the actual data) in kinematic3d/data/kitti.

     cd kinematic3d
     ln -s /path/to/kitti data/kitti

    Then use the following scripts to extract the data splits, which use softlinks to the above directory for efficient storage.

    python data/kitti_split1/
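As a concrete illustration of the softlink-based split layout, here is a minimal Python sketch; the directory names and file naming below are assumptions for illustration, not the repo's actual split code.

```python
import os
import tempfile

def link_split(src_dir, dst_dir, image_ids):
    """Create one softlink per image id in dst_dir pointing into src_dir,
    so a data split is stored without duplicating the images."""
    os.makedirs(dst_dir, exist_ok=True)
    for im_id in image_ids:
        name = im_id + ".png"
        src = os.path.join(src_dir, name)
        dst = os.path.join(dst_dir, name)
        if not os.path.islink(dst):
            os.symlink(src, dst)

# tiny demo in a temporary directory
root = tempfile.mkdtemp()
src = os.path.join(root, "training", "image_2")
os.makedirs(src)
for im_id in ["000000", "000001"]:
    open(os.path.join(src, im_id + ".png"), "wb").close()

split_dir = os.path.join(root, "split1", "training", "image_2")
link_split(src, split_dir, ["000000"])   # only 000000 belongs to this split
```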

    Next, build the KITTI devkit eval for each split.

     sh data/kitti_split1/devkit/cpp/
  • KITTI Raw

    Download the extra information, such as extracted pose, calibration, and detection labels (when available), which we provide. Then extract this folder to data/kitti/raw_extra and data/kitti_split1/raw_extra.

    We use Matlab and slightly modified versions of the KITTI Matlab devkit to extract this information. If you wish to re-extract it yourself, you will need to download the calibration and tracklets from the appropriate dates within the raw dataset.

    Then you can modify and run the provided scripts in the matlab folder of the downloaded zip file:

    1. Specifically, the file matlab/run_demoVehiclePath.m will extract the ego-motion pose changes.

    2. The file matlab/convert_tracklets_to_detections.m will extract the tracklet boxes (only needed if you want to compute ground-truth velocity, etc.; the tracklets are not directly used otherwise).
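As a side note on ground-truth velocity: given a tracklet box center in two consecutive frames, a per-axis velocity follows from a simple finite difference. The sketch below assumes KITTI's ~10 Hz frame rate (dt = 0.1 s) and 3D centers in meters; it is an illustration, not the devkit's code.

```python
def gt_velocity(center_prev, center_curr, dt=0.1):
    """Per-axis velocity (m/s) between two 3D box centers dt seconds apart."""
    return tuple((c - p) / dt for p, c in zip(center_prev, center_curr))

# a box that moves 0.5 m forward (z-axis) between consecutive frames
v = gt_velocity((2.0, 1.5, 10.0), (2.0, 1.5, 10.5))
```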

  • Misc

    Lastly, build the NMS modules:

    cd lib/nms


Training is split into warmup, uncertainty, and full pose phases. Review the configurations in scripts/config for details.

# First train the warmup
python scripts/ --config=kitti_3d_warmup

# Then train the model with uncertainty
python scripts/ --config=kitti_3d_uncertainty

# Lastly train the full pose estimation 
python scripts/ --config=kitti_3d_full

If your training is accidentally stopped, you can resume from a checkpoint snapshot with the restore flag. For example, to resume training starting at iteration 10k, use the following command.

python scripts/ --config=kitti_3d_uncertainty --restore=10000
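A restore flag like this typically maps the requested iteration to the most recent saved snapshot at or before it. The sketch below is a hypothetical illustration of that lookup, not the repo's actual logic.

```python
def latest_snapshot(saved_iters, restore_iter):
    """Return the newest saved iteration not exceeding the requested
    restore point, or None if no such snapshot exists."""
    valid = [i for i in saved_iters if i <= restore_iter]
    return max(valid) if valid else None

best = latest_snapshot([5000, 10000, 15000], 10000)  # -> 10000
```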


We provide models for the main experiments on the val1 / test data splits, available for download here.

Testing requires paths to the configuration file and model weights, which are exposed as variables near the top of scripts/ To test a configuration and model, simply update the variables and run the test file as below.

python scripts/ 
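The exposed variables might look like the following; the names and paths here are assumptions for illustration, not the repo's actual code.

```python
# Hypothetical example of the variables exposed near the top of a test
# script; update these to point at your own run before testing.
conf_path = "output/kitti_3d_uncertainty/conf.pkl"                # saved training configuration
weights_path = "output/kitti_3d_uncertainty/weights/model_final"  # trained model weights
```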

Similarly, we also provide a script to test only the 3D RPN:

python scripts/ 

We also provide code to help with visualization (for example, in video).

python scripts/ 


For questions, feel free to post here or directly contact the authors: {brazilga, liuxm}, {gpons, schiele}.

