PointPillars for KITTI object detection


Welcome to PointPillars.

This repo demonstrates how to reproduce the results from PointPillars: Fast Encoders for Object Detection from Point Clouds on the KITTI dataset by making the minimal required changes to the preexisting open-source codebase SECOND.

This is not an official nuTonomy codebase, but it can be used to match the published PointPillars results.

Example Results

Getting Started

This is a fork of SECOND for KITTI object detection and the relevant subset of the original README is reproduced here.

Code Support

ONLY supports Python 3.6+ and PyTorch 0.4.1+. The code has only been tested on Ubuntu 16.04/18.04.
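As a quick sanity check, you can confirm the interpreter and PyTorch versions (a generic snippet, not part of the repo):

python --version
python -c "import torch; print(torch.__version__)"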


1. Clone code

git clone https://github.com/nutonomy/second.pytorch.git

2. Install Python packages

It is recommended to use the Anaconda package manager.

First, use Anaconda to configure as many packages as possible.

conda create -n pointpillars python=3.7 anaconda
source activate pointpillars
conda install shapely pybind11 protobuf scikit-image numba pillow
conda install pytorch torchvision -c pytorch
conda install google-sparsehash -c bioconda

Then use pip for the packages missing from Anaconda.

pip install --upgrade pip
pip install fire tensorboardX

Finally, install SparseConvNet. This is not required for PointPillars, but the general SECOND codebase expects it to be correctly configured.

git clone https://github.com/facebookresearch/SparseConvNet.git
cd SparseConvNet/
bash build.sh
# NOTE: if bash build.sh fails, try bash develop.sh instead

Additionally, you may need to install Boost geometry:

sudo apt-get install libboost-all-dev

3. Set up CUDA for numba

You need to add the following environment variables for numba to your ~/.bashrc:

export NUMBAPRO_CUDA_DRIVER=/usr/lib/x86_64-linux-gnu/
export NUMBAPRO_NVVM=/usr/local/cuda/nvvm/lib64/
export NUMBAPRO_LIBDEVICE=/usr/local/cuda/nvvm/libdevice
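After editing ~/.bashrc, reload it. The second command below is an optional, generic check (not part of the repo) that numba can find your CUDA installation:

source ~/.bashrc
python -c "from numba import cuda; print(cuda.is_available())"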


Add second.pytorch/ to your PYTHONPATH.
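For example (assuming the repo was cloned into your home directory):

export PYTHONPATH=$PYTHONPATH:~/second.pytorch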

Prepare dataset

1. Dataset preparation

Download the KITTI dataset and create some directories first:

└── KITTI_DATASET_ROOT
       ├── training    <-- 7481 train data
       |   ├── image_2 <-- for visualization
       |   ├── calib
       |   ├── label_2
       |   ├── velodyne
       |   └── velodyne_reduced <-- empty directory
       └── testing     <-- 7518 test data
           ├── image_2 <-- for visualization
           ├── calib
           ├── velodyne
           └── velodyne_reduced <-- empty directory

Note: PointPillar's protos use KITTI_DATASET_ROOT=/data/sets/kitti_second/.
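For example, a minimal sketch of creating the two empty directories under that default root (adjust the root if yours differs):

mkdir -p /data/sets/kitti_second/training/velodyne_reduced
mkdir -p /data/sets/kitti_second/testing/velodyne_reduced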

2. Create kitti infos:

python create_data.py create_kitti_info_file --data_path=KITTI_DATASET_ROOT

3. Create reduced point cloud:

python create_data.py create_reduced_point_cloud --data_path=KITTI_DATASET_ROOT

4. Create groundtruth-database infos:

python create_data.py create_groundtruth_database --data_path=KITTI_DATASET_ROOT

5. Modify config file

The config file needs to be edited to point to the above datasets:

train_input_reader: {
  ...
  database_sampler {
    database_info_path: "/path/to/kitti_dbinfos_train.pkl"
    ...
  }
  kitti_info_path: "/path/to/kitti_infos_train.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}
...
eval_input_reader: {
  ...
  kitti_info_path: "/path/to/kitti_infos_val.pkl"
  kitti_root_path: "KITTI_DATASET_ROOT"
}


Train

cd ~/second.pytorch/second
python ./pytorch/train.py train --config_path=./configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
  • If you want to train a new model, make sure "/path/to/model_dir" doesn't exist.
  • If "/path/to/model_dir" does exist, training will be resumed from the last checkpoint.
  • Training only supports a single GPU.
  • Training uses a batch size of 2, which should fit in memory on most standard GPUs.
  • On a single 1080Ti, training xyres_16 requires approximately 20 hours for 160 epochs.
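If the training loop writes tensorboardX event files into model_dir (tensorboardX was installed above; the log location is an assumption), progress can likely be monitored with:

tensorboard --logdir=/path/to/model_dir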


Evaluate

cd ~/second.pytorch/second/
python pytorch/train.py evaluate --config_path=./configs/pointpillars/car/xyres_16.proto --model_dir=/path/to/model_dir
  • Detection results will be saved in model_dir/eval_results/step_xxx.
  • By default, results are stored as a result.pkl file. To save in the official KITTI label format, use --pickle_result=False.
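A minimal sketch for inspecting a pickled result file (the path placeholders follow the layout above; the internal structure of result.pkl is not documented here):

python -c "import pickle; r = pickle.load(open('/path/to/model_dir/eval_results/step_xxx/result.pkl', 'rb')); print(type(r))"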