API for SemanticKITTI

This repository contains helper scripts to open, visualize, process, and evaluate results for point clouds and labels from the SemanticKITTI dataset.

Example of a 3D point cloud from sequence 13:

Example of a 2D spherical projection from sequence 13:

Data organization

The data is organized in the following format:

          └── sequences/
                  ├── 00/
                  │   ├── poses.txt
                  │   ├── image_2/
                  │   ├── image_3/
                  │   ├── labels/
                  │   │     ├ 000000.label
                  │   │     └ 000001.label
                  │   └── velodyne/
                  │         ├ 000000.bin
                  │         └ 000001.bin
                  ├── 01/
                  ├── 02/
                  └── 21/
  • From KITTI Odometry:
    • image_2 and image_3 correspond to the rgb images for each sequence.
    • velodyne contains the point clouds for each scan in each sequence. Each .bin scan is a list of float32 points in [x,y,z,remission] format. See the scan-reading code in this repository for how the points are read.
  • From SemanticKITTI:
    • labels contains the labels for each scan in each sequence. Each .label file contains a uint32 label for each point in the corresponding .bin scan. See the label-reading code in this repository for how the labels are read.
    • poses.txt contains the manually loop-closed poses for each capture (in the camera frame) that were used in the annotation tools to aggregate all the point clouds.
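Both binary formats above can be read with plain NumPy. A minimal sketch (file paths are placeholders; the split of each uint32 label into a lower 16-bit semantic label and an upper 16-bit instance id follows the dataset's documented encoding):

```python
import numpy as np

def read_scan(path):
    """Read a .bin scan: flat float32 array reshaped to (N, 4) = [x, y, z, remission]."""
    return np.fromfile(path, dtype=np.float32).reshape(-1, 4)

def read_labels(path):
    """Read a .label file: one uint32 per point.

    The lower 16 bits hold the semantic label, the upper 16 bits an instance id.
    """
    raw = np.fromfile(path, dtype=np.uint32)
    semantic = raw & 0xFFFF
    instance = raw >> 16
    return semantic, instance
```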

The main configuration file for the data is in config/semantic-kitti.yaml. In this file you will find:

  • labels: dictionary which maps numeric labels in .label file to a string class. Example: 10: "car"
  • color_map: dictionary which maps numeric labels in .label file to a bgr color for visualization. Example: 10: [245, 150, 100] # car, blue-ish
  • content: dictionary with the content of each class in labels, as a ratio to the total number of points in the dataset. This can be obtained by running the ./ script, and is used to calculate the weights for the cross entropy in all baseline methods (in order to handle class imbalance).
  • learning_map: dictionary which maps each class label to its cross entropy equivalent, for learning. This is done to mask undesired classes, map different classes together, and because the cross entropy expects a value in [0, numclasses - 1]. We also provide ./, a script that uses this dictionary to put the label files in the cross entropy format, so that you can use the labels directly in your training pipeline. Examples:
      0 : 0     # "unlabeled"
      1 : 0     # "outlier" to "unlabeled" -> gets ignored in training, with unlabeled
      10: 1     # "car"
      252: 1    # "moving-car" to "car" -> gets merged with static car class
  • learning_map_inv: dictionary with the inverse of the previous mapping; it allows mapping the classes back to the ones of interest (for saving point cloud predictions in the original label format). We also provide ./, a script that uses this dictionary to put the label files back in the original format when invoked with the --inverse flag.
  • learning_ignore: dictionary that contains, for each cross entropy class, whether it is ignored during training and evaluation. For example, class unlabeled gets ignored in both training and evaluation.
  • split: contains 3 lists, with the sequence numbers for training, validation, and evaluation.
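A common way to apply learning_map efficiently is a NumPy lookup table indexed by the raw label. A sketch using the example entries above (the dictionary excerpt is taken from the examples in this section, not the full config):

```python
import numpy as np

# Excerpt of learning_map as shown in config/semantic-kitti.yaml above.
learning_map = {0: 0, 1: 0, 10: 1, 252: 1}

# Build a dense lookup table: index = raw label, value = cross-entropy class.
lut = np.zeros(max(learning_map) + 1, dtype=np.uint32)
for raw, mapped in learning_map.items():
    lut[raw] = mapped

raw_labels = np.array([10, 252, 1, 0], dtype=np.uint32)
train_labels = lut[raw_labels]  # maps to [1, 1, 0, 0]
```

The same table built from learning_map_inv maps predictions back to the original label format.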

Dependencies for API:

System dependencies

$ sudo apt install python3-dev python3-pip python3-pyqt5.qtopengl # for visualization

Python dependencies

$ sudo pip3 install -r requirements.txt




Visualization

To visualize the data, use the provided visualization script. It will open an interactive OpenGL visualization of the point clouds along with a spherical projection of each scan into a 64 x 1024 range image.

$ ./ --sequence 00 --dataset /path/to/kitti/dataset/


  • sequence is the sequence to be accessed.
  • dataset is the path to the kitti dataset where the sequences directory is.


Keyboard controls:

  • n is next scan,
  • b is previous scan,
  • esc or q exits.
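The 64 x 1024 spherical projection maps each point to a pixel by its azimuth and elevation. A simplified sketch; the vertical field-of-view bounds below are assumptions typical for the HDL-64E sensor, and the real script handles sensor configuration more carefully:

```python
import numpy as np

def spherical_projection(points, H=64, W=1024, fov_up=3.0, fov_down=-25.0):
    """Project (N, 4) [x, y, z, remission] points into an H x W range image."""
    fov_up, fov_down = np.radians(fov_up), np.radians(fov_down)
    fov = fov_up - fov_down
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points[:, :3], axis=1)
    yaw = np.arctan2(y, x)                        # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / depth, -1, 1))  # elevation
    u = (0.5 * (1.0 - yaw / np.pi) * W).astype(int) % W     # image column
    v = ((1.0 - (pitch - fov_down) / fov) * H).astype(int)  # image row
    v = np.clip(v, 0, H - 1)
    image = np.zeros((H, W), dtype=np.float32)
    image[v, u] = depth  # store range; later points overwrite earlier ones
    return image
```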

In order to visualize your predictions instead, the --predictions option replaces visualization of the labels with the visualization of your predictions:

$ ./ --sequence 00 --dataset /path/to/kitti/dataset/ --predictions /path/to/your/predictions


Evaluation

To evaluate the predictions of a method, use the evaluation script. Important: the labels and the predictions need to be in the original label format, which means that if a method learns the cross-entropy mapped classes, its outputs need to be passed through the learning_map_inv dictionary to get back to the original dataset format. This prevents changes in the dataset's classes of interest from affecting the intermediate outputs of approaches, since the original labels stay the same. We provide a remapping script to make this shift before training, and once again before evaluation, selecting the classes of interest in the configuration file. The data needs to be either:

  • In a separate directory with this format:

              └── sequences
                  ├── 00
                  │   └── predictions
                  │         ├ 000000.label
                  │         └ 000001.label
                  ├── 01
                  ├── 02
                  └── 21

    And run:

    $ ./ --dataset /path/to/kitti/dataset/ --predictions /path/to/method_predictions --split train/valid/test # depending on the desired split to evaluate
  • In the same directory as the dataset

              ├── poses
              └── sequences
                  ├── 00
                  │   ├── image_2
                  │   ├── image_3
                  │   ├── labels
                  │   │     ├ 000000.label
                  │   │     └ 000001.label
                  │   ├── predictions
                  │   │     ├ 000000.label
                  │   │     └ 000001.label
                  │   └── velodyne
                  │         ├ 000000.bin
                  │         └ 000001.bin
                  ├── 01
                  ├── 02
                  └── 21

    And run (which sets the predictions directory as the same directory as the dataset):

    $ ./ --dataset /path/to/kitti/dataset/ --split train/valid/test # depending on the desired split to evaluate
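The metric reported by the evaluation is the mean Jaccard index (IoU) over classes. Its core computation can be sketched from a confusion matrix as follows (the small example matrix is illustrative only):

```python
import numpy as np

def iou_per_class(conf):
    """conf[i, j] = number of points with ground-truth class i predicted as class j."""
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp  # predicted as i but actually something else
    fn = conf.sum(axis=1) - tp  # actually i but predicted as something else
    denom = tp + fp + fn
    # Classes absent from both labels and predictions get IoU 0.
    return np.where(denom > 0, tp / np.maximum(denom, 1e-9), 0.0)

conf = np.array([[5, 1],
                 [2, 4]])
ious = iou_per_class(conf)  # per-class IoU
miou = ious.mean()          # mean IoU over classes
```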

If instead the IoU vs. distance is wanted, the evaluation is performed in the same way, but with the distance-based evaluation script. This will analyze the IoU for a set of 5 distance ranges: [0m, 10m), [10m, 20m), [20m, 30m), [30m, 40m), [40m, 50m).
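Binning by distance amounts to masking points by their range before accumulating each bracket's confusion matrix. A sketch under the bracket boundaries listed above:

```python
import numpy as np

# Distance brackets in meters, matching the five ranges listed above.
RANGES = [(0, 10), (10, 20), (20, 30), (30, 40), (40, 50)]

def masks_by_distance(points):
    """Yield one boolean mask per distance bracket for an (N, 4) scan."""
    depth = np.linalg.norm(points[:, :3], axis=1)
    for lo, hi in RANGES:
        yield (depth >= lo) & (depth < hi)
```

Each mask then selects the label/prediction pairs that enter that bracket's confusion matrix.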


Validation

To ensure that your zip file is valid, we provide a small validation script that checks for the correct folder structure and a consistent number of labels for each scan.

The submission expects a zip file containing the following folder structure (as in the separate-directory case above):

  ├── description.txt (optional)
  ├── 11
  │   └── predictions
  │         ├ 000000.label
  │         ├ 000001.label
  │         ├ ...
  ├── 12
  │   └── predictions
  │         ├ 000000.label
  │         ├ 000001.label
  │         ├ ...
  ├── 13
  └── 21

In summary, you only have to provide the label files containing your predictions for every point of each scan; this is also checked by our validation script. If you want more information shown on our maintained leaderboard (coming soon!), you currently have to provide an additional description.txt file containing information that we cannot access via the API:

method name: 
method description: 
project url: 
publication url: 
organization or affiliation: 


Run

$ ./ /path/to/ /path/to/kitti/dataset

to check your submission.

Note: We don't check if the labels are valid, since invalid labels are simply ignored by the evaluation script.
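The consistency check boils down to comparing file sizes: a .label file stores one uint32 (4 bytes) per point, while a .bin scan stores four float32 values (16 bytes) per point. A sketch of this check (paths are placeholders):

```python
import os

def labels_match_scan(label_path, scan_path):
    """True if the .label file holds exactly one label per point of the scan."""
    n_labels = os.path.getsize(label_path) // 4   # one uint32 per point
    n_points = os.path.getsize(scan_path) // 16   # four float32 per point
    return n_labels == n_points
```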


Statistics

  • One script evaluates the class content of the training set, in order to weigh the loss for training and handle imbalanced data.
  • Another returns the scan count for each sequence in the data.
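The content ratios feed the loss weighting. One common scheme is inverse frequency with a stabilizing epsilon; this is an illustrative assumption, not necessarily the exact formula used by the baseline methods, and the ratios below are made up:

```python
import numpy as np

# Hypothetical content ratios per cross-entropy class (should sum to ~1).
content = np.array([0.70, 0.25, 0.05])

eps = 0.001
weights = 1.0 / (content + eps)  # rarer classes get larger loss weights
```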


Generation

  • One script generates a sequence of scans using the manually loop-closed poses from our labeling tool, and stores them as individual point clouds. If, for example, we want to generate a dataset containing, for each point cloud, the aggregation of itself with the previous 4 scans, then:

    $ ./ --dataset /path/to/kitti/dataset/ --sequence_length 5 --output /path/to/put/new/dataset
  • Another remaps the labels to and from the cross-entropy format, so that the labels can be used for training and the predictions for evaluation. It uses the learning_map and learning_map_inv dictionaries from the config file to map labels and predictions.
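Aggregating a scan with its predecessors uses the poses to express all points in one frame. A simplified sketch assuming each pose is already a 4 x 4 homogeneous world-from-sensor transform; the shipped poses.txt is in the camera frame, so the real pipeline additionally applies the camera-to-velodyne calibration:

```python
import numpy as np

def aggregate(scans, poses, target_idx):
    """Merge several (N_i, 4) [x, y, z, remission] scans into the frame of scans[target_idx].

    scans: list of (N, 4) arrays
    poses: list of (4, 4) world-from-sensor transforms, one per scan
    """
    target_from_world = np.linalg.inv(poses[target_idx])
    merged = []
    for scan, world_from_sensor in zip(scans, poses):
        T = target_from_world @ world_from_sensor  # target frame <- this scan
        xyz1 = np.hstack([scan[:, :3], np.ones((len(scan), 1))])
        moved = (T @ xyz1.T).T[:, :3]
        merged.append(np.hstack([moved, scan[:, 3:4]]))  # keep remission
    return np.vstack(merged)
```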

Docker for API

If you prefer not to install the requirements, a docker container is provided to run the scripts.

To build and run the container in an interactive session, which allows running X11 apps (and GL), and copies this repo to the working directory, use

$ ./ /path/to/dataset

where /path/to/dataset is the location of your SemanticKITTI dataset; it will be available inside the container at ~/data (i.e. /home/developer/data) for further usage with the API. This is done by creating a shared volume, so it can be any directory containing data to be used by the API scripts.


Citation

If you use this dataset and/or this API in your work, please cite its paper:

@inproceedings{behley2019iccv,
    author = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and C. Stachniss and J. Gall},
     title = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
 booktitle = {Proc. of the IEEE/CVF International Conf.~on Computer Vision (ICCV)},
      year = {2019}}

And the paper for the original KITTI dataset:

@inproceedings{geiger2012cvpr,
    author = {A. Geiger and P. Lenz and R. Urtasun},
     title = {{Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}},
 booktitle = {Proc.~of the IEEE Conf.~on Computer Vision and Pattern Recognition (CVPR)},
     pages = {3354--3361},
      year = {2012}}