ReBound Data Format

This page describes the directory structure our application expects.

Overview

One ReBound directory represents the data for one scene. It contains five inner directories: bounding, cameras, ego, pointcloud, and pred_bounding. All points and annotation coordinates are stored in vehicle frame.

The functions in dataformat_utils.py are used to create ReBound directories from other proprietary data formats.
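
As a rough illustration of the layout (the make_rebound_skeleton helper and the scene_0000 path are hypothetical, not part of dataformat_utils.py), the empty skeleton of one scene directory could be created like this:

import os

def make_rebound_skeleton(scene_dir):
    # One ReBound directory per scene, containing the five inner directories.
    for name in ["bounding", "cameras", "ego", "pointcloud", "pred_bounding"]:
        os.makedirs(os.path.join(scene_dir, name), exist_ok=True)

make_rebound_skeleton("scene_0000")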

Bounding

The bounding directory contains one directory for every frame in a scene that contains annotations. Each inner directory is named after the frame its annotations correspond to. For example, if there are N frames, then there should be directories named 0, 1, ..., N-1 inside the bounding directory. Inside each numbered directory are two json files: description.json and boxes.json.

description.json

This file is not currently used; it exists to allow for future expansion.

boxes.json

This json file stores annotation and bounding box data in vehicle frame as a list of "boxes", each containing an origin, size, rotation, annotation name, confidence, and the additional fields shown below. For ground truth annotations, confidence should equal 100. The json file is set up as follows.

{"boxes": <object>[N]
  [
    {
     "origin":        <float>[3] coordinates of center in vehicle frame
     "size":          <float>[3] size of box [l,w,h]
     "rotation":      <float>[4] quaternion [w,x,y,z]
     "annotation":    <str>      name of annotation
     "confidence":    <int>      confidence as percentage in [0,100]
     "id":            <string>   unique identifier to track box across frames
     "internal_pts":  <int>      number of lidar points inside the box
     "data":          <dict>     stores additional data as needed
    },
    ... Rest of box list
  ]
}
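
To make the layout concrete, here is a minimal sketch for reading one frame's annotations; the load_boxes helper and the scene_0000 path are illustrative and not part of the ReBound code:

import json

def load_boxes(scene_dir, frame):
    # boxes.json for frame n lives at <scene>/bounding/<n>/boxes.json
    with open(f"{scene_dir}/bounding/{frame}/boxes.json") as f:
        return json.load(f)["boxes"]

for box in load_boxes("scene_0000", 0):
    print(box["annotation"], box["origin"], box["size"], box["confidence"])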

Cameras

The cameras directory holds RGB sensor data. Each RGB sensor has its own directory inside the cameras directory, named after the sensor, for example "CAM_FRONT". Inside each sensor directory is a set of jpg images, one per frame; the image for the nth frame should be called n.jpg. Each sensor directory also contains two json files that store the sensor's extrinsic and intrinsic data: extrinsics.json and intrinsics.json. Their structure is as follows:

extrinsics.json

{
    "translation": <float>[3] translation with respect to vehicle frame
    "rotation": <float>[4] quaternion [w,x,y,z]
}

intrinsics.json

{
    "matrix": <list[3]>[3] 3x3 intrinsic matrix
}
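
A small sketch of reading one sensor's calibration files, assuming the layout above (the load_camera_calibration helper is hypothetical):

import json

def load_camera_calibration(scene_dir, camera):
    # Per-sensor calibration lives next to the images, e.g. cameras/CAM_FRONT/
    cam_dir = f"{scene_dir}/cameras/{camera}"
    with open(f"{cam_dir}/extrinsics.json") as f:
        extrinsics = json.load(f)   # translation [x,y,z] and rotation quaternion [w,x,y,z]
    with open(f"{cam_dir}/intrinsics.json") as f:
        intrinsics = json.load(f)   # 3x3 intrinsic matrix
    return extrinsics, intrinsics

extrinsics, intrinsics = load_camera_calibration("scene_0000", "CAM_FRONT")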

Ego

The ego directory stores the vehicle pose for each frame of the scene. It contains one json file per frame; the file for the nth frame should be called n.json, with frames indexed from 0.

{
    "translation": <float>[3] location of vehicle in global frame
    "rotation": <float>[4] rotation of vehicle: quaternion [w,x,y,z]
}
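
Because the quaternion is stored as [w,x,y,z], code that uses SciPy (which expects [x,y,z,w]) has to reorder it. Below is a sketch of applying one frame's ego pose to vehicle-frame points; the vehicle_to_global helper is hypothetical, not part of the ReBound code:

import json
import numpy as np
from scipy.spatial.transform import Rotation

def vehicle_to_global(scene_dir, frame, points):
    # points: (N, 3) array in vehicle frame for the given frame number
    with open(f"{scene_dir}/ego/{frame}.json") as f:
        ego = json.load(f)
    w, x, y, z = ego["rotation"]               # stored as [w, x, y, z]
    rotation = Rotation.from_quat([x, y, z, w])
    return rotation.apply(points) + np.asarray(ego["translation"])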

Pointcloud

The pointcloud directory is set up similarly to the cameras directory. For a vehicle with multiple lidar sensors, each sensor should have its own inner directory named after the sensor. Inside each sensor directory is a set of pcd files, one per frame, each named with the number of the corresponding frame. The pcd files follow the pcd spec found here: https://pointclouds.org/documentation/tutorials/pcd_file_format.html. Note that the extrinsic data for the sensor is stored using the VIEWPOINT field within the pcd file. All pointcloud points should be in vehicle frame.
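
Since the extrinsics live in the pcd VIEWPOINT field (tx ty tz qw qx qy qz per the pcd spec), a minimal sketch for pulling them out of the header could look like the following; the read_viewpoint helper and the LIDAR_TOP sensor name are illustrative:

def read_viewpoint(pcd_path):
    # Scan the pcd header for the VIEWPOINT line: tx ty tz qw qx qy qz
    with open(pcd_path, "rb") as f:
        for raw in f:
            line = raw.decode("ascii", errors="ignore").strip()
            if line.startswith("VIEWPOINT"):
                values = [float(v) for v in line.split()[1:]]
                return values[:3], values[3:]   # translation, quaternion [w,x,y,z]
            if line.startswith("DATA"):
                break                            # end of header, no VIEWPOINT found
    return None, None

translation, rotation = read_viewpoint("scene_0000/pointcloud/LIDAR_TOP/0.pcd")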

Pred_bounding

The pred_bounding directory is set up almost exactly like the normal bounding directory, but it stores the predicted bounding information for a scene. The only difference is that it also contains a json file called annotation_map.json. This file is currently unused, but it defines what the correct mapping should be between predicted annotation names and ground truth annotation names. This is necessary since some models may not produce annotations that exactly match the provided ground truth annotations.
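
As an illustration of how such a mapping could be applied (assuming annotation_map.json is a flat dictionary from predicted names to ground truth names; the exact structure is not fixed since the file is currently unused, and the remap_predicted_annotations helper is hypothetical):

import json

def remap_predicted_annotations(scene_dir, boxes):
    # Assumes a flat {predicted name: ground truth name} dictionary.
    with open(f"{scene_dir}/pred_bounding/annotation_map.json") as f:
        name_map = json.load(f)
    for box in boxes:
        box["annotation"] = name_map.get(box["annotation"], box["annotation"])
    return boxes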

Additional Files

metadata.json

{
    "source-format": <str>     original format that the generic data was converted from
    "filenames":     <str>[N]  names of relevant files
}

timestamps.json

{
    "timestamps": <str>[N] info needed to get timestamps
}