# The FPV Drone Racing VIO Competition

## Description

Participants are required to run their VIO algorithms on sequences selected from the public UZH-FPV Drone Racing Dataset, which includes images, IMU measurements, and event-camera data recorded on an FPV racing quadrotor flown aggressively by an expert pilot. The goal is to estimate the quadrotor motion as accurately as possible, using any desired combination of sensors. The winner will be selected based on the accuracy of the estimated trajectories (details below), will be awarded 1,000 USD, and will be invited to present their approach at the IROS 2019 Workshop "Challenges in Vision-based Drone Navigation", taking place on November 8, 2019 in Macau.

## Deadline

The deadline to submit the estimated trajectories and report is October 1, 2019. Follow this link to submit.

## Table of Contents

  1. Datasets
  2. Submission Format
  3. Evaluation Metric
  4. Questions

## Datasets (selected from the UZH-FPV Drone Racing Dataset)

| Dataset | Video | Length (s) | Start Time Snapdragon (s) | Start Time Davis (s) | Download Snapdragon | Download Davis |
| --- | --- | --- | --- | --- | --- | --- |
| indoor forward 11 | https://youtu.be/mYKStE7e2aI | 85.68 | 1540824001 | 1540824001 | bag, zip | bag, zip |
| indoor forward 12 | https://youtu.be/jNlDgN8fdKA | 124.07 | 1540824296 | 1540824296 | bag, zip | bag, zip |
| indoor 45deg 3 | https://youtu.be/q6ELgSAjNMY | 119.82 | 1623 | 1545305934 | bag, zip | bag, zip |
| indoor 45deg 16 | https://youtu.be/V4OnapxRLD4 | 58.72 | 133 | 1545315222 | bag, zip | bag, zip |
| outdoor forward 9 | https://youtu.be/ydaMA4Uta9A | 314.41 | 372 | 1540102003 | bag, zip | bag, zip |
| outdoor forward 10 | https://youtu.be/G60gls4qeZ4 | 455.63 | 674 | 1540102304 | bag, zip | bag, zip |

## Submission Format

Each participant should submit the estimated trajectories for the above datasets and a report describing the adopted method. Follow this link to submit.

### Estimated Trajectories

The estimated trajectories should be stored in plain text files in the following format:

```
# timestamp tx ty tz qx qy qz qw
1.403636580013555527e+09 0.0 0.0 0.0 0.0 0.0 0.0 0.0
...
```

The file names should match the names of the corresponding bag/zip files. For example, the result for "seq1.bag" should be saved as "seq1.txt". The file should be space-separated, with each line giving the pose at the specified timestamp. Timestamps are in seconds and are used to establish temporal correspondences with the ground truth. The first pose should be no later than the start times specified above, and only poses after the start times will be used for evaluation.
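
For illustration, a minimal sketch of writing the result file in the required format. Here `poses` is a placeholder for the output of your own VIO pipeline, not a competition API:

```python
# Write estimated poses in the required space-separated format.
# `poses` is assumed to hold (timestamp, tx, ty, tz, qx, qy, qz, qw) tuples.
poses = [(1540824001.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0)]  # example entry

with open("seq1.txt", "w") as f:  # result for "seq1.bag"
    f.write("# timestamp tx ty tz qx qy qz qw\n")
    for ts, tx, ty, tz, qx, qy, qz, qw in poses:
        f.write(f"{ts:.9f} {tx} {ty} {tz} {qx} {qy} {qz} {qw}\n")
```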

The pose consists of a translation (tx ty tz, in meters) and a Hamilton quaternion (qx qy qz qw, with the w component last). The pose should specify the pose of the IMU in the world frame: after converting the pose to a transformation matrix Twi, one should be able to transform homogeneous point coordinates from the IMU frame to the world frame as pw = Twi * pi.
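
To make the convention concrete, a minimal sketch of building Twi from one line of a result file, assuming numpy and scipy are available (neither is required by the competition):

```python
# Build T_wi from one trajectory line and map an IMU-frame point into the
# world frame (pw = Twi * pi).
import numpy as np
from scipy.spatial.transform import Rotation

line = "1.403636580013555527e+09 0.0 0.1 0.2 0.0 0.0 0.0 1.0"  # example values
t, tx, ty, tz, qx, qy, qz, qw = map(float, line.split())

T_wi = np.eye(4)
# scipy expects quaternions as [x, y, z, w], matching the file format.
T_wi[:3, :3] = Rotation.from_quat([qx, qy, qz, qw]).as_matrix()
T_wi[:3, 3] = [tx, ty, tz]

p_i = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous point in the IMU frame
p_w = T_wi @ p_i                      # the same point in the world frame
print(p_w[:3])
```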

Do not publish your trajectory estimates, as we might re-use some of the datasets for future competitions.

### Report

In addition to the estimated trajectories, participants are required to submit a short report (maximum 4 pages, 10 MB, PDF) summarizing their approach. The reports of all teams will be published on the website after the competition. The format of the report is left to the discretion of the participants; however, it must specify the following information:

- A brief overview of the approach:
  - Filter-based, optimization-based, or something else?
  - Is the method causal (i.e., does it avoid using information from the future to estimate the pose at a given time)?
  - Is bundle adjustment (BA) used? If so, what type, e.g., full BA or sliding-window BA?
  - Is loop closing used?
- The exact sensor modalities used (IMU, stereo or mono cameras, event data?)
- The total processing time for each sequence
- The exact specifications of the hardware used
- Whether the same set of parameters is used across all sequences

Participants are welcome to include further details of their approach, references to a paper describing it, or any other additional information.

## Evaluation Metric

Submissions will be ranked by accuracy. We use the same metric as the KITTI benchmark: the average translation error (in percent) over all possible subsequences of a set of predefined lengths. We will use our publicly available trajectory evaluation toolbox to evaluate the estimated trajectories.
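
Conceptually, the metric can be sketched as follows. This is a simplified illustration, not the official implementation; the actual toolbox additionally handles trajectory alignment and uses a fixed set of subsequence lengths:

```python
# Simplified illustration of a KITTI-style relative translation error:
# for every subsequence of (approximately) a given path length, compare the
# estimated displacement with the ground-truth displacement.
import numpy as np

def translation_error_percent(gt_pos, est_pos, seg_len):
    """gt_pos, est_pos: (N, 3) positions with matched timestamps.
    seg_len: subsequence length in meters."""
    # Cumulative ground-truth path length at each pose.
    dist = np.concatenate(([0.0], np.cumsum(
        np.linalg.norm(np.diff(gt_pos, axis=0), axis=1))))
    errors = []
    for i in range(len(gt_pos)):
        # First pose at least seg_len meters further along the path.
        j = np.searchsorted(dist, dist[i] + seg_len)
        if j >= len(gt_pos):
            break
        gt_disp = gt_pos[j] - gt_pos[i]
        est_disp = est_pos[j] - est_pos[i]
        errors.append(np.linalg.norm(est_disp - gt_disp) / seg_len * 100.0)
    return np.mean(errors) if errors else float("nan")
```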

## Questions

If you have a question about the challenge, please file a GitHub issue in this repository. This way, the question and response will be visible to everyone. Subscribe to this issue to get notified about changes to this document.
