
# Guide to Converting the REDS Dataset into the Event Format

## Introduction

This document explains how to convert the REDS dataset into the event format. We will use ESIM to simulate events and exposure intervals, and then fit polynomial coefficients to the sharp frames via least squares. The results will be stored in a few .hdf5 files.
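To make the fitting step concrete, here is a minimal sketch of fitting per-pixel polynomial coefficients in time to a stack of sharp frames with ordinary least squares. The polynomial degree and the normalized timestamps are illustrative assumptions; they are not necessarily the choices made in `create_hdf5.py`.

```python
import numpy as np

def fit_polynomial_coefficients(frames, timestamps, degree=3):
    """Least-squares fit of a degree-`degree` polynomial in time to each
    pixel's intensity across a stack of sharp frames.

    frames:     (T, H, W) float array of grayscale frames
    timestamps: (T,) float array, assumed normalized to [0, 1]
    returns:    (degree + 1, H, W) array of coefficients
    """
    T, H, W = frames.shape
    # Vandermonde matrix: one row per frame, one column per power of t.
    A = np.vander(timestamps, degree + 1)        # (T, degree + 1)
    b = frames.reshape(T, H * W)                 # (T, H * W)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs.reshape(degree + 1, H, W)
```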

The conversion process may take a few days to finish because of the least-squares fitting. Alternatively, consider downloading the conversion results directly from the links below. Warning: the event threshold in ESIM fluctuates randomly, so the output events will differ subtly every time you run the conversion scripts.

## Download Links

The conversion results are uploaded to Google Drive.

The rest of this guide can be skipped entirely if you choose to download these .hdf5 files and put them in data/REDS/.

## Step 1: Obtain the Original REDS Dataset

From this link, download the 16 .zip files under the REDS 120fps tab. Create a directory called data/REDS/raw and unzip the files there. The final directory structure should look like this (a scripted alternative to manual unzipping is sketched after the tree):

```
<project root>
  |-- data
  |     |-- REDS
  |     |     |-- raw
  |     |     |     |-- train
  |     |     |     |     |-- 000
  |     |     |     |     |     |-- <many .png files>
  |     |     |     |     |-- 001
  |     |     |     |     |     |-- <many .png files>
  |     |     |     |     |-- ...
  |     |     |     |     |-- 239
  |     |     |     |     |     |-- <many .png files>
  |     |     |     |-- val
  |     |     |     |     |-- 000
  |     |     |     |     |     |-- <many .png files>
  |     |     |     |     |-- 001
  |     |     |     |     |     |-- <many .png files>
  |     |     |     |     |-- ...
  |     |     |     |     |-- 029
  |     |     |     |     |     |-- <many .png files>
  |-- <other files>
```
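If you prefer to script the extraction, a minimal sketch using Python's standard zipfile module is shown below. It assumes the archives were saved into data/REDS/raw and that each archive already contains the train/val layout shown above.

```python
import glob
import zipfile

# Extract every downloaded REDS archive in place. Assumes the .zip files
# sit in data/REDS/raw and already contain the train/val layout above.
for path in sorted(glob.glob("data/REDS/raw/*.zip")):
    print(f"extracting {path}")
    with zipfile.ZipFile(path) as zf:
        zf.extractall("data/REDS/raw")
```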

## Step 2: Resizing

We will now convert the REDS frames to grayscale and downsample them to 180x240, so that the resolution is consistent with what the DAVIS240 offers. We will also create the .csv files needed by ESIM, which store the mapping between timestamps and frame names. Move scripts/resize_and_summarize.py to data/REDS/. In data/REDS, run `python resize_and_summarize.py`. After the script finishes, there should be a new directory data/REDS/resized.
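For reference, the core of such a pass might look like the sketch below. The file layout, CSV format, and the assumption of evenly spaced 120 fps timestamps are illustrative; consult resize_and_summarize.py for the actual behavior.

```python
import csv
import glob
import os

import cv2

# Process a single sequence for brevity; the real script loops over all
# of train/ and val/. Paths and CSV columns are illustrative assumptions.
SRC, DST, FPS = "raw/train/000", "resized/train/000", 120.0

os.makedirs(DST, exist_ok=True)
with open(os.path.join(DST, "timestamps.csv"), "w", newline="") as f:
    writer = csv.writer(f)
    for i, path in enumerate(sorted(glob.glob(os.path.join(SRC, "*.png")))):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        # Downsample to DAVIS240 resolution: width 240, height 180.
        small = cv2.resize(gray, (240, 180), interpolation=cv2.INTER_AREA)
        name = os.path.basename(path)
        cv2.imwrite(os.path.join(DST, name), small)
        writer.writerow([i / FPS, name])  # timestamp in seconds, frame name
```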

## Step 3: Frame Interpolation

We will now use sepconv-slomo to increase the frame rate from 120 fps to 960 fps. Create a new directory outside the project root. From there, run `git clone git@github.com:sniklaus/sepconv-slomo.git`. Follow the instructions in sepconv-slomo to set up its required environment. After that, move scripts/run_8x.py and scripts/run_8x.sh to the sepconv-slomo root directory. In the sepconv-slomo root directory, change the path prefixes in run_8x.sh and run `bash run_8x.sh`.
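Conceptually, 8x interpolation can be obtained from a pairwise midpoint interpolator by inserting midpoints three times (120 → 240 → 480 → 960 fps). The sketch below illustrates the idea with a hypothetical `estimate_midpoint(a, b)` wrapper around sepconv-slomo's model; run_8x.py may be organized differently.

```python
def upsample_8x(frames, estimate_midpoint):
    """Double the density of a frame list three times (2**3 = 8x) by
    repeatedly inserting an interpolated midpoint between neighbors.

    frames:            list of frames at the original 120 fps
    estimate_midpoint: callable (a, b) -> midpoint frame; hypothetical
                       wrapper around sepconv-slomo's interpolation
    """
    for _ in range(3):  # 120 -> 240 -> 480 -> 960 fps
        dense = []
        for a, b in zip(frames, frames[1:]):
            dense.extend([a, estimate_midpoint(a, b)])
        dense.append(frames[-1])
        frames = dense
    return frames
```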

## Step 4: Event Generation

We will now generate the events from the interpolated videos. Follow these instructions to set up ESIM. After that, move scripts/create_bag.sh to data/REDS/. Add `required="true"` to this file and run `bash create_bag.sh`.
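As background on the warning above about run-to-run differences: an event simulator in the spirit of ESIM emits an event whenever a pixel's log intensity drifts a contrast threshold away from its last event level, and ESIM perturbs that threshold with noise. The sketch below is a deliberately simplified per-pixel illustration, not ESIM's actual implementation.

```python
import numpy as np

def simulate_events(log_frames, timestamps, c=0.15, sigma=0.02, rng=None):
    """Toy event generation: emit an event each time the log intensity
    moves one (noisy) contrast threshold away from the last reference
    level. A simplified illustration, not ESIM itself.
    """
    rng = rng or np.random.default_rng()
    ref = log_frames[0].copy()                 # last event level per pixel
    events = []                                # (t, x, y, polarity)
    for t, frame in zip(timestamps[1:], log_frames[1:]):
        # Randomly fluctuating threshold, as in ESIM's noise model.
        thr = rng.normal(c, sigma, size=frame.shape).clip(min=1e-3)
        pol = np.sign(frame - ref) * (np.abs(frame - ref) >= thr)
        ys, xs = np.nonzero(pol)
        events += [(t, x, y, int(pol[y, x])) for y, x in zip(ys, xs)]
        ref = np.where(pol != 0, frame, ref)   # update pixels that fired
    return events
```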

## Step 5: Extract Event Bags

We will now extract the data from the .bag files generated by ESIM. Move scripts/extract_events_from_rosbag.py (credits: rpg_e2vid) and scripts/create_zip_png.sh to data/REDS/. From data/REDS, run `bash create_zip_png.sh`.
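If you want to inspect a bag by hand before the batch extraction, a minimal sketch using the rosbag Python API is shown below. The bag filename and the `/cam0/events` topic are assumptions; run `rosbag info` to see the actual topic names in your ESIM output.

```python
import rosbag

# Print a few events from an ESIM output bag. The filename and topic
# name are assumptions; run `rosbag info <file>.bag` to see the topics.
bag = rosbag.Bag("000.bag")
for topic, msg, t in bag.read_messages(topics=["/cam0/events"]):
    for e in msg.events[:5]:  # dvs_msgs/EventArray: x, y, ts, polarity
        print(e.ts.to_sec(), e.x, e.y, e.polarity)
    break
bag.close()
```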

## Step 6: Polynomial Fitting

Move scripts/create_hdf5.py to data/REDS/. From data/REDS, run:

```bash
python create_hdf5.py --split train --start_video_idx 0 --end_video_idx 15 --out_name train_0.hdf5
python create_hdf5.py --split train --start_video_idx 15 --end_video_idx 30 --out_name train_1.hdf5
python create_hdf5.py --split train --start_video_idx 30 --end_video_idx 45 --out_name train_2.hdf5
python create_hdf5.py --split train --start_video_idx 45 --end_video_idx 60 --out_name train_3.hdf5
python create_hdf5.py --split train --start_video_idx 60 --end_video_idx 75 --out_name train_4.hdf5
python create_hdf5.py --split train --start_video_idx 75 --end_video_idx 90 --out_name train_5.hdf5
python create_hdf5.py --split train --start_video_idx 90 --end_video_idx 105 --out_name train_6.hdf5
python create_hdf5.py --split train --start_video_idx 105 --end_video_idx 120 --out_name train_7.hdf5
python create_hdf5.py --split train --start_video_idx 120 --end_video_idx 135 --out_name train_8.hdf5
python create_hdf5.py --split train --start_video_idx 135 --end_video_idx 150 --out_name train_9.hdf5
python create_hdf5.py --split train --start_video_idx 150 --end_video_idx 165 --out_name train_10.hdf5
python create_hdf5.py --split train --start_video_idx 165 --end_video_idx 180 --out_name train_11.hdf5
python create_hdf5.py --split train --start_video_idx 180 --end_video_idx 195 --out_name train_12.hdf5
python create_hdf5.py --split train --start_video_idx 195 --end_video_idx 210 --out_name train_13.hdf5
python create_hdf5.py --split train --start_video_idx 210 --end_video_idx 225 --out_name train_14.hdf5
python create_hdf5.py --split train --start_video_idx 225 --end_video_idx 240 --out_name train_15.hdf5
python create_hdf5.py --split val --start_video_idx 0 --end_video_idx 15 --out_name val_0.hdf5
python create_hdf5.py --split val --start_video_idx 15 --end_video_idx 30 --out_name val_1.hdf5
```

This will take a very long time on a desktop, so we recommend running the jobs in parallel on a computing cluster. As an example, our organization has an HTCondor cluster; the scripts for submitting Condor jobs are provided as scripts/condor_template.txt and scripts/submit_condor_jobs.sh.
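After a job finishes, you can sanity-check its output with h5py. The dataset names inside the files depend on create_hdf5.py, so the sketch below simply lists whatever is present.

```python
import h5py

# List every group/dataset in one output file, with dataset shapes.
with h5py.File("data/REDS/train_0.hdf5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```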

If you would like to use this dataset as a benchmark to evaluate your model, please modify create_hdf5.py according to your requirements.

Congratulations! You have now finished converting the REDS dataset to the event format!