Expected dataset structure:
openvis
├── datasets
│   ├── ytvis_2019
│   │   ├── {train, valid, test}.json
│   │   ├── {train, valid, test}/
│   │   │   ├── JPEGImages/
│   │   │   ├── Annotations/
│   ├── ytvis_2021
│   │   ├── {train, valid, test}/
│   │   │   ├── JPEGImages/
│   │   │   ├── instances.json
│   ├── ovis
│   │   ├── annotations_{train, valid, test}.json
│   │   ├── Images/
│   │   │   ├── {train, valid, test}/
│   ├── burst
│   │   ├── annotations/
│   │   │   ├── train/
│   │   │   │   ├── train.json
│   │   │   ├── {val, test}/
│   │   │   │   ├── all_classes.json
│   │   │   │   ├── common_classes.json
│   │   │   │   ├── uncommon_classes.json
│   │   ├── frames/
│   │   │   ├── {train, val, test}/
│   ├── lvvis
│   │   ├── {train, val}_instances.json
│   │   ├── {train, val}_ytvis_style.json
│   │   ├── {train, val}/
│   │   │   ├── JPEGImages/
│   ├── coco
│   │   ├── coco2ytvis2019_{train, val}.json
│   │   ├── coco2ytvis2021_{train, val}.json
│   │   ├── coco2ovis_{train, val}.json
│   │   ├── {train, val}2017/
│   │   ├── annotations/
│   │   │   ├── instances_{train, val}2017.json
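Before training, it can save time to verify that everything landed where the tree above expects. A minimal sketch of such a check (the helper is hypothetical, not part of the openvis repo; the paths come straight from the tree):

```python
# Hypothetical sanity check for the dataset layout above.
from pathlib import Path

# One representative path per dataset, taken from the expected tree.
EXPECTED = [
    "ytvis_2019/train/JPEGImages",
    "ytvis_2021/train/JPEGImages",
    "ovis/Images/train",
    "burst/annotations/train/train.json",
    "lvvis/train/JPEGImages",
    "coco/annotations/instances_train2017.json",
]

def missing_paths(root="datasets"):
    """Return every expected path that does not exist under `root`."""
    base = Path(root)
    return [p for p in EXPECTED if not (base / p).exists()]
```

An empty return value means the layout is at least superficially complete.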
openvis
├── datasets
│ ├── ytvis_2019
│ │ ├── {train, valid, test}.json
│ │ ├── {train, valid, test}/
│ │ │ ├── JPEGImages/
│ │ │ ├── Annotations/
│ ├── ytvis_2021
│ │ ├── {train, valid, test}/
│ │ │ ├── JPEGImages/
│ │ │ ├── instances.json
youtube-vis 2019:
- register the 2nd youtube vos challenge
- download {train,valid,test}.zip from the youtube-vis 2019 frames google drive
- download {train,valid,test}.json from the youtube-vis 2019 annotations google drive
- unzip {train,valid,test}.zip to datasets/ytvis_2019/{train,valid,test}/
- put {train,valid,test}.json to datasets/ytvis_2019/{train,valid,test}.json
youtube-vis 2021:
- register the 3rd youtube vos challenge
- download {train,valid,test}.zip from the youtube-vis 2021 frames google drive
- unzip {train,valid,test}.zip to datasets/ytvis_2021/{train,valid,test}/
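The unzip/put steps for youtube-vis 2019 can be sketched in Python, assuming the archives and json files were first downloaded by hand into a source directory (the function name and argument names are illustrative):

```python
# Sketch of the youtube-vis 2019 placement steps, assuming {split}.zip and
# {split}.json were downloaded manually into `src`.
import shutil
import zipfile
from pathlib import Path

def place_ytvis2019(src=".", dst="datasets/ytvis_2019"):
    src, dst = Path(src), Path(dst)
    dst.mkdir(parents=True, exist_ok=True)
    for split in ("train", "valid", "test"):
        # unzip {split}.zip to datasets/ytvis_2019/{split}/
        with zipfile.ZipFile(src / f"{split}.zip") as zf:
            zf.extractall(dst / split)
        # put {split}.json to datasets/ytvis_2019/{split}.json
        shutil.copy(src / f"{split}.json", dst / f"{split}.json")
```

The 2021 frames follow the same pattern, minus the separate json files.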
openvis
├── datasets
│   ├── ovis
│   │   ├── annotations_{train, valid, test}.json
│   │   ├── Images/
│   │   │   ├── {train, valid, test}/
- register the Occluded Video Instance Segmentation (OVIS) challenge
- download {train,valid,test}.zip from the OVIS frames google drive
- download {train,valid,test}.json from the OVIS annotations google drive
- unzip {train,valid,test}.zip to datasets/ovis/Images/{train,valid,test}/
- put annotations/{train,valid,test}.json to datasets/ovis/annotations_{train,valid,test}.json
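The last step above is a rename as well as a move: OVIS ships annotations as annotations/{split}.json, while the layout above expects flat annotations_{split}.json files. A small sketch of that step (helper name is illustrative):

```python
# Sketch: copy OVIS annotations/{split}.json to the flat
# annotations_{split}.json files the expected tree uses.
import shutil
from pathlib import Path

def place_ovis_annotations(src="annotations", dst="datasets/ovis"):
    src, dst = Path(src), Path(dst)
    dst.mkdir(parents=True, exist_ok=True)
    for split in ("train", "valid", "test"):
        shutil.copy(src / f"{split}.json", dst / f"annotations_{split}.json")
```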
openvis
├── datasets
│   ├── burst
│   │   ├── annotations/
│   │   │   ├── train/
│   │   │   │   ├── train.json
│   │   │   ├── {val, test}/
│   │   │   │   ├── all_classes.json
│   │   │   │   ├── common_classes.json
│   │   │   │   ├── uncommon_classes.json
│   │   ├── frames/
│   │   │   ├── {train, val, test}/
- download videos (except AVA & HACS videos) from TAO data:
    wget "https://motchallenge.net/data/1-TAO_TRAIN.zip"
    wget "https://motchallenge.net/data/2-TAO_VAL.zip"
    wget "https://motchallenge.net/data/3-TAO_TEST.zip"
    unzip 1-TAO_TRAIN.zip
    unzip 2-TAO_VAL.zip
    unzip 3-TAO_TEST.zip
- put the uncompressed folders to datasets/burst/frames/{train, val, test}
- sign in to MOTChallenge and download the AVA & HACS videos
- download annotations from the BURST-benchmark:
    wget https://omnomnom.vision.rwth-aachen.de/data/BURST/annotations.zip
- uncompress annotations.zip to datasets/burst/annotations
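Because the BURST annotations are split across several files per subset, a quick check that annotations.zip unpacked into the layout shown in the tree above can be handy (the helper is hypothetical; the file names come from the tree):

```python
# Hypothetical check that the BURST annotation files are all in place.
from pathlib import Path

def missing_burst_annotations(root="datasets/burst/annotations"):
    """Return the expected annotation files missing under `root`."""
    expected = ["train/train.json"] + [
        f"{split}/{kind}_classes.json"
        for split in ("val", "test")
        for kind in ("all", "common", "uncommon")
    ]
    return [p for p in expected if not (Path(root) / p).exists()]
```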
openvis
├── datasets
│   ├── lvvis
│   │   ├── {train, val}_instances.json
│   │   ├── {train, val}_ytvis_style.json
│   │   ├── {train, val}/
│   │   │   ├── JPEGImages/
- download videos from the LV-VIS github:
    - download train.zip from google drive
    - download val.zip from google drive
    - uncompress {train, val}.zip to datasets/lvvis/{train, val}/JPEGImages/
- download annotations from the LV-VIS github:
    - download train_instances.json from google drive
    - download val_instances.json from google drive
    - put {train, val}_instances.json to datasets/lvvis/{train, val}_instances.json
- convert annotations to youtube-vis style:
    python datasets/lvvis2ytvis.py
openvis
├── datasets
│   ├── coco
│   │   ├── coco2ytvis2019_{train, val}.json
│   │   ├── coco2ytvis2021_{train, val}.json
│   │   ├── coco2ovis_{train, val}.json
│   │   ├── {train, val}2017/
│   │   ├── annotations/
│   │   │   ├── instances_{train, val}2017.json
- download images from the COCO homepage:
    - download train2017.zip, val2017.zip
    - uncompress {train, val}2017.zip to datasets/coco/{train, val}2017/
- download annotations from the COCO homepage:
    - download annotations_trainval2017.zip
    - uncompress annotations_trainval2017.zip to datasets/coco/annotations
- convert annotations to youtube-vis style:
    python datasets/coco2ytvis.py
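After running the conversion script, the converted pseudo-video annotations should sit directly under datasets/coco, as in the tree above. A hypothetical helper that lists any missing output (file names taken from the tree):

```python
# Hypothetical check that all coco2* conversion outputs exist.
from pathlib import Path

def missing_coco_conversions(root="datasets/coco"):
    """Return the converted annotation files missing under `root`."""
    expected = [
        f"coco2{style}_{split}.json"
        for style in ("ytvis2019", "ytvis2021", "ovis")
        for split in ("train", "val")
    ]
    return [f for f in expected if not (Path(root) / f).exists()]
```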