Commit

Merge 4859d4a into e4b8bfc
kennymckormick authored Nov 11, 2020
2 parents e4b8bfc + 4859d4a commit f5d6b74
Showing 11 changed files with 438 additions and 4 deletions.
5 changes: 5 additions & 0 deletions docs/changelog.md
@@ -6,17 +6,22 @@

**New Features**
- Automatically add modelzoo statistics to readthedocs ([#327](https://github.com/open-mmlab/mmaction2/pull/327))
- Support GYM99 data preparation ([#331](https://github.com/open-mmlab/mmaction2/pull/331))

**Improvements**
- Support PyTorch 1.7 in CI ([#312](https://github.com/open-mmlab/mmaction2/pull/312))
- Add random seed for building filelists ([#323](https://github.com/open-mmlab/mmaction2/pull/323))
- Move docs about demo to `demo/README.md` ([#329](https://github.com/open-mmlab/mmaction2/pull/329))
- Remove redundant code in `tools/test.py` ([#310](https://github.com/open-mmlab/mmaction2/pull/310))

**Bug Fixes**
- Fix a bug in BaseDataset when `data_prefix` is None ([#314](https://github.com/open-mmlab/mmaction2/pull/314))
- Fix the wrong `num_classes` of HVU object categories, from 1679 to 1678 ([#307](https://github.com/open-mmlab/mmaction2/pull/307))
- Fix OmniSource training configs ([#321](https://github.com/open-mmlab/mmaction2/pull/321))
- Fix Issue #306: Bug of SampleAVAFrames ([#317](https://github.com/open-mmlab/mmaction2/pull/317))

**ModelZoo**
- Add SlowOnly model for GYM99, both RGB and Flow ([#336](https://github.com/open-mmlab/mmaction2/pull/336))

### v0.8.0 (31/10/2020)

6 changes: 2 additions & 4 deletions tools/data/activitynet/download.py
@@ -2,13 +2,11 @@
 # https://github.com/activitynet/ActivityNet/blob/master/Crawler/Kinetics/download.py # noqa: E501
 # The code is licensed under the MIT licence.
 import os
+import ssl
 import subprocess

 import mmcv
-
-import ssl  # isort:skip
-
-from joblib import Parallel, delayed  # isort:skip
+from joblib import Parallel, delayed

 ssl._create_default_https_context = ssl._create_unverified_context
 data_file = '../../../data/ActivityNet'
99 changes: 99 additions & 0 deletions tools/data/gym/download.py
@@ -0,0 +1,99 @@
# This script is copied from
# https://github.com/activitynet/ActivityNet/blob/master/Crawler/Kinetics/download.py # noqa: E501
# The code is licensed under the MIT licence.
import argparse
import os
import ssl
import subprocess

import mmcv
from joblib import Parallel, delayed

ssl._create_default_https_context = ssl._create_unverified_context


def download(video_identifier,
             output_filename,
             num_attempts=5,
             url_base='https://www.youtube.com/watch?v='):
    """Download a video from youtube if exists and is not blocked.
    arguments:
    ---------
    video_identifier: str
        Unique YouTube video identifier (11 characters)
    output_filename: str
        File path where the video will be stored.
    """
    # Defensive argument checking.
    assert isinstance(video_identifier, str), 'video_identifier must be string'
    assert isinstance(output_filename, str), 'output_filename must be string'
    assert len(video_identifier) == 11, 'video_identifier must have length 11'

    status = False

    if not os.path.exists(output_filename):
        command = [
            'youtube-dl', '--quiet', '--no-warnings', '--no-check-certificate',
            '-f', 'mp4', '-o',
            '"%s"' % output_filename,
            '"%s"' % (url_base + video_identifier)
        ]
        command = ' '.join(command)
        print(command)
        attempts = 0
        while True:
            try:
                subprocess.check_output(
                    command, shell=True, stderr=subprocess.STDOUT)
            except subprocess.CalledProcessError:
                attempts += 1
                if attempts == num_attempts:
                    return status, 'Fail'
            else:
                break
    # Check if the video was successfully saved.
    status = os.path.exists(output_filename)
    return status, 'Downloaded'


def download_wrapper(youtube_id, output_dir):
    """Wrapper for parallel processing purposes."""
    # we do this to align with names in annotations
    output_filename = os.path.join(output_dir, youtube_id + '.mp4')
    if os.path.exists(output_filename):
        status = tuple([youtube_id, True, 'Exists'])
        return status

    downloaded, log = download(youtube_id, output_filename)
    status = tuple([youtube_id, downloaded, log])
    return status


def main(input, output_dir, num_jobs=24):
    # Read the annotation file; its keys are the YouTube video ids.
    youtube_ids = mmcv.load(input).keys()
    # Create folders where videos will be saved later.
    if not os.path.exists(output_dir):
        os.makedirs(output_dir)
    # Download all clips.
    if num_jobs == 1:
        status_list = []
        for index in youtube_ids:
            status_list.append(download_wrapper(index, output_dir))
    else:
        status_list = Parallel(n_jobs=num_jobs)(
            delayed(download_wrapper)(index, output_dir)
            for index in youtube_ids)

    # Save download report.
    mmcv.dump(status_list, 'download_report.json')


if __name__ == '__main__':
    description = 'Helper script for downloading GYM videos.'
    p = argparse.ArgumentParser(description=description)
    p.add_argument('input', type=str, help='The gym annotation file')
    p.add_argument(
        'output_dir', type=str, help='Output directory to save videos.')
    p.add_argument('-n', '--num-jobs', type=int, default=24)
    main(**vars(p.parse_args()))
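
As a quick illustration of how the helpers above fit together — a sketch only, assuming this file is importable as `download.py` with `youtube-dl` installed; the video id and output directory are example values reused from elsewhere in this PR:

```python
from download import download_wrapper  # sketch: assumes download.py is on the path

# download_wrapper returns (youtube_id, succeeded, log); log is one of
# 'Exists', 'Downloaded' or 'Fail'.
status = download_wrapper('0LtLS9wROrk', '../../../data/gym/videos')
print(status)
```
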
14 changes: 14 additions & 0 deletions tools/data/gym/download_annotations.sh
@@ -0,0 +1,14 @@
#!/usr/bin/env bash

set -e

DATA_DIR="../../../data/gym/annotations"

if [[ ! -d "${DATA_DIR}" ]]; then
echo "${DATA_DIR} does not exist. Creating";
mkdir -p ${DATA_DIR}
fi

wget https://sdolivia.github.io/FineGym/resources/dataset/finegym_annotation_info_v1.0.json -O $DATA_DIR/annotation.json
wget https://sdolivia.github.io/FineGym/resources/dataset/gym99_train_element_v1.0.txt -O $DATA_DIR/gym99_train_org.txt
wget https://sdolivia.github.io/FineGym/resources/dataset/gym99_val_element.txt -O $DATA_DIR/gym99_val_org.txt
13 changes: 13 additions & 0 deletions tools/data/gym/download_videos.sh
@@ -0,0 +1,13 @@
#!/usr/bin/env bash

# set up environment
conda env create -f environment.yml
source activate gym
pip install --upgrade youtube-dl

DATA_DIR="../../../data/gym"
ANNO_DIR="../../../data/gym/annotations"
python download.py ${ANNO_DIR}/annotation.json ${DATA_DIR}/videos

source deactivate gym
conda remove -n gym --all
36 changes: 36 additions & 0 deletions tools/data/gym/environment.yml
@@ -0,0 +1,36 @@
name: gym
channels:
  - anaconda
  - menpo
  - conda-forge
  - defaults
dependencies:
  - ca-certificates=2020.1.1
  - certifi=2020.4.5.1
  - ffmpeg=2.8.6
  - libcxx=10.0.0
  - libedit=3.1.20181209
  - libffi=3.3
  - ncurses=6.2
  - openssl=1.1.1g
  - pip=20.0.2
  - python=3.7.7
  - readline=8.0
  - setuptools=46.4.0
  - sqlite=3.31.1
  - tk=8.6.8
  - wheel=0.34.2
  - xz=5.2.5
  - zlib=1.2.11
  - pip:
    - decorator==4.4.2
    - intel-openmp==2019.0
    - joblib==0.15.1
    - mkl==2019.0
    - numpy==1.18.4
    - olefile==0.46
    - pandas==1.0.3
    - python-dateutil==2.8.1
    - pytz==2020.1
    - six==1.14.0
    - youtube-dl==2020.5.8
7 changes: 7 additions & 0 deletions tools/data/gym/extract_frames.sh
@@ -0,0 +1,7 @@
#!/usr/bin/env bash

cd ../
python build_rawframes.py ../../data/gym/subactions/ ../../data/gym/subaction_frames/ --level 1 --flow-type tvl1 --ext mp4 --task both --new-short 256
echo "Raw frames (RGB and tv-l1) Generated"

cd gym/
48 changes: 48 additions & 0 deletions tools/data/gym/generate_file_list.py
@@ -0,0 +1,48 @@
import os
import os.path as osp

annotation_root = '../../../data/gym/annotations'
data_root = '../../../data/gym/subactions'
frame_data_root = '../../../data/gym/subaction_frames'

videos = os.listdir(data_root)
videos = set(videos)

train_file_org = osp.join(annotation_root, 'gym99_train_org.txt')
val_file_org = osp.join(annotation_root, 'gym99_val_org.txt')
train_file = osp.join(annotation_root, 'gym99_train.txt')
val_file = osp.join(annotation_root, 'gym99_val.txt')
train_frame_file = osp.join(annotation_root, 'gym99_train_frame.txt')
val_frame_file = osp.join(annotation_root, 'gym99_val_frame.txt')

train_org = open(train_file_org).readlines()
train_org = [x.strip().split() for x in train_org]
train = [x for x in train_org if x[0] + '.mp4' in videos]
if osp.exists(frame_data_root):
    train_frames = []
    for line in train:
        length = len(os.listdir(osp.join(frame_data_root, line[0])))
        train_frames.append([line[0], str(length // 3), line[1]])
    train_frames = [' '.join(x) for x in train_frames]
    with open(train_frame_file, 'w') as fout:
        fout.write('\n'.join(train_frames))

train = [x[0] + '.mp4 ' + x[1] for x in train]
with open(train_file, 'w') as fout:
    fout.write('\n'.join(train))

val_org = open(val_file_org).readlines()
val_org = [x.strip().split() for x in val_org]
val = [x for x in val_org if x[0] + '.mp4' in videos]
if osp.exists(frame_data_root):
    val_frames = []
    for line in val:
        length = len(os.listdir(osp.join(frame_data_root, line[0])))
        val_frames.append([line[0], str(length // 3), line[1]])
    val_frames = [' '.join(x) for x in val_frames]
    with open(val_frame_file, 'w') as fout:
        fout.write('\n'.join(val_frames))

val = [x[0] + '.mp4 ' + x[1] for x in val]
with open(val_file, 'w') as fout:
    fout.write('\n'.join(val))
106 changes: 106 additions & 0 deletions tools/data/gym/preparing_gym.md
@@ -0,0 +1,106 @@
# Preparing GYM

## Introduction

```
@inproceedings{shao2020finegym,
title={Finegym: A hierarchical video dataset for fine-grained action understanding},
author={Shao, Dian and Zhao, Yue and Dai, Bo and Lin, Dahua},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={2616--2625},
year={2020}
}
```

For basic dataset information, please refer to the official [project](https://sdolivia.github.io/FineGym/) and the [paper](https://arxiv.org/abs/2004.06704).
We currently provide the data pre-processing pipeline for GYM99.
Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/gym/`.

## Step 1. Prepare Annotations

First of all, you can run the following script to prepare annotations.

```shell
bash download_annotations.sh
```
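
For a quick sanity check of the downloaded annotation file, the sketch below simply loads it and prints a few keys. The only property relied on later (see `download.py` in this PR) is that the top-level keys are 11-character YouTube ids, so the nested value format is not inspected here; the path assumes you run it from `$MMACTION2/tools/data/gym/`.

```python
import mmcv

# A minimal sketch: annotation.json is keyed by YouTube video ids.
anno = mmcv.load('../../../data/gym/annotations/annotation.json')
print(len(anno), 'annotated videos')
print(list(anno)[:3])  # e.g. 11-character ids such as '0LtLS9wROrk'
```
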

## Step 2. Prepare Videos

Then, you can run the following script to prepare videos.
The code is adapted from the [official crawler](https://github.com/activitynet/ActivityNet/tree/master/Crawler/Kinetics). Note that this might take a long time.

```shell
bash download_videos.sh
```
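
The crawler writes a per-video status report to `download_report.json` (see `download.py` in this PR), so after the run you can list the clips that could not be fetched. A minimal sketch, assuming it is run from `$MMACTION2/tools/data/gym/`, where the report is written:

```python
import mmcv

# Each entry is (youtube_id, succeeded, log), as produced by download_wrapper.
report = mmcv.load('download_report.json')
failed = [vid for vid, ok, log in report if not ok]
print(f'{len(failed)} of {len(report)} videos could not be downloaded')
```
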

## Step 3. Trim Videos into Events

First, you need to trim the long videos into events based on the GYM annotation, using the following script.

```shell
python trim_event.py
```

## Step 4. Trim Events into Subactions

Then, you need to trim the events into subactions based on the GYM annotation, using the following script. We use two-stage trimming for better efficiency: trimming many short clips directly from a long video is extremely inefficient, since the video has to be traversed many times.

```shell
python trim_subaction.py
```
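
`trim_event.py` and `trim_subaction.py` themselves are not shown in this diff, so the sketch below only illustrates the two-stage idea with plain ffmpeg calls; the clip names are taken from the folder structure in Step 7, and reading the numbers in them as second offsets is an assumption, not a statement about the actual scripts.

```python
import subprocess


def trim(src, dst, start, end):
    # Cut [start, end] seconds from src into dst via stream copy (no re-encode).
    subprocess.check_call([
        'ffmpeg', '-y', '-i', src, '-ss', str(start), '-to', str(end),
        '-c', 'copy', dst
    ])


# Stage 1: each long video is traversed once per event.
trim('videos/0LtLS9wROrk.mp4',
     'events/0LtLS9wROrk_E_002407_002435.mp4', 2407, 2435)
# Stage 2: subactions are cut from the short event clip, not the full video.
trim('events/0LtLS9wROrk_E_002407_002435.mp4',
     'subactions/0LtLS9wROrk_E_002407_002435_A_0003_0005.mp4', 3, 5)
```
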

## Step 5. Extract RGB and Flow

This part is **optional** if you only want to use the video loader for RGB model training.

Before extracting, please refer to [install.md](/docs/install.md) for installing [denseflow](https://github.com/open-mmlab/denseflow).

Run the following script to extract both RGB and optical flow frames using the "tvl1" algorithm.

```shell
bash extract_frames.sh
```
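
Since `--task both` writes RGB frames plus x- and y-direction flow images into one folder per clip, `generate_file_list.py` later divides the file count by 3 to recover the number of frames. A small sanity-check sketch (the clip name is an example taken from Step 7):

```python
import os

# One folder per subaction clip; RGB + flow_x + flow_y => ~3 files per frame.
clip = '0LtLS9wROrk_E_002407_002435_A_0003_0005'
files = os.listdir(os.path.join('../../../data/gym/subaction_frames', clip))
print(len(files), 'files ->', len(files) // 3, 'frames')
```
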

## Step 6. Generate File Lists for GYM99 Based on Extracted Subactions

You can use the following script to generate train/val lists for GYM99.

```shell
python generate_file_list.py
```
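
As written by `generate_file_list.py`, each line of the video lists is `<clip_name>.mp4 <label>`, and each line of the rawframe lists (only produced when the frame folder from Step 5 exists) is `<clip_name> <num_frames> <label>`. A minimal sketch for reading them back:

```python
import os.path as osp

anno_root = '../../../data/gym/annotations'
with open(osp.join(anno_root, 'gym99_train.txt')) as f:
    name, label = f.readline().split()  # e.g. '..._A_0003_0005.mp4 <label>'
with open(osp.join(anno_root, 'gym99_train_frame.txt')) as f:
    name, num_frames, label = f.readline().split()  # frame count inserted
```
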

## Step 7. Folder Structure

After going through the whole data pipeline for GYM preparation, you will have the subaction clips, event clips, raw videos, and the GYM99 train/val lists.

In the context of the whole project (for GYM only), the full folder structure will look like:

```
mmaction2
├── mmaction
├── tools
├── configs
├── data
│   ├── gym
|   |   ├── annotations
|   |   |   ├── gym99_train_org.txt
|   |   |   ├── gym99_val_org.txt
|   |   |   ├── gym99_train.txt
|   |   |   ├── gym99_val.txt
|   |   |   ├── annotation.json
|   |   |   └── event_annotation.json
│   │   ├── videos
|   |   |   ├── 0LtLS9wROrk.mp4
|   |   |   ├── ...
|   |   |   └── zfqS-wCJSsw.mp4
│   │   ├── events
|   |   |   ├── 0LtLS9wROrk_E_002407_002435.mp4
|   |   |   ├── ...
|   |   |   └── zfqS-wCJSsw_E_006732_006824.mp4
│   │   └── subactions
|   |       ├── 0LtLS9wROrk_E_002407_002435_A_0003_0005.mp4
|   |       ├── ...
|   |       └── zfqS-wCJSsw_E_006244_006252_A_0000_0007.mp4
```

For training and evaluating on GYM, please refer to [getting_started](/docs/getting_started.md).