From c04b96db428221e5f15bd3c2377acfabc9d40bdb Mon Sep 17 00:00:00 2001
From: dreamerlin <528557675@qq.com>
Date: Wed, 23 Sep 2020 23:31:39 +0800
Subject: [PATCH 1/4] init docs
---
tools/data/ucf101_24/preparing_ucf101_24.md | 70 +++++++++++++++++++++
1 file changed, 70 insertions(+)
create mode 100644 tools/data/ucf101_24/preparing_ucf101_24.md
diff --git a/tools/data/ucf101_24/preparing_ucf101_24.md b/tools/data/ucf101_24/preparing_ucf101_24.md
new file mode 100644
index 0000000000..cec3508962
--- /dev/null
+++ b/tools/data/ucf101_24/preparing_ucf101_24.md
@@ -0,0 +1,70 @@
+# Preparing UCF101-24
+
+For basic dataset information, you can refer to the dataset [website](http://www.thumos.info/download.html).
+Before we start, please make sure that the directory is located at `$MMACTION2/tools/data/ucf101_24/`.
+
+## Download and Extract
+
+You can download the RGB frames, optical flow and ground truth annotations from [google drive](https://drive.google.com/drive/folders/1BvGywlAGrACEqRyfYbz3wzlVV3cDFkct).
+The data are provided by [MOC](https://github.com/MCG-NJU/MOC-Detector/blob/master/readme/Dataset.md), adapted from [act-detector](https://github.com/vkalogeiton/caffe/tree/act-detector) and [corrected-UCF101-Annots](https://github.com/gurkirt/corrected-UCF101-Annots).
+
+**Note**: The annotations of UCF101-24 used here are from [corrected-UCF101-Annots](https://github.com/gurkirt/corrected-UCF101-Annots), which are more accurate than the original ones.
+
+After downloading the `UCF101_v2.tar.gz` file and putting it in `$MMACTION2/tools/data/ucf101_24/`, you can run the following command to extract it.
+
+```shell
+tar -zxvf UCF101_v2.tar.gz
+```
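If you want to rehearse the extraction step first, the same flags can be exercised on a small stand-in archive (all paths below are illustrative, not part of the real dataset):

```shell
# Build a tiny stand-in archive mirroring the expected layout,
# then extract it with the same flags used above.
mkdir -p demo/rgb-images demo/brox-images
touch demo/UCF101v2-GT.pkl
tar -czf UCF101_v2_demo.tar.gz -C demo .

mkdir -p extracted
tar -zxvf UCF101_v2_demo.tar.gz -C extracted
ls extracted   # expect rgb-images, brox-images and UCF101v2-GT.pkl
```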
+
+## Check Directory Structure
+
+After extracting, you will get the `rgb-images` directory, the `brox-images` directory, and the `UCF101v2-GT.pkl` annotation file for UCF101-24.
+
+In the context of the whole project (for UCF101-24 only), the folder structure will look like:
+
+```
+mmaction2
+├── mmaction
+├── tools
+├── configs
+├── data
+│ ├── ucf101_24
+│ | ├── brox-images
+│ | | ├── Basketball
+│ | | | ├── v_Basketball_g01_c01
+│ | | | | ├── 00001.jpg
+│ | | | | ├── 00002.jpg
+│ | | | | ├── ...
+│ | | | | ├── 00140.jpg
+│ | | | | ├── 00141.jpg
+│ | | ├── ...
+│ | | ├── WalkingWithDog
+│ | | | ├── v_WalkingWithDog_g01_c01
+│ | | | ├── ...
+│ | | | ├── v_WalkingWithDog_g25_c04
+│ | ├── rgb-images
+│ | | ├── Basketball
+│ | | | ├── v_Basketball_g01_c01
+│ | | | | ├── 00001.jpg
+│ | | | | ├── 00002.jpg
+│ | | | | ├── ...
+│ | | | | ├── 00140.jpg
+│ | | | | ├── 00141.jpg
+│ | | ├── ...
+│ | | ├── WalkingWithDog
+│ | | | ├── v_WalkingWithDog_g01_c01
+│ | | | ├── ...
+│ | | | ├── v_WalkingWithDog_g25_c04
+│ | ├── UCF101v2-GT.pkl
+```
+
+**Note**: The `UCF101v2-GT.pkl` file serves as a cache of the ground truth annotations; it contains the following 6 items:
+1. `labels` (list): List of the 24 labels.
+2. `gttubes` (dict): Dictionary that contains the ground truth tubes for each video.
+   A **gttube** is a dictionary that maps each label index to a list of tubes.
+   A **tube** is a numpy array with `nframes` rows and 5 columns, each row in the format `<frame index> <x1> <y1> <x2> <y2>`.
+3. `nframes` (dict): Dictionary that contains the number of frames for each video, like `'HorseRiding/v_HorseRiding_g05_c02': 151`.
+4. `train_videos` (list): A list with `nsplits=1` elements, each one containing the list of training videos.
+5. `test_videos` (list): A list with `nsplits=1` elements, each one containing the list of testing videos.
+6. `resolution` (dict): Dictionary that maps each video to its resolution as a tuple `(h, w)`, like `'FloorGymnastics/v_FloorGymnastics_g09_c03': (240, 320)`.
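The cached annotations above can be loaded with the standard `pickle` module. A minimal sketch, using a toy dictionary that mimics the 6-item layout (real tubes are numpy arrays of shape `(nframes, 5)`; plain lists and all paths/values here are illustrative stand-ins):

```python
import io
import pickle

# Toy stand-in for UCF101v2-GT.pkl with the same 6-item layout.
gt = {
    'labels': ['Basketball', 'WalkingWithDog'],  # 24 entries in the real file
    'gttubes': {
        'Basketball/v_Basketball_g01_c01': {
            # label index -> list of tubes; each tube row is
            # <frame index> <x1> <y1> <x2> <y2>
            0: [[[1, 10.0, 20.0, 50.0, 80.0],
                 [2, 11.0, 21.0, 51.0, 81.0]]],
        },
    },
    'nframes': {'Basketball/v_Basketball_g01_c01': 141},
    'train_videos': [['Basketball/v_Basketball_g01_c01']],  # nsplits == 1
    'test_videos': [[]],
    'resolution': {'Basketball/v_Basketball_g01_c01': (240, 320)},
}

# Round-trip through pickle, the same way the cache file would be read.
buf = io.BytesIO()
pickle.dump(gt, buf)
buf.seek(0)
ann = pickle.load(buf)

video = 'Basketball/v_Basketball_g01_c01'
tubes = ann['gttubes'][video][0]  # tubes for label index 0
print(len(tubes), ann['nframes'][video], ann['resolution'][video])
# -> 1 141 (240, 320)
```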
From 5c4b0720b6ed09d1b14406d53ed374f2c1142717 Mon Sep 17 00:00:00 2001
From: dreamerlin <528557675@qq.com>
Date: Sun, 27 Sep 2020 16:20:14 +0800
Subject: [PATCH 2/4] update
---
docs/data_preparation.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/docs/data_preparation.md b/docs/data_preparation.md
index 3a9ecc7b8a..c54bb05ce7 100644
--- a/docs/data_preparation.md
+++ b/docs/data_preparation.md
@@ -23,6 +23,7 @@ To ease usage, we provide tutorials of data deployment for each dataset.
- [Moments in Time](http://moments.csail.mit.edu/): See [preparing_mit.md](/tools/data/mit/preparing_mit.md)
- [Multi-Moments in Time](http://moments.csail.mit.edu/challenge_iccv_2019.html): See [preparing_mmit.md](/tools/data/mmit/preparing_mmit.md)
- ActivityNet_feature: See [praparing_activitynet.md](/tools/data/activitynet/preparing_activitynet.md)
+- [UCF101-24](http://www.thumos.info/download.html): See [preparing_ucf101_24.md](/tools/data/ucf101_24/preparing_ucf101_24.md)
Now, you can switch to [getting_started.md](getting_started.md) to train and test the model.
From 8c2afe3fa2e70996c0599bfcde495a550e92b200 Mon Sep 17 00:00:00 2001
From: dreamerlin <528557675@qq.com>
Date: Wed, 30 Sep 2020 10:05:53 +0800
Subject: [PATCH 3/4] update changelog
---
docs/changelog.md | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/docs/changelog.md b/docs/changelog.md
index ed3b144ba2..9d34bfeef3 100644
--- a/docs/changelog.md
+++ b/docs/changelog.md
@@ -6,8 +6,9 @@
**New Features**
- Support to run real-time action recognition from web camera ([#171](https://github.com/open-mmlab/mmaction2/pull/171))
-- Support to export the pytorch models to onnx ones. ([#160](https://github.com/open-mmlab/mmaction2/pull/160))
-- Support to report mAP for ActivityNet with [CUHK17_activitynet_pred](http://activity-net.org/challenges/2017/evaluation.html). ([#176](https://github.com/open-mmlab/mmaction2/pull/176))
+- Support to export the pytorch models to onnx ones ([#160](https://github.com/open-mmlab/mmaction2/pull/160))
+- Support to report mAP for ActivityNet with [CUHK17_activitynet_pred](http://activity-net.org/challenges/2017/evaluation.html) ([#176](https://github.com/open-mmlab/mmaction2/pull/176))
+- Support UCF101-24 preparation ([#219](https://github.com/open-mmlab/mmaction2/pull/219))
**ModelZoo**
- Add finetuning setting for SlowOnly. ([#173](https://github.com/open-mmlab/mmaction2/pull/173))
From 3dd05af32367f7703f6ec0e9ff4abe5e697c8db0 Mon Sep 17 00:00:00 2001
From: lizz
Date: Wed, 30 Sep 2020 13:59:34 +0800
Subject: [PATCH 4/4] Update changelog.md
---
docs/changelog.md | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/docs/changelog.md b/docs/changelog.md
index 9d34bfeef3..62643c6a56 100644
--- a/docs/changelog.md
+++ b/docs/changelog.md
@@ -6,7 +6,7 @@
**New Features**
- Support to run real-time action recognition from web camera ([#171](https://github.com/open-mmlab/mmaction2/pull/171))
-- Support to export the pytorch models to onnx ones ([#160](https://github.com/open-mmlab/mmaction2/pull/160))
+- Support to export pytorch models to onnx ([#160](https://github.com/open-mmlab/mmaction2/pull/160))
- Support to report mAP for ActivityNet with [CUHK17_activitynet_pred](http://activity-net.org/challenges/2017/evaluation.html) ([#176](https://github.com/open-mmlab/mmaction2/pull/176))
- Support UCF101-24 preparation ([#219](https://github.com/open-mmlab/mmaction2/pull/219))