pytorch-video-recognition

Introduction

This repo contains several models for video action recognition, including C3D, R2Plus1D, and R3D, implemented in PyTorch (0.4.0). Currently, these models are trained on the UCF101 and HMDB51 datasets. More models and datasets will be available soon!

Note: An interesting online web game based on the C3D model is available here.
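The models above all consume short, fixed-length clips rather than whole videos (C3D famously uses 16-frame clips). As a rough sketch of that idea, and not the repo's actual sampling code, picking evenly spaced frame indices for one clip might look like:

```python
# Illustrative sketch only: C3D-style models take fixed-length clips
# (commonly 16 frames). The repo's dataloaders implement their own
# sampling; this helper just shows the general idea.
def sample_clip_indices(num_frames, clip_len=16):
    """Return `clip_len` frame indices spread evenly over the video."""
    if num_frames < clip_len:
        # Pad by repeating the last frame when the video is too short.
        return list(range(num_frames)) + [num_frames - 1] * (clip_len - num_frames)
    step = num_frames / clip_len
    return [int(i * step) for i in range(clip_len)]

print(sample_clip_indices(32))  # every other frame: [0, 2, 4, ..., 30]
```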

Installation

The code was tested with Anaconda and Python 3.5. After installing the Anaconda environment:

  1. Clone the repo:

    git clone https://github.com/jfzhang95/pytorch-video-recognition.git
    cd pytorch-video-recognition
  2. Install dependencies:

    For the PyTorch dependency, see pytorch.org for more details.

    For custom dependencies:

    conda install opencv
    pip install tqdm scikit-learn tensorboardX
  3. Download the pretrained model from BaiduYun or GoogleDrive. Currently, only a pretrained C3D model is available.

  4. Configure your dataset and pretrained model path in mypath.py.

  5. You can choose different models and datasets in train.py.

    To train a model, run:

    python train.py
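Step 5 refers to in-file settings rather than command-line flags. Assuming hypothetical variable names (the real identifiers live in train.py; check that file before editing), choosing a model and dataset might look like:

```python
# Hypothetical illustration of the model/dataset choice made before
# running `python train.py`. These names are assumptions, not the
# repo's actual identifiers.
modelName = 'C3D'      # one of 'C3D', 'R2Plus1D', 'R3D'
dataset = 'ucf101'     # or 'hmdb51'

# UCF101 has 101 action classes; HMDB51 has 51.
num_classes = {'ucf101': 101, 'hmdb51': 51}[dataset]
print(modelName, dataset, num_classes)  # C3D ucf101 101
```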

Datasets:

I used two different datasets: UCF101 and HMDB51.

The expected dataset directory tree is shown below.

  • UCF101 Make sure to put the files as the following structure:
    UCF-101
    ├── ApplyEyeMakeup
    │   ├── v_ApplyEyeMakeup_g01_c01.avi
    │   └── ...
    ├── ApplyLipstick
    │   ├── v_ApplyLipstick_g01_c01.avi
    │   └── ...
    └── Archery
        ├── v_Archery_g01_c01.avi
        └── ...
    

After pre-processing, the output directory's structure is as follows:

ucf101
├── ApplyEyeMakeup
│   ├── v_ApplyEyeMakeup_g01_c01
│   │   ├── 00001.jpg
│   │   └── ...
│   └── ...
├── ApplyLipstick
│   ├── v_ApplyLipstick_g01_c01
│   │   ├── 00001.jpg
│   │   └── ...
│   └── ...
└── Archery
    ├── v_Archery_g01_c01
    │   ├── 00001.jpg
    │   └── ...
    └── ...

Note: The HMDB51 dataset's directory tree is similar to UCF101's.
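The two trees above imply a simple mapping: each video `UCF-101/<class>/<video>.avi` becomes a frame directory `ucf101/<class>/<video>/` holding zero-padded JPEGs. The repo's dataloaders perform the actual frame extraction; as a stdlib-only sketch of just the path mapping (my own helper names, not the repo's):

```python
from pathlib import Path

# Sketch of the video -> frame-directory mapping implied by the trees
# above. Function names are my own; the repo's dataloaders do the real
# extraction (e.g. reading frames with OpenCV).
def frame_dir_for(video_path, out_root="ucf101"):
    """Map UCF-101/<class>/<video>.avi to ucf101/<class>/<video>."""
    p = Path(video_path)
    return Path(out_root) / p.parent.name / p.stem

def frame_name(i):
    # Frames are numbered from 1 and zero-padded to five digits.
    return f"{i:05d}.jpg"

print(frame_dir_for("UCF-101/ApplyEyeMakeup/v_ApplyEyeMakeup_g01_c01.avi").as_posix())
# ucf101/ApplyEyeMakeup/v_ApplyEyeMakeup_g01_c01
print(frame_name(1))  # 00001.jpg
```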

Experiments

These models were trained on a machine with an NVIDIA TITAN X GPU (12 GB). Note that I split the train/val/test data for each dataset using sklearn. If you want to train models on the official train/val/test splits, look in dataset.py and modify it to your needs.
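The sklearn-based split mentioned above can be sketched as follows. This is my own illustration (made-up file names, a 60/20/20 split), not the repo's exact code in dataset.py:

```python
# Illustrative sklearn split: carve test off first, then split the
# remainder into train/val, stratifying by class label both times.
from sklearn.model_selection import train_test_split

# Toy list of video files (names are made up for the example).
videos = [f"v_{c}_g{i:02d}_c01.avi"
          for c in ("Archery", "ApplyLipstick") for i in range(1, 11)]
labels = [v.split("_")[1] for v in videos]

# 60/20/20: 20% held out for test, then 25% of the rest for val.
train_val, test, y_train_val, y_test = train_test_split(
    videos, labels, test_size=0.2, stratify=labels, random_state=42)
train, val, y_train, y_val = train_test_split(
    train_val, y_train_val, test_size=0.25, stratify=y_train_val,
    random_state=42)

print(len(train), len(val), len(test))  # 12 4 4
```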

Currently, I have only trained the C3D model on the UCF101 and HMDB51 datasets. The train/val/test accuracy and loss curves for each experiment are shown below:

  • UCF101

  • HMDB51

Experiments for other models will be updated soon ...
