
# I3D

Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset

Non-local Neural Networks

## Abstract

The paucity of videos in current action classification datasets (UCF-101 and HMDB-51) has made it difficult to identify good video architectures, as most methods obtain similar performance on existing small-scale benchmarks. This paper re-evaluates state-of-the-art architectures in light of the new Kinetics Human Action Video dataset. Kinetics has two orders of magnitude more data, with 400 human action classes and over 400 clips per class, and is collected from realistic, challenging YouTube videos. We provide an analysis on how current architectures fare on the task of action classification on this dataset and how much performance improves on the smaller benchmark datasets after pre-training on Kinetics. We also introduce a new Two-Stream Inflated 3D ConvNet (I3D) that is based on 2D ConvNet inflation: filters and pooling kernels of very deep image classification ConvNets are expanded into 3D, making it possible to learn seamless spatio-temporal feature extractors from video while leveraging successful ImageNet architecture designs and even their parameters. We show that, after pre-training on Kinetics, I3D models considerably improve upon the state-of-the-art in action classification, reaching 80.9% on HMDB-51 and 98.0% on UCF-101.
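The inflation trick described in the abstract can be illustrated in a few lines. The sketch below shows the idea from the paper, not the code this repo actually uses to load weights: a 2D kernel is repeated `time_dim` times along a new temporal axis and rescaled by `1/time_dim`, so the inflated filter reproduces the 2D activations on a "boring" video of identical repeated frames.

```python
import torch.nn as nn

def inflate_conv2d_to_3d(conv2d: nn.Conv2d, time_dim: int = 3) -> nn.Conv3d:
    """Inflate a 2D conv into a 3D conv by repeating its kernel over time."""
    conv3d = nn.Conv3d(
        conv2d.in_channels,
        conv2d.out_channels,
        kernel_size=(time_dim, *conv2d.kernel_size),
        stride=(1, *conv2d.stride),
        padding=(time_dim // 2, *conv2d.padding),
        bias=conv2d.bias is not None,
    )
    # Repeat the 2D weights along the new temporal axis and rescale so the
    # response on a temporally constant input matches the original 2D filter.
    weight_3d = conv2d.weight.data.unsqueeze(2).repeat(1, 1, time_dim, 1, 1)
    conv3d.weight.data.copy_(weight_3d / time_dim)
    if conv2d.bias is not None:
        conv3d.bias.data.copy_(conv2d.bias.data)
    return conv3d
```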

## Results and Models

### Kinetics-400

| frame sampling strategy | resolution | gpus | backbone                      | pretrain | top1 acc | top5 acc | testing protocol  | FLOPs  | params | config | ckpt | log |
| :---------------------: | :--------: | :--: | :---------------------------: | :------: | :------: | :------: | :---------------: | :----: | :----: | :----: | :--: | :-: |
| 32x2x1                  | 224x224    | 8    | ResNet50 (NonLocalDotProduct) | ImageNet | 74.80    | 92.07    | 10 clips x 3 crop | 59.3G  | 35.4M  | config | ckpt | log |
| 32x2x1                  | 224x224    | 8    | ResNet50 (NonLocalEmbedGauss) | ImageNet | 74.73    | 91.80    | 10 clips x 3 crop | 59.3G  | 35.4M  | config | ckpt | log |
| 32x2x1                  | 224x224    | 8    | ResNet50 (NonLocalGauss)      | ImageNet | 73.97    | 91.33    | 10 clips x 3 crop | 56.5G  | 31.7M  | config | ckpt | log |
| 32x2x1                  | 224x224    | 8    | ResNet50                      | ImageNet | 73.47    | 91.27    | 10 clips x 3 crop | 43.5G  | 28.0M  | config | ckpt | log |
| dense-32x2x1            | 224x224    | 8    | ResNet50                      | ImageNet | 73.77    | 91.35    | 10 clips x 3 crop | 43.5G  | 28.0M  | config | ckpt | log |
| 32x2x1                  | 224x224    | 8    | ResNet50 (Heavy)              | ImageNet | 76.21    | 92.48    | 10 clips x 3 crop | 166.3G | 33.0M  | config | ckpt | log |
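The frame sampling strategy column reads as `clip_len x frame_interval x num_clips`, with the `dense-` prefix denoting dense sampling. As an assumption for illustration, here is how `32x2x1` maps onto the `SampleFrames` transform that appears in this repo's config pipelines:

```python
# "32x2x1" decoded as clip_len x frame_interval x num_clips; this dict
# mirrors the SampleFrames entry in the config's train pipeline.
sample_frames = dict(
    type='SampleFrames',  # the dense- variant uses DenseSampleFrames
    clip_len=32,          # frames per clip
    frame_interval=2,     # stride between consecutive sampled frames
    num_clips=1,          # number of clips drawn per video during training
)
```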
1. The **gpus** column indicates the number of GPUs used to produce the checkpoint. If you train with a different number of GPUs or videos per GPU, set `--auto-scale-lr` when calling `tools/train.py`; this flag scales the learning rate according to the ratio of the actual batch size to the original batch size (see the sketch after this list).
2. The validation set of Kinetics-400 we used consists of 19796 videos, available at Kinetics400-Validation. The corresponding data list (each line is of the format `video_id, num_frames, label_index`) and the label map are also available; a parsing sketch follows the data-preparation note below.
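The auto-scaling in note 1 is the standard linear scaling rule. A minimal sketch, assuming the original batch size of 64 implied by the `8xb8` config name (8 GPUs x 8 videos per GPU) and an illustrative base learning rate:

```python
# Linear learning-rate scaling as performed by --auto-scale-lr.
base_batch_size = 8 * 8    # 8 GPUs x 8 videos per GPU, per the 8xb8 config name
actual_batch_size = 4 * 8  # e.g. training on 4 GPUs instead
base_lr = 0.01             # illustrative value, not taken from the config

scaled_lr = base_lr * actual_batch_size / base_batch_size
print(scaled_lr)           # 0.005
```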

For more details on data preparation, you can refer to Kinetics400.
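Given the line format stated in note 2 above, the data list can be read with a few lines of Python. A sketch, with a hypothetical filename:

```python
def load_data_list(path):
    """Parse lines of the form 'video_id, num_frames, label_index'."""
    samples = []
    with open(path) as f:
        for line in f:
            video_id, num_frames, label = (s.strip() for s in line.split(','))
            samples.append((video_id, int(num_frames), int(label)))
    return samples

# 'kinetics400_val_list.txt' is a placeholder name for the downloaded list.
val_list = load_data_list('kinetics400_val_list.txt')
```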

## Train

You can use the following command to train a model.

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

Example: train the I3D model on the Kinetics-400 dataset with deterministic behavior and periodic validation.

```shell
python tools/train.py configs/recognition/i3d/i3d_imagenet-pretrained-r50_8xb8-32x2x1-100e_kinetics400-rgb.py \
    --seed=0 --deterministic
```
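Conceptually, `--seed` and `--deterministic` pin down the sources of run-to-run variation. The sketch below shows the kind of setup these flags typically enable; it is an illustration, not mmaction2's actual implementation:

```python
import random
import numpy as np
import torch

def make_deterministic(seed=0):
    """Roughly what a seed + deterministic flag pair sets up."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True  # choose deterministic kernels
    torch.backends.cudnn.benchmark = False     # disable the cuDNN autotuner
```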

For more details, you can refer to the Training part in the Training and Test Tutorial.

## Test

You can use the following command to test a model.

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```

Example: test the I3D model on the Kinetics-400 dataset and dump the result to a pkl file.

```shell
python tools/test.py configs/recognition/i3d/i3d_imagenet-pretrained-r50_8xb8-32x2x1-100e_kinetics400-rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --dump result.pkl
```
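The dumped `result.pkl` is a standard pickle file and can be inspected directly. A minimal sketch; the exact structure of each entry depends on the mmaction2 version:

```python
import pickle

# Load the per-sample predictions dumped by tools/test.py.
with open('result.pkl', 'rb') as f:
    results = pickle.load(f)

print(len(results))  # number of test samples
print(results[0])    # one prediction record; inspect its keys/fields
```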

For more details, you can refer to the Test part in the Training and Test Tutorial.

## Citation

```BibTeX
@inproceedings{carreira2017quo,
  author    = {Carreira, Joao and Zisserman, Andrew},
  title     = {Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset},
  booktitle = {CVPR},
  year      = {2017},
  pages     = {4724--4733},
  doi       = {10.1109/CVPR.2017.502}
}
```

```BibTeX
@inproceedings{NonLocal2018,
  author    = {Xiaolong Wang and Ross Girshick and Abhinav Gupta and Kaiming He},
  title     = {Non-local Neural Networks},
  booktitle = {CVPR},
  year      = {2018}
}
```