# I3D

## Introduction

[ALGORITHM]

```BibTeX
@inproceedings{inproceedings,
  author = {Carreira, J. and Zisserman, Andrew},
  year = {2017},
  month = {07},
  pages = {4724-4733},
  title = {Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset},
  doi = {10.1109/CVPR.2017.502}
}
```

[BACKBONE]

```BibTeX
@article{NonLocal2018,
  author  = {Xiaolong Wang and Ross Girshick and Abhinav Gupta and Kaiming He},
  title   = {Non-local Neural Networks},
  journal = {CVPR},
  year    = {2018}
}
```

## Model Zoo

### Kinetics-400

| config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | inference_time (video/s) | gpu_mem (M) | ckpt | log | json |
| :-- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| i3d_r50_32x2x1_100e_kinetics400_rgb | 340x256 | 8 | ResNet50 | ImageNet | 72.68 | 90.78 | 1.7 (320x3 frames) | 5170 | ckpt | log | json |
| i3d_r50_32x2x1_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 73.27 | 90.92 | x | 5170 | ckpt | log | json |
| i3d_r50_video_32x2x1_100e_kinetics400_rgb | short-side 256p | 8 | ResNet50 | ImageNet | 72.85 | 90.75 | x | 5170 | ckpt | log | json |
| i3d_r50_dense_32x2x1_100e_kinetics400_rgb | 340x256 | 8x2 | ResNet50 | ImageNet | 72.77 | 90.57 | 1.7 (320x3 frames) | 5170 | ckpt | log | json |
| i3d_r50_dense_32x2x1_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 73.48 | 91.00 | x | 5170 | ckpt | log | json |
| i3d_r50_lazy_32x2x1_100e_kinetics400_rgb | 340x256 | 8 | ResNet50 | ImageNet | 72.32 | 90.72 | 1.8 (320x3 frames) | 5170 | ckpt | log | json |
| i3d_r50_lazy_32x2x1_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 73.24 | 90.99 | x | 5170 | ckpt | log | json |
| i3d_nl_embedded_gaussian_r50_32x2x1_100e_kinetics400_rgb | short-side 256p | 8x4 | ResNet50 | ImageNet | 74.71 | 91.81 | x | 6438 | ckpt | log | json |
| i3d_nl_gaussian_r50_32x2x1_100e_kinetics400_rgb | short-side 256p | 8x4 | ResNet50 | ImageNet | 73.37 | 91.26 | x | 4944 | ckpt | log | json |
| i3d_nl_dot_product_r50_32x2x1_100e_kinetics400_rgb | short-side 256p | 8x4 | ResNet50 | ImageNet | 73.92 | 91.59 | x | 4832 | ckpt | log | json |

Notes:

  1. The gpus column indicates the number of GPUs we used to obtain the checkpoint. Note that the provided configs are written for 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/GPU and lr=0.08 for 16 GPUs x 4 videos/GPU (see the sketch after these notes).
  2. The inference_time is measured with the benchmark script, using the frame-sampling strategy of the test setting and counting only the model inference time, excluding IO and pre-processing time. For each setting, inference runs on 1 GPU with the batch size (videos per GPU) set to 1.

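As a concrete illustration of note 1, the sketch below (not part of the repo; the helper name and defaults are ours) computes a learning rate proportional to the total batch size, using the reference point quoted in the note.

```python
# A minimal sketch of the Linear Scaling Rule from note 1: keep the learning
# rate proportional to the total batch size. The reference point (lr=0.01 at
# 4 GPUs x 2 videos/GPU, i.e. a total batch size of 8) is taken from the note;
# the helper itself is illustrative only.
def scaled_lr(num_gpus, videos_per_gpu, base_lr=0.01, base_batch_size=8):
    total_batch_size = num_gpus * videos_per_gpu
    return base_lr * total_batch_size / base_batch_size

print(scaled_lr(4, 2))   # 0.01, the reference setting from the note
print(scaled_lr(16, 4))  # 0.08, matching the note
```
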
For more details on data preparation, you can refer to Kinetics400 in Data Preparation.

## Train

You can use the following command to train a model.

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

Example: train the I3D model on the Kinetics-400 dataset deterministically, with periodic validation.

```shell
python tools/train.py configs/recognition/i3d/i3d_r50_32x2x1_100e_kinetics400_rgb.py \
    --work-dir work_dirs/i3d_r50_32x2x1_100e_kinetics400_rgb \
    --validate --seed 0 --deterministic
```
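
If you train with a number of GPUs other than the default 8, one option is to dump a modified copy of the config with a rescaled learning rate before calling tools/train.py. The sketch below is only an illustration: it assumes the config defines an `optimizer` dict with an `lr` key (as the recognition configs here typically do), uses mmcv's `Config` utility, and writes to a hypothetical output path.

```python
# Illustrative sketch: write a copy of the config with a halved learning rate,
# e.g. when using half as many GPUs as the default 8 with the same number of
# videos per GPU.
# Assumption: the config defines `optimizer = dict(type='SGD', lr=..., ...)`.
from mmcv import Config

cfg = Config.fromfile(
    'configs/recognition/i3d/i3d_r50_32x2x1_100e_kinetics400_rgb.py')
cfg.optimizer['lr'] = cfg.optimizer['lr'] / 2
# Hypothetical output path; pass this file to tools/train.py instead.
cfg.dump('work_dirs/i3d_r50_32x2x1_100e_kinetics400_rgb_4gpu.py')
```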

For more details, you can refer to the Training setting part in getting_started.

## Test

You can use the following command to test a model.

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```

Example: test the I3D model on the Kinetics-400 dataset and dump the results to a JSON file.

```shell
python tools/test.py configs/recognition/i3d/i3d_r50_32x2x1_100e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json --average-clips prob
```
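
After this command finishes, result.json can be inspected directly. The snippet below is a hedged sketch: it assumes the dump holds one list of averaged class scores per test video (400 classes for Kinetics-400), which matches our reading of --out together with --average-clips prob; adjust it if the actual dump format differs.

```python
# Illustrative sketch for inspecting the dumped scores in result.json.
# Assumption: the file contains one list of class scores per test video.
import json

import numpy as np

with open('result.json') as f:
    scores = np.array(json.load(f))  # expected shape: (num_videos, num_classes)

print(scores.shape)
print(scores.argmax(axis=1)[:10])  # top-1 predicted class index for the first 10 videos
```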

For more details, you can refer to the Test a dataset part in getting_started.