# CSN

**Video Classification With Channel-Separated Convolutional Networks**

## Abstract

Group convolution has been shown to offer great computational savings in various 2D convolutional architectures for image classification. It is natural to ask: 1) if group convolution can help to alleviate the high computational cost of video classification networks; 2) what factors matter the most in 3D group convolutional networks; and 3) what are good computation/accuracy trade-offs with 3D group convolutional networks. This paper studies the effects of different design choices in 3D group convolutional networks for video classification. We empirically demonstrate that the amount of channel interactions plays an important role in the accuracy of 3D group convolutional networks. Our experiments suggest two main findings. First, it is a good practice to factorize 3D convolutions by separating channel interactions and spatiotemporal interactions as this leads to improved accuracy and lower computational cost. Second, 3D channel-separated convolutions provide a form of regularization, yielding lower training accuracy but higher test accuracy compared to 3D convolutions. These two empirical findings lead us to design an architecture -- Channel-Separated Convolutional Network (CSN) -- which is simple, efficient, yet accurate. On Sports1M, Kinetics, and Something-Something, our CSNs are comparable with or better than the state-of-the-art while being 2-3 times more efficient.
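To make the parameter savings of channel separation concrete, here is a back-of-the-envelope sketch (plain Python; the function names are ours, not from the codebase). It compares the weight count of a standard 3x3x3 convolution, which entangles channel and spatiotemporal interactions, with an ip-CSN-style factorization into a 1x1x1 convolution (channel interactions) followed by a 3x3x3 depthwise convolution (spatiotemporal interactions):

```python
def conv3d_params(c_in, c_out, k=3, groups=1):
    # Weight count of a 3D convolution with cubic kernel k (bias omitted).
    # With groups=c_in the convolution is depthwise (channel-separated).
    return (c_in // groups) * c_out * k ** 3

def standard_block(c):
    # One full 3x3x3 convolution: channel and spatiotemporal
    # interactions happen in a single, expensive layer.
    return conv3d_params(c, c, k=3)

def ip_csn_block(c):
    # ip-CSN factorization: 1x1x1 conv for channel interactions,
    # then a 3x3x3 depthwise conv for spatiotemporal interactions.
    return conv3d_params(c, c, k=1) + conv3d_params(c, c, k=3, groups=c)

c = 256
print(standard_block(c))  # 256 * 256 * 27 = 1769472 weights
print(ip_csn_block(c))    # 256 * 256 + 256 * 27 = 72448 weights
```

For 256 channels the factorized block uses roughly 24x fewer weights, which is the source of the computation/accuracy trade-off the paper studies.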

## Results and Models

### Kinetics-400

| frame sampling strategy | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | testing protocol | FLOPs | params | config | ckpt | log |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 32x2x1 | 224x224 | 8 | ResNet152 (IR) | IG65M | 82.87 | 95.90 | 10 clips x 3 crop | 97.63G | 29.70M | config | ckpt | log |
| 32x2x1 | 224x224 | 8 | ResNet152 (IR+BNFrozen) | IG65M | 82.84 | 95.92 | 10 clips x 3 crop | 97.63G | 29.70M | config | ckpt | log |
| 32x2x1 | 224x224 | 8 | ResNet50 (IR+BNFrozen) | IG65M | 79.44 | 94.26 | 10 clips x 3 crop | 55.90G | 13.13M | config | ckpt | log |
| 32x2x1 | 224x224 | x | ResNet152 (IP) | None | 77.80 | 93.10 | 10 clips x 3 crop | 109.9G | 33.02M | config | infer_ckpt | x |
| 32x2x1 | 224x224 | x | ResNet152 (IR) | None | 76.53 | 92.28 | 10 clips x 3 crop | 97.6G | 29.70M | config | infer_ckpt | x |
| 32x2x1 | 224x224 | x | ResNet152 (IP+BNFrozen) | IG65M | 82.68 | 95.69 | 10 clips x 3 crop | 109.9G | 33.02M | config | infer_ckpt | x |
| 32x2x1 | 224x224 | x | ResNet152 (IP+BNFrozen) | Sports1M | 79.07 | 93.82 | 10 clips x 3 crop | 109.9G | 33.02M | config | infer_ckpt | x |
| 32x2x1 | 224x224 | x | ResNet152 (IR+BNFrozen) | Sports1M | 78.57 | 93.44 | 10 clips x 3 crop | 109.9G | 33.02M | config | infer_ckpt | x |
1. The **gpus** column indicates the number of GPUs we used to obtain the checkpoint. If you want to use a different number of GPUs or videos per GPU, the best practice is to set `--auto-scale-lr` when calling `tools/train.py`; this option scales the learning rate according to the ratio of the actual batch size to the original batch size.
2. The validation set of Kinetics400 we used consists of 19796 videos. These videos are available at Kinetics400-Validation. The corresponding data list (each line is of the format `video_id, num_frames, label_index`) and the label map are also available.
3. **infer_ckpt** means the checkpoint is ported from VMZ.
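As a small illustration of the data-list format quoted above, the following sketch parses lines of the form `video_id, num_frames, label_index` (the exact delimiter in the released list files may differ, so treat this as an assumption to verify against your copy):

```python
def parse_data_list(lines):
    # Parse lines of the form 'video_id, num_frames, label_index',
    # following the format quoted in the note above.
    records = []
    for line in lines:
        video_id, num_frames, label = [f.strip() for f in line.split(',')]
        records.append({
            'video_id': video_id,
            'num_frames': int(num_frames),
            'label': int(label),
        })
    return records

# Hypothetical sample line, for illustration only.
print(parse_data_list(['abcd1234, 300, 17']))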

For more details on data preparation, you can refer to Kinetics400.

## Train

You can use the following command to train a model:

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

Example: train the CSN model on the Kinetics-400 dataset deterministically, with periodic validation:

```shell
python tools/train.py configs/recognition/csn/ircsn_ig65m-pretrained-r152_8xb12-32x2x1-58e_kinetics400-rgb.py \
    --seed=0 --deterministic
```

For more details, you can refer to the Training part in the Training and Test Tutorial.

## Test

You can use the following command to test a model:

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```

Example: test the CSN model on the Kinetics-400 dataset and dump the result to a pkl file:

```shell
python tools/test.py configs/recognition/csn/ircsn_ig65m-pretrained-r152_8xb12-32x2x1-58e_kinetics400-rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --dump result.pkl
```
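The dumped file can be inspected with the standard `pickle` module. The exact structure written by `tools/test.py` depends on your MMAction2 version, so the sketch below assumes a hypothetical layout (a list of per-video dicts with `pred_score` and `gt_label` keys) purely to show how a top-1 accuracy could be recomputed offline:

```python
import os
import pickle
import tempfile

# Hypothetical result layout: one dict per video with class scores
# and the ground-truth label. Check your own result.pkl for the
# actual keys before relying on this.
results = [
    {'pred_score': [0.1, 0.7, 0.2], 'gt_label': 1},
    {'pred_score': [0.6, 0.3, 0.1], 'gt_label': 2},
]

# Round-trip through a pickle file, as tools/test.py --dump would produce.
path = os.path.join(tempfile.mkdtemp(), 'result.pkl')
with open(path, 'wb') as f:
    pickle.dump(results, f)
with open(path, 'rb') as f:
    loaded = pickle.load(f)

def top1_acc(items):
    # A prediction is correct when the argmax of the scores
    # matches the ground-truth label.
    correct = sum(
        max(range(len(r['pred_score'])), key=r['pred_score'].__getitem__) == r['gt_label']
        for r in items
    )
    return correct / len(items)

print(top1_acc(loaded))  # 0.5 for the toy data above
```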

For more details, you can refer to the Test part in the Training and Test Tutorial.

## Citation

```BibTeX
@inproceedings{wang2019video,
  author = {Wang, Heng and Feiszli, Matt and Torresani, Lorenzo},
  title = {Video Classification With Channel-Separated Convolutional Networks},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  pages = {5551--5560},
  year = {2019},
  doi = {10.1109/ICCV.2019.00565}
}

@inproceedings{ghadiyaram2019large,
  title = {Large-scale weakly-supervised pre-training for video action recognition},
  author = {Ghadiyaram, Deepti and Tran, Du and Mahajan, Dhruv},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages = {12046--12055},
  year = {2019}
}
```