Self-Supervised Spatio-Temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics
TensorFlow implementation of our CVPR 2019 paper Self-Supervised Spatio-Temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics.
A journal (T-PAMI 2021) extension of this work can be found here, with extensive additional analysis and a significant performance gain (~30%). The corresponding PyTorch implementation is available here: https://github.com/laura-wang/video_repres_sts.
We release part of our training code on the UCF101 dataset. It covers the self-supervised learning task based on motion statistics (see our paper for details).
The full training protocol (both motion statistics and appearance statistics) is implemented in the PyTorch version: https://github.com/laura-wang/video_repres_sts.
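For intuition only: one flavor of motion statistic the paper predicts is which spatial block of a clip carries the largest motion, and what that block's dominant orientation is. The minimal NumPy sketch below computes such a target from a single optical-flow field. The grid size, the 8-bin orientation quantization, and the function name are illustrative assumptions, not the repository's actual implementation.

```python
import numpy as np

def largest_motion_block(flow, grid=4):
    """Illustrative motion statistic: split a flow field (H, W, 2) into a
    grid x grid layout and return the index of the block with the largest
    average motion magnitude, plus its dominant orientation in degrees."""
    h, w, _ = flow.shape
    mag = np.linalg.norm(flow, axis=2)                       # per-pixel magnitude
    ang = np.degrees(np.arctan2(flow[..., 1], flow[..., 0])) % 360
    bh, bw = h // grid, w // grid
    best_idx, best_mag = -1, -1.0
    for i in range(grid):
        for j in range(grid):
            block_mag = mag[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
            if block_mag > best_mag:
                best_idx, best_mag = i * grid + j, block_mag
                i0, j0 = i, j
    # Dominant orientation of the winning block, quantized into 8 bins of 45 deg.
    block_ang = ang[i0*bh:(i0+1)*bh, j0*bw:(j0+1)*bw]
    hist, _ = np.histogram(block_ang, bins=8, range=(0, 360))
    return best_idx, 45 * int(np.argmax(hist))
```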
- tensorflow >= 1.9.0
- Python 3
- cv2
- scipy
You can download the original UCF101 dataset from the official website, then extract RGB frames from the videos and compute optical flow with the TV-L1 method. Alternatively, we recommend directly downloading the pre-processed RGB and optical flow data of UCF101 provided by feichtenhofer.
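If you prefer to extract the data yourself, here is a minimal sketch of one way to dump RGB frames and TV-L1 flow with OpenCV. It assumes `opencv-contrib-python` is installed (the TV-L1 implementation lives in `cv2.optflow`); the output paths and the [-20, 20] flow clipping range are illustrative choices, not fixed by this repository.

```python
import cv2  # requires opencv-contrib-python for cv2.optflow
import numpy as np

def extract_rgb_and_tvl1(video_path, out_dir):
    """Save RGB frames and TV-L1 optical flow for one video. Clipping flow
    to [-20, 20] and rescaling to [0, 255] is a common convention for
    storing flow as 8-bit JPEGs; adjust to taste."""
    tvl1 = cv2.optflow.DualTVL1OpticalFlow_create()
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        return
    cv2.imwrite(f"{out_dir}/img_00000.jpg", prev)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    idx = 1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(f"{out_dir}/img_{idx:05d}.jpg", frame)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = tvl1.calc(prev_gray, gray, None)          # (H, W, 2) float32
        flow = np.clip(flow, -20, 20)
        flow = ((flow + 20) * (255.0 / 40)).astype(np.uint8)
        cv2.imwrite(f"{out_dir}/flow_x_{idx:05d}.jpg", flow[..., 0])
        cv2.imwrite(f"{out_dir}/flow_y_{idx:05d}.jpg", flow[..., 1])
        prev_gray = gray
        idx += 1
    cap.release()
```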
Here we provide the first version of our training code, which uses a `placeholder`-based data reading pipeline, so you do not need to convert the RGB/optical flow data into the tfrecord format. We have also rewritten the training code using the Dataset API, but the placeholder version is sufficient for understanding the motion statistics.
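For readers unfamiliar with the placeholder style, the pattern is roughly the following TF 1.x sketch. The backbone, the loader stub, the clip shape, and the number of classes are all hypothetical stand-ins, not the repository's actual names or values.

```python
import numpy as np
import tensorflow as tf  # TF 1.x API, matching the requirement above

def toy_backbone(clips):
    # Stand-in for the repository's 3D-conv encoder (hypothetical).
    feat = tf.reduce_mean(clips, axis=[1, 2, 3])   # global average pool
    return tf.layers.dense(feat, 16)               # e.g. 16 motion-statistic classes

def next_batch(batch_size=8):
    # Hypothetical loader stub: the real code reads RGB clips with cv2 and
    # derives motion-statistic labels from the TV-L1 flow.
    return (np.random.rand(batch_size, 16, 112, 112, 3).astype(np.float32),
            np.random.randint(0, 16, size=batch_size))

# Placeholder-style inputs: batches are fed in at each step via feed_dict,
# with no tfrecord serialization required.
clips = tf.placeholder(tf.float32, [None, 16, 112, 112, 3], name="clips")
labels = tf.placeholder(tf.int64, [None], name="motion_stat_labels")
loss = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=toy_backbone(clips)))
train_op = tf.train.AdamOptimizer(1e-4).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(100):
        batch_clips, batch_labels = next_batch()
        _, step_loss = sess.run([train_op, loss],
                                feed_dict={clips: batch_clips, labels: batch_labels})
```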
Before running `python train.py`, remember to set the correct dataset directory in the list file. Then you can play with the motion statistics!
If you find this repository useful in your research, please consider citing:
@inproceedings{wang2019self,
  title={Self-Supervised Spatio-Temporal Representation Learning for Videos by Predicting Motion and Appearance Statistics},
  author={Wang, Jiangliu and Jiao, Jianbo and Bao, Linchao and He, Shengfeng and Liu, Yunhui and Liu, Wei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={4006--4015},
  year={2019}
}