FlowGrounded-VideoPrediction

Torch implementation of our ECCV18 paper on video prediction from a single still image.

In each panel, from left to right: the single starting frame and the predicted sequence (the next 16 frames).

Getting started

git clone https://github.com/Yijunmaverick/FlowGrounded-VideoPrediction
cd FlowGrounded-VideoPrediction

Preparation

  • Data

    • Put the video data (e.g., .mp4 or .avi files) under ./datasets/DTexture/raw/.
    • Run the following commands to convert the videos to frames and generate the metadata for training (an illustrative frame-extraction sketch follows the commands below). The testing data are prepared in the same way. Make sure that the metadata for both training and testing are ready before running any experiments.
cd datasets/
sh data_process.sh
cd ..
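If you want to check or reproduce the conversion step manually, the lines below are a minimal sketch of extracting frames from one clip with ffmpeg; the clip name, frame rate, and output layout are placeholder assumptions, and data_process.sh remains the script to use since it also writes the metadata.

# Hedged sketch only: extract frames from a single raw clip with ffmpeg.
# "flag.mp4", the 25 fps rate, and the output folder are illustrative placeholders.
mkdir -p datasets/DTexture/frames/flag
ffmpeg -i datasets/DTexture/raw/flag.mp4 -r 25 datasets/DTexture/frames/flag/%05d.png
# This does not produce the training/testing metadata; data_process.sh handles that.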
  • SPyNet

    • We use the flows estimated by SPyNet as the ground truth for training. Make sure that the SPyNet code is compiled successfully and runs correctly.
  • Pretrained models

    • Run the following command to download the pretrained VGG network (for the perceptual loss) and our models trained on the WavingFlag data for testing.
sh download_models.sh

Training

  • Train the 3DcVAE model for flow prediction:
th train_3DcVAE.lua --dataRoot datasets/DTexture
  • Train the flow2rgb model for frame generation:
th train_flow2rgb.lua --dataRoot datasets/DTexture

Testing

  • Test the two steps (flow prediction + frame generation) together:
th test.lua --dataRoot datasets/DTexture
  • With ffmpeg installed, run the following command to convert the predicted frames to a GIF or video:
python gif.py
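If you prefer to call ffmpeg directly instead of gif.py, the commands below are a minimal sketch; the input frame pattern and output file names are assumptions and should point to wherever test.lua saves the predicted frames.

# Hedged sketch: stitch predicted frames into a video and a GIF with ffmpeg.
# "results/%05d.png" and the output names are placeholders, not the repository's actual paths.
ffmpeg -framerate 8 -i results/%05d.png -pix_fmt yuv420p prediction.mp4
ffmpeg -framerate 8 -i results/%05d.png prediction.gif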

Citation

@inproceedings{Prediction-ECCV-2018,
    author = {Li, Yijun and Fang, Chen and Yang, Jimei and Wang, Zhaowen and Lu, Xin and Yang, Ming-Hsuan},
    title = {Flow-Grounded Spatial-Temporal Video Prediction from Still Images},
    booktitle = {European Conference on Computer Vision},
    year = {2018}
}

Acknowledgement

  • The code is heavily borrowed from DrNet.