
Video Imagination from a Single Image with Transformation Generation

This repository contains an implementation of Video Imagination from a Single Image with Transformation Generation. The framework can synthesize multiple imaginary videos from a single image.

Imaginary Video Example

We randomly pick some imaginary videos synthesized by our framework. The input is a single image from the UCF101 dataset, and the output imaginary video contains five frames. The following GIF is a demo of a synthesized imaginary video. Loading it may take a moment; please wait for the demonstration.

Imaginary Video

[GIF: demo of the synthesized imaginary video]

Input image

[Image: input image from UCF101]

Data

The framework can be trained on three datasets: Moving MNIST, 2D Shapes, and UCF101. No pre-processing is needed except normalizing images to the range [0, 1]. The videos (or image tuples) need to be converted to TFRecords first.
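As a concrete illustration, here is a minimal sketch (not the repository's own conversion script) of writing frame tuples to a TFRecord file with the TensorFlow r1.x API. The feature key `frames`, the five-frame clip shape, and the function name `write_tfrecords` are assumptions made for this example.

```python
# Minimal TFRecord writer sketch, assuming clips arrive as uint8 arrays of
# shape [5, H, W, 3]; the feature key 'frames' is a hypothetical choice.
import numpy as np
import tensorflow as tf

def write_tfrecords(video_clips, out_path):
    writer = tf.python_io.TFRecordWriter(out_path)
    for clip in video_clips:
        # Normalize to [0, 1] as the README requires, then serialize the raw floats.
        frames = (clip.astype(np.float32) / 255.0).tobytes()
        example = tf.train.Example(features=tf.train.Features(feature={
            'frames': tf.train.Feature(
                bytes_list=tf.train.BytesList(value=[frames]))
        }))
        writer.write(example.SerializeToString())
    writer.close()
```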

Training

The code requires a TensorFlow r1.0 installation.

To train the framework, after you prepare the TFRecords, run main.py. This script builds the model and graph and trains the networks; a sketch of a matching input pipeline follows.
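For reference, here is a hedged sketch of a TF r1.0-style queue-based input pipeline that would read such records back. It mirrors the writer sketch above, so the feature key and clip shape are the same illustrative assumptions rather than the repository's actual record format.

```python
# Queue-based TFRecord reader sketch (TF r1.x API); run inside a session after
# calling tf.train.start_queue_runners(). The key 'frames' and the shape
# [5, H, W, 3] are the same assumptions used in the writer sketch.
import tensorflow as tf

def read_clip(tfrecord_path, height, width):
    queue = tf.train.string_input_producer([tfrecord_path])
    reader = tf.TFRecordReader()
    _, serialized = reader.read(queue)
    features = tf.parse_single_example(
        serialized, features={'frames': tf.FixedLenFeature([], tf.string)})
    clip = tf.decode_raw(features['frames'], tf.float32)
    # Frames were stored already normalized to [0, 1].
    return tf.reshape(clip, [5, height, width, 3])
```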

Notes

The code is modified from A Tensorflow Implementation of DCGAN. The on-the-fly 2D shape dataset generation code is adapted from code by the dataset's author.
