RSGAN

RECURRENT STACKED GENERATIVE ADVERSARIAL NETWORK FOR CONDITIONAL VIDEO GENERATION

Generating video frames from a pre-condition is a challenging problem: it requires understanding per-frame content, visual dynamics, and their relevance to the pre-condition. In this paper, we propose a novel Recurrent Stacked Generative Adversarial Network (RSGAN) model to generate video frames from a given pre-condition. The pre-condition can be anything related to the generated video, such as an action class, a sentence description, or an fMRI signal. To our knowledge, this is the first work to address conditional video generation with an adversarial network.
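To illustrate the core idea of conditioned recurrent frame generation, here is a minimal NumPy sketch. It is not the paper's implementation: all layer sizes, the single-layer recurrence, and the `generate_video` helper are hypothetical, chosen only to show how each frame is produced from the pre-condition, fresh noise, and a hidden state carried over from the previous frame.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(in_dim, out_dim):
    """Randomly initialised weight matrix for a toy linear layer."""
    return rng.standard_normal((in_dim, out_dim)) * 0.01

# Hypothetical dimensions; the actual model sizes are not given here.
COND_DIM, NOISE_DIM, HIDDEN_DIM, FRAME_DIM = 16, 8, 32, 64  # FRAME_DIM = flattened frame

W_in = linear(COND_DIM + NOISE_DIM + HIDDEN_DIM, HIDDEN_DIM)
W_out = linear(HIDDEN_DIM, FRAME_DIM)

def generate_video(condition, num_frames):
    """Unroll a recurrent generator: every frame depends on the
    pre-condition, per-step noise, and the previous hidden state."""
    h = np.zeros(HIDDEN_DIM)
    frames = []
    for _ in range(num_frames):
        z = rng.standard_normal(NOISE_DIM)
        h = np.tanh(np.concatenate([condition, z, h]) @ W_in)  # recurrent update
        frames.append(np.tanh(h @ W_out))                      # emit one frame
    return np.stack(frames)  # shape: (num_frames, FRAME_DIM)

video = generate_video(rng.standard_normal(COND_DIM), num_frames=5)
print(video.shape)  # (5, 64)
```

In a full adversarial setup, this generator would be trained against a discriminator that scores (video, pre-condition) pairs; the sketch only shows the conditioned recurrent unrolling itself.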

Successful examples:

Bird-1

Bird-5

Flower-1


Bad examples:

People are standing, Standing-1

People pointing finger to other, Pointing-1

Someone is writing something, Writing-1


Poster [LINK], paper [LINK].
