RECURRENT STACKED GENERATIVE ADVERSARIAL NETWORK FOR CONDITIONAL VIDEO GENERATION
Generating video frames from a pre-condition is a challenging problem that requires understanding per-frame content, visual dynamics, and their relevance to the pre-condition. In this paper, we propose a novel Recurrent Stacked Generative Adversarial Network (RSGAN) based model to generate video frames from a given pre-condition. The pre-condition can be anything related to the generated video, such as action classes, sentence descriptors, or fMRI signals. To the best of our knowledge, this is the first work to address the problem of conditional video generation using an adversarial network.
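To make the idea of a recurrent, condition-driven generator concrete, here is a minimal NumPy sketch. It is an assumption-laden illustration, not the paper's RSGAN architecture: each frame vector is produced from fresh noise, a condition embedding (e.g. a one-hot action class), and the previous hidden state, showing how the pre-condition can be injected at every timestep. All dimensions and the functions `generate_video`, `W_in`, and `W_out` are hypothetical.

```python
import numpy as np

# Hypothetical sketch of a conditional recurrent generator (NOT the
# paper's exact RSGAN model). Each frame depends on per-frame noise z,
# a fixed condition embedding c, and the previous hidden state h.

rng = np.random.default_rng(0)

FRAME_DIM, COND_DIM, NOISE_DIM, HIDDEN = 16, 4, 8, 32

# Randomly initialised weights stand in for trained parameters.
W_in = rng.standard_normal((NOISE_DIM + COND_DIM + HIDDEN, HIDDEN)) * 0.1
W_out = rng.standard_normal((HIDDEN, FRAME_DIM)) * 0.1

def generate_video(condition, num_frames=5):
    """Generate `num_frames` frame vectors conditioned on `condition`."""
    h = np.zeros(HIDDEN)
    frames = []
    for _ in range(num_frames):
        z = rng.standard_normal(NOISE_DIM)     # fresh noise per frame
        x = np.concatenate([z, condition, h])  # condition injected each step
        h = np.tanh(x @ W_in)                  # recurrent hidden update
        frames.append(np.tanh(h @ W_out))      # frame values in (-1, 1)
    return np.stack(frames)

cond = np.eye(COND_DIM)[1]   # e.g. one-hot encoding of action class #1
video = generate_video(cond)
print(video.shape)           # (num_frames, FRAME_DIM)
```

In a full adversarial setup, a discriminator would score such frame sequences jointly with the condition, so the generator learns both per-frame realism and relevance to the pre-condition.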
[Figure: successful and bad generated examples; bad example caption: people pointing a finger at each other]