Learn the Force We Can: Multi-Object Video Generation from Pixel-Level Interactions
Official repository of the paper
Project Website • arXiv
Abstract: We propose a novel unsupervised method to autoregressively generate videos from a single frame and a sparse motion input. Our trained model can generate realistic object-to-object interactions and separate the dynamics and the extents of multiple objects despite only observing them under correlated motion activities. Key components of our method are the randomized conditioning scheme, the encoding of the input motion control, and the randomized and sparse sampling to break correlations. Our model, which we call YODA, has the ability to move objects without physically touching them. We show both qualitatively and quantitatively that YODA accurately follows the user control, while yielding video quality that is on par with or better than prior state-of-the-art work on video generation on several datasets.
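To illustrate the interface described above (single frame in, sparse pixel-level motion in, video out), below is a minimal sketch of one plausible way to encode a handful of pixel drags as a sparse control map and roll out frames autoregressively. This is not the released implementation: the `encode_sparse_motion` and `rollout` helpers and the generic `model` callable are hypothetical placeholders standing in for a trained conditional generator.

```python
import torch


def encode_sparse_motion(drags, height, width):
    """Encode pixel-level drags as a sparse 3-channel control tensor.

    Each drag is ((y, x), (dy, dx)): a source pixel and its desired displacement.
    Channels 0-1 hold the displacement at the dragged pixels, channel 2 is a
    binary mask marking where a control is applied; all other pixels stay zero.
    This is only one plausible encoding of sparse motion input, not necessarily
    the one used in the paper.
    """
    control = torch.zeros(3, height, width)
    for (y, x), (dy, dx) in drags:
        control[0, y, x] = dy
        control[1, y, x] = dx
        control[2, y, x] = 1.0  # "a force is applied at this pixel"
    return control


@torch.no_grad()
def rollout(model, first_frame, drags_per_step, height, width):
    """Autoregressively generate frames from one image and per-step sparse controls.

    `model` is a stand-in for a trained generator that maps
    (previous frame, control map) -> next frame.
    """
    frames = [first_frame]
    for drags in drags_per_step:
        control = encode_sparse_motion(drags, height, width)
        next_frame = model(frames[-1].unsqueeze(0), control.unsqueeze(0))[0]
        frames.append(next_frame)
    return torch.stack(frames)
```

In this kind of setup, a user can drag a single pixel of one object while leaving all other pixels unconstrained, which is what allows the model to reveal which pixels move together and, as described in the abstract, to act on objects without physically touching them.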
Citation
For citation, we recommend using the following reference:
A. Davtyan, P. Favaro. Learn the Force We Can: Multi-Object Video Generation from Pixel-Level Interactions. Technical Report, 2023.
Code
Coming soon...