VSTAR: Generative Temporal Nursing for Longer Dynamic Video Synthesis

🔥 Official implementation of "VSTAR: Generative Temporal Nursing for Longer Dynamic Video Synthesis"

🚀 TL;DR: VSTAR enables pretrained text-to-video models to generate longer videos with dynamic visual evolution in a single pass, without any finetuning.


Getting Started

Our environment is built on top of VideoCrafter2:

conda create -n vstar python=3.10.6 pip jupyter jupyterlab matplotlib
conda activate vstar
pip install -r requirements.txt
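
As a quick sanity check of the environment (this assumes PyTorch is installed via requirements.txt, as in the VideoCrafter2 setup), you can confirm the install and GPU visibility:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"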

Download the pretrained VideoCrafter2 320x512 checkpoint from here and store it in the checkpoint folder.
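
For example, you could place the downloaded file as follows (the exact filename and subpath are assumptions; adjust them to match whatever inference_VSTAR.ipynb expects):

mkdir -p checkpoint
# move the downloaded VideoCrafter2 320x512 checkpoint into the folder;
# the filename "model.ckpt" below is only an assumption
mv ~/Downloads/model.ckpt checkpoint/model.ckpt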

Inference

Run inference_VSTAR.ipynb for testing.
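
If you prefer to run the notebook non-interactively (e.g. on a remote GPU machine), one option is to execute it with nbconvert, which is available since jupyter is part of the environment created above; this is optional and just one way to run it:

jupyter nbconvert --to notebook --execute inference_VSTAR.ipynb --output inference_VSTAR_executed.ipynb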

License

This project is open-sourced under the AGPL-3.0 license. See the LICENSE file for details.

For a list of other open source components included in this project, see the file 3rd-party-licenses.txt.

Purpose of the project

This software is a research prototype, solely developed for and published as part of the publication cited above.

Contact

Please feel free to open an issue or reach out directly if you have questions, need help, or would like further explanation. You can also write to liyumeng07@outlook.com.
