
ModelScope text2video Extension for AUTOMATIC1111's StableDiffusion WebUI

An Auto1111 extension implementing ModelScope text2video using only the Auto1111 WebUI's dependencies and downloadable models (so no logins are required anywhere).

8 GB of VRAM should be enough to run on GPU with the low-VRAM VAE enabled at 256x256 (and we are already getting reports of people generating 192x192 videos with 4 GB of VRAM). A 24-frame 256x256 video definitely fits into the 12 GB of an NVIDIA GeForce RTX 2080 Ti. We would appreciate any help with this extension, especially pull requests.

Update 2023-03-26: prompt weights implemented!
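The weighting syntax isn't documented in this README; assuming it follows the usual Auto1111 parenthesis convention (an assumption, not confirmed here), a weighted prompt would look like:

```
flowers turning into (lava:1.3)
```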

Test examples:

Prompt: flowers turning into lava

[video: out.mp4]

Prompt: cinematic explosion by greg rutkowski

[video: vid.mp4]

Prompt: really attractive anime girl skating, by makoto shinkai, cinematic lighting

[video: gosh.mp4]

Where to get the weights

Download the following files from the original HuggingFace repository. Alternatively, download the half-precision fp16 pruned weights (they are smaller and use less VRAM on loading):

  • VQGAN_autoencoder.pth
  • configuration.json
  • open_clip_pytorch_model.bin
  • text2video_pytorch_model.pth

Then put them in stable-diffusion-webui/models/ModelScope/t2v, creating those two folders if they are missing.
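If you prefer to script the download, here is a minimal sketch using the huggingface_hub package (assumptions: the package is installed, and the four files sit at the top level of the damo-vilab/modelscope-text-to-video-synthesis model repo, which shares its name with the space linked under Dev resources):

```python
# Minimal download sketch (not part of the extension itself).
# Assumes: pip install huggingface-hub, and that the weights live in
# the damo-vilab/modelscope-text-to-video-synthesis HuggingFace repo.
from pathlib import Path

from huggingface_hub import hf_hub_download

target = Path("stable-diffusion-webui/models/ModelScope/t2v")
target.mkdir(parents=True, exist_ok=True)  # creates ModelScope/ and t2v/ if missing

for name in (
    "VQGAN_autoencoder.pth",
    "configuration.json",
    "open_clip_pytorch_model.bin",
    "text2video_pytorch_model.pth",
):
    # Downloads the file (or reuses the local cache) and places a copy in target.
    hf_hub_download(
        repo_id="damo-vilab/modelscope-text-to-video-synthesis",
        filename=name,
        local_dir=target,
    )
```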

Screenshots

[Screenshot 2023-03-20 at 15-52-21 Stable Diffusion]

[Screenshot 2023-03-20 at 15-52-15 Stable Diffusion]

Dev resources

HuggingFace space:

https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis

The model's PyTorch implementation from ModelScope:

https://github.com/modelscope/modelscope/tree/master/modelscope/models/multi_modal/video_synthesis

Google Colab from the devs:

https://colab.research.google.com/drive/1uW1ZqswkQ9Z9bp5Nbo5z59cAn7I0hE6R?usp=sharing