TPU version (~6x faster than standard Colab GPUs):
Example - morphing between "blueberry spaghetti" and "strawberry spaghetti"
[video: berry_good_spaghetti.2.mp4]
The in-browser Colab demo allows you to generate videos by interpolating the latent space of Stable Diffusion.
You can either dream up different versions of the same prompt, or morph between different text prompts (with seeds set for each for reproducibility).
The app is built with Gradio, which allows you to interact with the model in a web app. Here's how I suggest you use it:
- Use the "Images" tab to generate images you like.
  - Find two images you want to morph between.
  - These images should use the same settings (guidance scale, scheduler, height, width).
  - Keep track of the seeds/settings you used so you can reproduce them.
- Generate videos using the "Videos" tab.
  - Using the images you found in the step above, provide the prompts/seeds you recorded.
  - Set `num_interpolation_steps` - for testing you can use a small number like 3 or 5, but to get great results you'll want to use something larger (60-200 steps).
  - You can set `output_dir` to the directory you wish to save to.
Install the package:
pip install -U stable_diffusion_videos
Authenticate with Hugging Face:
huggingface-cli login
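If you'd rather authenticate from Python (for example, inside a notebook), here's a minimal sketch using `huggingface_hub` — the token below is a placeholder for one from your Hugging Face account settings:

# Programmatic alternative to the CLI login (token value is a placeholder).
from huggingface_hub import login

login(token="hf_your_token_here")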
Note: For Apple M1 architecture, use `torch.float32` instead, as `torch.float16` is not available on MPS.
from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    revision="fp16",
).to("cuda")

video_path = pipeline.walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    num_interpolation_steps=3,
    height=512,              # use multiples of 64 if > 512. Multiples of 8 if < 512.
    width=512,               # use multiples of 64 if > 512. Multiples of 8 if < 512.
    output_dir='dreams',     # Where images/videos will be saved
    name='animals_test',     # Subdirectory of output_dir where images/videos will be saved
    guidance_scale=8.5,      # Higher adheres to prompt more, lower lets model take the wheel
    num_inference_steps=50,  # Number of diffusion steps per image generated. 50 is good default
)
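Per the Apple M1 note above, here's a hedged sketch of the same setup on Apple Silicon — the intended changes are just `torch.float32` (no `fp16` revision) and the `"mps"` device; treat it as a starting point rather than a tested configuration:

import torch
from stable_diffusion_videos import StableDiffusionWalkPipeline

# Sketch for Apple Silicon: torch.float16 isn't available on MPS, so load in float32 (see note above).
pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float32,
).to("mps")

video_path = pipeline.walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    num_interpolation_steps=3,
    output_dir='dreams',
)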
New! Music can be added to the video by providing a path to an audio file. The audio will inform the rate of interpolation so the videos move to the beat 🎶
from stable_diffusion_videos import StableDiffusionWalkPipeline
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    revision="fp16",
).to("cuda")

# Seconds in the song.
audio_offsets = [146, 148]  # [Start, end]
fps = 30                    # Use lower values for testing (5 or 10), higher values for better quality (30 or 60)

# Convert seconds to frames
num_interpolation_steps = [(b-a) * fps for a, b in zip(audio_offsets, audio_offsets[1:])]

video_path = pipeline.walk(
    prompts=['a cat', 'a dog'],
    seeds=[42, 1337],
    num_interpolation_steps=num_interpolation_steps,
    audio_filepath='audio.mp3',
    audio_start_sec=audio_offsets[0],
    fps=fps,
    height=512,              # use multiples of 64 if > 512. Multiples of 8 if < 512.
    width=512,               # use multiples of 64 if > 512. Multiples of 8 if < 512.
    output_dir='dreams',     # Where images/videos will be saved
    guidance_scale=7.5,      # Higher adheres to prompt more, lower lets model take the wheel
    num_inference_steps=50,  # Number of diffusion steps per image generated. 50 is good default
)
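The seconds-to-frames conversion generalizes to more than two prompts: provide one audio offset per prompt, and the comprehension yields one segment length per transition. A small sketch (the extra prompt, seed, and offsets below are made up for illustration), reusing the `pipeline` from above:

# Hypothetical three-prompt version: N prompts and N offsets give N-1 interpolation segments.
audio_offsets = [146, 148, 152]  # seconds in the song, one per prompt
fps = 30

# [(148-146)*30, (152-148)*30] == [60, 120] frames for the two transitions
num_interpolation_steps = [(b-a) * fps for a, b in zip(audio_offsets, audio_offsets[1:])]

video_path = pipeline.walk(
    prompts=['a cat', 'a dog', 'a horse'],
    seeds=[42, 1337, 2022],
    num_interpolation_steps=num_interpolation_steps,
    audio_filepath='audio.mp3',
    audio_start_sec=audio_offsets[0],
    fps=fps,
    output_dir='dreams',
)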
To run the Gradio app locally instead of in Colab:

from stable_diffusion_videos import StableDiffusionWalkPipeline, Interface
import torch

pipeline = StableDiffusionWalkPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
    revision="fp16",
).to("cuda")

interface = Interface(pipeline)
interface.launch()
You can also run the model with Cog. First, download the pre-trained weights:
cog run scripts/download_weights
Then run a prediction, separating each prompt with a `|`:
cog predict -i prompts="a cat | a dog | a horse"
This work builds on a script shared by @karpathy. The script was modified into this gist, which was then updated and modified into this repo.
You can file any issues/feature requests here.
Enjoy 🤗
You can also 4x upsample your images with Real-ESRGAN!
It's included when you pip install the latest version of `stable-diffusion-videos`! You'll be able to use `upsample=True` in the `walk` function, like this:
pipeline.walk(['a cat', 'a dog'], [234, 345], upsample=True)
The above may cause you to run out of VRAM. No problem, you can do upsampling separately.
To upsample an individual image:
from stable_diffusion_videos import RealESRGANModel
model = RealESRGANModel.from_pretrained('nateraw/real-esrgan')
enhanced_image = model('your_file.jpg')
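Assuming the result is a PIL image (an assumption about the return type, not something the snippet above guarantees), you can save it with the usual PIL call:

# Assumption: `enhanced_image` is a PIL.Image; save the 4x-upsampled result to disk.
enhanced_image.save('your_file_4x.jpg')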
Or, to do a whole folder:
from stable_diffusion_videos import RealESRGANModel
model = RealESRGANModel.from_pretrained('nateraw/real-esrgan')
model.upsample_imagefolder('path/to/images/', 'path/to/output_dir')
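A natural use is upsampling the frames that `walk` wrote out. Assuming the frames from the first example landed under `dreams/animals_test` (the `output_dir`/`name` pair used earlier; the exact folder layout is an assumption), that would look like:

# Upsample the frames saved by the earlier walk() example (folder layout assumed).
model.upsample_imagefolder('dreams/animals_test/', 'dreams/animals_test_upsampled/')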