Simple and easy Stable Diffusion inference with LightningModule on GPU, CPU, and MPS (possibly all devices supported by Lightning).
| Name | Variant | Image Size |
|---|---|---|
| sdxl-base-1.0 | SDXL 1.0 | 1024 |
| sd1 | Stable Diffusion 1.5 | 512 |
| sd1.5 | Stable Diffusion 1.5 | 512 |
| sd1.4 | Stable Diffusion 1.4 | 512 |
| sd2_base | SD 2.0 trained on image size 512 | 512 |
| sd2_high | SD 2.0 trained on image size 768 | 768 |
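The table above can be captured as a small lookup so the matching `image_size` is passed for each model name. This dict and helper are illustrative only, not part of the library's API:

```python
# Map of supported model names to their native (training) image size,
# mirroring the table above. Hypothetical helper, not a library API.
MODEL_IMAGE_SIZE = {
    "sdxl-base-1.0": 1024,
    "sd1": 512,
    "sd1.5": 512,
    "sd1.4": 512,
    "sd2_base": 512,
    "sd2_high": 768,
}


def native_image_size(model_name: str) -> int:
    """Return the image size the given model variant was trained on."""
    try:
        return MODEL_IMAGE_SIZE[model_name]
    except KeyError:
        raise ValueError(f"Unknown model name: {model_name!r}") from None
```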
For SDXL, install the package along with `sgm` and CLIP (requirement strings with spaces or `@` must be quoted so the shell passes them to pip as a single argument):

```shell
pip install "sd_inference@git+https://github.com/aniketmaurya/stable_diffusion_inference@main"
pip install "sgm@git+https://github.com/Stability-AI/generative-models.git@main"
pip install "clip@git+https://github.com/openai/CLIP.git"
```

For Stable Diffusion 1.x/2.x, install the package along with taming-transformers and CLIP:

```shell
pip install "sd_inference@git+https://github.com/aniketmaurya/stable_diffusion_inference@main"
pip install -e "git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers" -q
pip install -U "clip@git+https://github.com/openai/CLIP.git@main" -q
```
```python
from stable_diffusion_inference import create_text2image

# text2image = create_text2image("sd1")
# text2image = create_text2image("sd2_high")  # SD 2.0 trained on 768px images
text2image = create_text2image("sd2_base")  # SD 2.0 trained on 512px images

image = text2image("cats in hats", image_size=512, inference_steps=50)
image.save("cats in hats.png")
```
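The filename above contains spaces, which is awkward on some systems. A small helper (hypothetical, not part of the package) can turn any prompt into a filesystem-friendly filename before saving:

```python
import re


def prompt_to_filename(prompt: str, ext: str = "png") -> str:
    """Turn a free-form prompt into a lowercase, dash-separated filename."""
    # Collapse every run of non-alphanumeric characters into a single dash.
    slug = re.sub(r"[^a-z0-9]+", "-", prompt.lower()).strip("-")
    return f"{slug}.{ext}"
```

Usage: `image.save(prompt_to_filename("cats in hats"))` saves to `cats-in-hats.png`.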
```python
from stable_diffusion_inference import SDXL

checkpoint_path = "/data/aniket/stabilityai/stable-diffusion-xl-base-1.0/sd_xl_base_1.0.safetensors"
version = "SDXL-base-1.0"

text2image = SDXL(checkpoint_path=checkpoint_path, version=version, low_vram=True)

prompt = "Llama in a jungle, Lightning, AI themed, purple colors, detailed, 8k"
images = text2image(prompt)
images[0]  # first generated image
```