Stable Diffusion web UI
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX (see the usage sketch after this list).
Awesome pre-trained models toolkit based on PaddlePaddle. (400+ models including Image, Text, Audio, Video and Cross-Modal with Easy Inference & Serving) [Security hardening in progress; interactive demos temporarily suspended, please wait patiently]
Just playing with getting VQGAN+CLIP running locally, rather than having to use Colab.
Autoregressive Model Beats Diffusion: 🦙 Llama for Scalable Image Generation
PALLAIDIUM - a generative AI movie studio integrated into the Blender video editor.
Beautiful and easy-to-use Stable Diffusion WebUI
Templating language written for Stable Diffusion workflows. Available as an extension for the Automatic1111 WebUI.
A Colab-friendly toolkit for generating 3D mesh models, videos, NeRF instances, and multiview images of colourful 3D objects from text and image prompts, based on DreamFields.
Just playing with getting CLIP Guided Diffusion running locally, rather than having to use Colab.
T-GATE: Temporally Gating Attention to Accelerate Diffusion Model for Free!
Open reproduction of MUSE for fast text2image generation.
web UI for GPU-accelerated ONNX pipelines like Stable Diffusion, even on Windows and AMD
[ICLR2023] Discrete Contrastive Diffusion for Cross-Modal Music and Image Generation (CDCD).
CLIP-GEN: Language-Free Training of a Text-to-Image Generator with CLIP
Stable Diffusion UI: Diffusers (CUDA/ONNX)
Yet Another Stable Diffusion Discord Bot
Local image generation using VQGAN-CLIP or CLIP-guided diffusion
Official code repo for "Editing Implicit Assumptions in Text-to-Image Diffusion Models"
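For context on the 🤗 Diffusers entry above, here is a minimal text-to-image sketch using the Diffusers library. The model id, prompt, and sampling parameters are illustrative assumptions and are not tied to any specific repository in this list; a CUDA-capable GPU is assumed.

```python
# Minimal text-to-image sketch with 🤗 Diffusers.
# The model id "runwayml/stable-diffusion-v1-5" is an illustrative choice.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision to reduce GPU memory use
)
pipe = pipe.to("cuda")  # move the pipeline to the GPU

prompt = "a watercolor painting of a lighthouse at sunset"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("lighthouse.png")
```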