Pinned
- YetAnotherStableDiffusion (Public): Stable Diffusion scripts based on Hugging Face diffusers.
- UMAP maps of internal CLIP text representations (direct viewable links):

  ## UMAP maps of internal OpenCLIP ViT-H/14 text representations
  Colors are derived from k-means clustering on the unprocessed (1024-dimensional) CLIP feature tensors.
  This variant of CLIP is used in Stable Diffusion version 2.x. (https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K)
  ## "by \<artist\>":
- VQGAN-CLIP (Public, forked from nerdyrodent/VQGAN-CLIP): Just playing with getting VQGAN+CLIP running locally, rather than having to use Colab. (Python · 1)
- glide-text2im (Public, forked from nerdyrodent/glide-text2im): GLIDE, a diffusion-based text-conditional image synthesis model. Now with example files for local running. (Python)
- Convert original Stable Diffusion checkpoints and safetensors to diffusers:

  # Stable Diffusion model conversion script
  ## Convert from the 'original implementation' to Hugging Face diffusers. Both .safetensors and .ckpt checkpoints are supported.
  The script is adapted from the diffusers conversion script: https://github.com/huggingface/diffusers/blob/main/scripts/convert_original_stable_diffusion_to_diffusers.py

  **Usage:**
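The gist's usage section is cut off in this preview. As a hedged sketch, an invocation would likely resemble the upstream diffusers script it is adapted from (the flag names below are those of the upstream script; the checkpoint filename and output path are placeholders, and the adapted gist's actual options may differ):

```shell
# Hypothetical invocation modeled on the upstream diffusers conversion script.
# The checkpoint file and output directory here are placeholder names.
python convert_original_stable_diffusion_to_diffusers.py \
    --checkpoint_path v2-1_768-ema-pruned.safetensors \
    --from_safetensors \
    --dump_path ./sd21-diffusers
```

The `--from_safetensors` flag tells the upstream script to load the checkpoint with the safetensors loader instead of `torch.load`; for a `.ckpt` file it would be omitted.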