
Commit 7e76ac4

update readme
1 parent 0ed52c4 commit 7e76ac4

File tree

1 file changed: +33 additions, 0 deletions

examples/community/README.md

Lines changed: 33 additions & 0 deletions
@@ -45,6 +45,7 @@ FABRIC - Stable Diffusion with feedback Pipeline | pipeline supports feedback fr
sketch inpaint - Inpainting with non-inpaint Stable Diffusion | sketch inpaint much like in automatic1111 | [Masked Im2Im Stable Diffusion Pipeline](#stable-diffusion-masked-im2im) | - | [Anatoly Belikov](https://github.com/noskill) |
prompt-to-prompt | change parts of a prompt and retain image structure (see [paper page](https://prompt-to-prompt.github.io/)) | [Prompt2Prompt Pipeline](#prompt2prompt-pipeline) | - | [Umer H. Adil](https://twitter.com/UmerHAdil) |
| Latent Consistency Pipeline | Implementation of [Latent Consistency Models: Synthesizing High-Resolution Images with Few-Step Inference](https://arxiv.org/abs/2310.04378) | [Latent Consistency Pipeline](#latent-consistency-pipeline) | - | [Simian Luo](https://github.com/luosiallen) |
+| Latent Consistency Img2img Pipeline | Img2img pipeline for Latent Consistency Models | [Latent Consistency Img2Img Pipeline](#latent-consistency-img2img-pipeline) | - | [Logan Zoellner](https://github.com/nagolinc) |
To load a custom pipeline, pass the `custom_pipeline` argument to `DiffusionPipeline`, set to the name of one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines; we will merge them quickly.
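As a hypothetical sketch of the name-to-file mapping described above (the helper below is illustrative, not part of the diffusers API): the string passed as `custom_pipeline` is the filename, without the `.py` extension, of a pipeline in `diffusers/examples/community`.

```python
# Illustrative stand-in only: shows how a custom_pipeline name corresponds to
# a community pipeline file. diffusers' real resolution logic also accepts
# Hub repo ids and local paths.
def community_pipeline_file(name: str) -> str:
    return f"examples/community/{name}.py"

print(community_pipeline_file("latent_consistency_img2img"))
# examples/community/latent_consistency_img2img.py
```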
@@ -2185,3 +2186,35 @@ images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_s
For any questions or feedback, feel free to reach out to [Simian Luo](https://github.com/luosiallen).

You can also try this pipeline directly in the [🚀 official spaces](https://huggingface.co/spaces/SimianLuo/Latent_Consistency_Model).

### Latent Consistency Img2img Pipeline

This pipeline extends the Latent Consistency Pipeline so that it can take an input image.

1. Load the pipeline:

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_img2img")

# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
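As a rough back-of-envelope for the float16 memory note above (the parameter count below is an assumption purely for illustration; activations, the VAE, and the text encoder add further overhead), halving the bytes per parameter roughly halves the memory needed for model weights:

```python
def weight_memory_gib(num_params: int, bytes_per_param: int) -> float:
    """Approximate memory for model weights alone, in GiB."""
    return num_params * bytes_per_param / 1024**3

# Assume ~1e9 parameters purely for illustration.
fp32 = weight_memory_gib(1_000_000_000, 4)  # float32: 4 bytes per parameter
fp16 = weight_memory_gib(1_000_000_000, 2)  # float16: 2 bytes per parameter
print(round(fp32, 2), round(fp16, 2))  # prints: 3.73 1.86
```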

2. Run inference with as little as 4 steps:

```py
from PIL import Image

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# Starting image for img2img; strength controls how strongly it is altered.
input_image = Image.open("myimg.png")
strength = 0.5

# Can be set to 1~50 steps. LCM supports fast inference even with <= 4 steps. Recommended: 1~8 steps.
num_inference_steps = 4

images = pipe(prompt=prompt, image=input_image, strength=strength, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
```
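As a sketch of how `strength` interacts with `num_inference_steps` in typical diffusers img2img pipelines (this is an assumption about this community pipeline's internals, shown with an illustrative stand-in helper, not its actual code):

```python
# Illustrative stand-in: img2img pipelines commonly skip the first part of the
# denoising schedule in proportion to strength, so only about
# strength * num_inference_steps denoising steps actually run.
def effective_denoising_steps(num_inference_steps: int, strength: float) -> int:
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    return init_timestep

print(effective_denoising_steps(4, 0.5))  # prints: 2
```

Under this sketch, `strength=0.5` with `num_inference_steps=4` runs about 2 denoising steps, which is why higher strength both changes the image more and denoises it more thoroughly.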
