[Community] Support StableDiffusionCanvasPipeline (#3590)
* added StableDiffusionCanvasPipeline pipeline

* Added utils codes to pipe_utils file.

* make style

* delete mixture.py and Text2ImageRegion class

* make style

* Added the codes to the readme.md file.

* Moved functions from pipeline_utils to mix_canvas
kadirnar committed Jun 7, 2023
1 parent 803d653 commit cd61869
Showing 3 changed files with 539 additions and 403 deletions.
38 changes: 36 additions & 2 deletions examples/community/README.md
@@ -1601,7 +1601,7 @@ pipe_images = mixing_pipeline(

![image_mixing_result](https://huggingface.co/datasets/TheDenk/images_mixing/resolve/main/boromir_gigachad.png)

### Stable Diffusion Mixture Tiling

This pipeline uses the Mixture of Diffusers approach to compose a large image from a grid of overlapping tiles, each guided by its own prompt. Refer to the [Mixture of Diffusers](https://arxiv.org/abs/2302.02412) paper for more details.

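A minimal usage sketch is shown below. It assumes the tiling variant is exposed as the `mixture_tiling` community pipeline and accepts a grid of per-tile prompts plus tile size and overlap arguments; the parameter names and example values follow the upstream Mixture of Diffusers implementation and should be treated as assumptions rather than a verbatim reference.

```python
from diffusers import LMSDiscreteScheduler, DiffusionPipeline

# Create scheduler and model (similar to StableDiffusionPipeline)
scheduler = LMSDiscreteScheduler(
    beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000
)
# "mixture_tiling" is assumed to be the community pipeline name for the tiling variant
pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_tiling"
)
pipeline.to("cuda")

# One prompt per tile, arranged as a grid (here a single row of three tiles);
# neighbouring tiles are blended over the configured overlap (placeholder prompts)
image = pipeline(
    prompt=[[
        "a charming house in the countryside, sunset lighting, highly detailed",
        "a dirt road in the countryside crossing pastures, sunset lighting, highly detailed",
        "an old and rusty giant robot lying on a dirt road, sunset lighting, highly detailed",
    ]],
    tile_height=640,
    tile_width=640,
    tile_row_overlap=0,
    tile_col_overlap=256,
    guidance_scale=8,
    seed=7178915308,
    num_inference_steps=50,
)["images"][0]
```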
@@ -1672,4 +1672,38 @@ mask_image = Image.open(BytesIO(response.content)).convert("RGB")
prompt = "a mecha robot sitting on a bench"
image = pipe(prompt, image=input_image, mask_image=mask_image, strength=0.75,).images[0]
image.save('tensorrt_inpaint_mecha_robot.png')
```

### Stable Diffusion Mixture Canvas

This pipeline also builds on the Mixture of Diffusers approach and composes an image from several text- or image-guided regions placed on a shared canvas. Refer to the [Mixture of Diffusers](https://arxiv.org/abs/2302.02412) paper for more details.

```python
from PIL import Image
from diffusers import LMSDiscreteScheduler, DiffusionPipeline
# Image2ImageRegion, Text2ImageRegion and preprocess_image are defined in the
# community mixture_canvas module (examples/community/mixture_canvas.py)
from mixture_canvas import Image2ImageRegion, Text2ImageRegion, preprocess_image


# Load and preprocess guide image
iic_image = preprocess_image(Image.open("input_image.png").convert("RGB"))

# Create scheduler and model (similar to StableDiffusionPipeline)
scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)
pipeline = DiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=scheduler, custom_pipeline="mixture_canvas")
pipeline.to("cuda")

# Mixture of Diffusers generation
output = pipeline(
    canvas_height=800,
    canvas_width=352,
    regions=[
        # Regions take (row_init, row_end, col_init, col_end) in pixel coordinates
        Text2ImageRegion(0, 800, 0, 352, guidance_scale=8,
            prompt="best quality, masterpiece, WLOP, sakimichan, art contest winner on pixiv, 8K, intricate details, wet effects, rain drops, ethereal, mysterious, futuristic, UHD, HDR, cinematic lighting, in a beautiful forest, rainy day, award winning, trending on artstation, beautiful confident cheerful young woman, wearing a futuristic sleeveless dress, ultra beautiful detailed eyes, hyper-detailed face, complex, perfect, model, textured, chiaroscuro, professional make-up, realistic, figure in frame, "),
        # The bottom 352 rows of the canvas are guided by the reference image
        Image2ImageRegion(800-352, 800, 0, 352, reference_image=iic_image, strength=1.0),
    ],
    num_inference_steps=100,
    seed=5525475061,
)["images"][0]
```
![Input_Image](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/input_image.png)
![mixture_canvas_results](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/canvas.png)
