@@ -37,6 +37,7 @@ If a community doesn't work as expected, please open an issue and ping the autho
| TensorRT Stable Diffusion Image to Image Pipeline | Accelerates the Stable Diffusion Image2Image Pipeline using TensorRT | [TensorRT Stable Diffusion Image to Image Pipeline](#tensorrt-image2image-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
| Stable Diffusion IPEX Pipeline | Accelerate Stable Diffusion inference pipeline with BF16/FP32 precision on Intel Xeon CPUs with [IPEX](https://github.com/intel/intel-extension-for-pytorch) | [Stable Diffusion on IPEX](#stable-diffusion-on-ipex) | - | [Yingjie Han](https://github.com/yingjie-han/) |
| CLIP Guided Images Mixing Stable Diffusion Pipeline | Combine images using standard diffusion models. | [CLIP Guided Images Mixing Using Stable Diffusion](#clip-guided-images-mixing-with-stable-diffusion) | - | [Karachev Denis](https://github.com/TheDenk) |
+ | TensorRT Stable Diffusion Inpainting Pipeline | Accelerates the Stable Diffusion Inpainting Pipeline using TensorRT | [TensorRT Stable Diffusion Inpainting Pipeline](#tensorrt-inpainting-stable-diffusion-pipeline) | - | [Asfiya Baig](https://github.com/asfiyab-nvidia) |
To load a custom pipeline, pass the `custom_pipeline` argument to `DiffusionPipeline`, naming one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines; we will merge them quickly.
```py
@@ -1630,3 +1631,45 @@ image = pipeline(
```
![mixture_tiling_results](https://huggingface.co/datasets/kadirnar/diffusers_readme_images/resolve/main/mixture_tiling.png)
+ ### TensorRT Inpainting Stable Diffusion Pipeline
+
+ The TensorRT pipeline can be used to accelerate Stable Diffusion inpainting inference.
+
+ NOTE: The ONNX conversions and TensorRT engine build may take up to 30 minutes.
+
+ ```python
+ import requests
+ from io import BytesIO
+ from PIL import Image
+ import torch
+ from diffusers import PNDMScheduler
+ from diffusers.pipelines.stable_diffusion import StableDiffusionInpaintPipeline
+
+ # Use the PNDMScheduler scheduler here instead
+ scheduler = PNDMScheduler.from_pretrained("stabilityai/stable-diffusion-2-inpainting", subfolder="scheduler")
+
+ pipe = StableDiffusionInpaintPipeline.from_pretrained("stabilityai/stable-diffusion-2-inpainting",
+     custom_pipeline="stable_diffusion_tensorrt_inpaint",
+     revision='fp16',
+     torch_dtype=torch.float16,
+     scheduler=scheduler,
+ )
+
+ # re-use cached folder to save ONNX models and TensorRT engines
+ pipe.set_cached_folder("stabilityai/stable-diffusion-2-inpainting", revision='fp16')
+
+ pipe = pipe.to("cuda")
+
+ url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
+ response = requests.get(url)
+ input_image = Image.open(BytesIO(response.content)).convert("RGB")
+
+ mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
+ response = requests.get(mask_url)
+ mask_image = Image.open(BytesIO(response.content)).convert("RGB")
+
+ prompt = "a mecha robot sitting on a bench"
+ image = pipe(prompt, image=input_image, mask_image=mask_image, strength=0.75).images[0]
+ image.save('tensorrt_inpaint_mecha_robot.png')
+ ```
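+
+ Instead of downloading a ready-made mask like the example above, you can also build one yourself with PIL — white pixels mark the region the pipeline repaints, black pixels are kept from the input image. A minimal sketch (the 512×512 size and the rectangle coordinates are arbitrary illustrative choices, not requirements of the pipeline):
+
+ ```python
+ from PIL import Image, ImageDraw
+
+ # Start from an all-black mask (everything preserved) ...
+ mask = Image.new("RGB", (512, 512), "black")
+
+ # ... then paint the region to be regenerated in white.
+ draw = ImageDraw.Draw(mask)
+ draw.rectangle([128, 128, 384, 384], fill="white")
+
+ # The mask should match the size of the input image you pass to the pipeline.
+ assert mask.size == (512, 512)
+ ```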