
# Text-guided depth-to-image generation

[[open-in-colab]]

The [`StableDiffusionDepth2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images. You can also pass a `depth_map` to preserve the image structure; if you don't provide one, the pipeline automatically predicts the depth with an integrated depth-estimation model.

Start by creating an instance of the [`StableDiffusionDepth2ImgPipeline`]:

```py
import torch
import requests
from PIL import Image

from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")
```

Now pass your prompt to the pipeline. You can also pass a `negative_prompt` to prevent certain words from guiding how an image is generated, and adjust `strength` to control how much the initial image is transformed (higher values deviate more from the input):

```py
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)
prompt = "two tigers"
n_prompt = "bad, deformed, ugly, bad anatomy"
image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
image
```
| Input | Output |
|:-----:|:------:|
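
If you already have a depth map, or want to compute one yourself, pass it through the `depth_map` argument instead of relying on the pipeline's built-in estimator. The sketch below is one way to do this with the DPT depth-estimation model from 🤗 Transformers; the `Intel/dpt-large` checkpoint and the processing steps are illustrative assumptions, not the pipeline's internal defaults:

```py
import torch
from transformers import DPTForDepthEstimation, DPTImageProcessor

# a minimal sketch: estimate a depth map externally and pass it to the pipeline;
# Intel/dpt-large is an illustrative checkpoint, not the pipeline's default estimator
processor = DPTImageProcessor.from_pretrained("Intel/dpt-large")
depth_estimator = DPTForDepthEstimation.from_pretrained("Intel/dpt-large")

inputs = processor(images=init_image, return_tensors="pt")
with torch.no_grad():
    # predicted_depth has shape (batch, height, width); the pipeline resizes
    # and normalizes it internally
    depth_map = depth_estimator(**inputs).predicted_depth

image = pipe(
    prompt=prompt,
    image=init_image,
    depth_map=depth_map,
    negative_prompt=n_prompt,
    strength=0.7,
).images[0]
image
```

Reusing the same `depth_map` across several prompts keeps the spatial layout consistent between generations.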

Play around with the Spaces below and see if you notice a difference between generated images with and without a depth map!

<iframe src="https://radames-stable-diffusion-depth2img.hf.space" frameborder="0" width="850" height="500" ></iframe>