
# Depth-to-image

The Stable Diffusion model can also infer depth from an image using MiDaS. This lets you pass a text prompt and an initial image to condition the generation of new images, as well as a `depth_map` to preserve the image structure.

Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!

If you're interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations!
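For example, here is a minimal sketch of the workflow described above. It assumes the `stabilityai/stable-diffusion-2-depth` checkpoint, a CUDA device, and an illustrative image URL, all of which you can swap for your own:

```python
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

# Load the depth-conditioned checkpoint (assumes a CUDA device with enough VRAM).
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

# Any RGB image works as the initial image; this URL is just an example.
init_image = load_image("http://images.cocodataset.org/val2017/000000039769.jpg")

prompt = "two tigers"
negative_prompt = "bad, deformed, ugly, bad anatomy"

# If no depth_map is passed, the pipeline estimates one with MiDaS and uses it
# to preserve the structure of init_image while following the prompt.
image = pipe(
    prompt=prompt,
    image=init_image,
    negative_prompt=negative_prompt,
    strength=0.7,
).images[0]
image.save("depth2img.png")
```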

## StableDiffusionDepth2ImgPipeline

[[autodoc]] StableDiffusionDepth2ImgPipeline
	- all
	- __call__
	- enable_attention_slicing
	- disable_attention_slicing
	- enable_xformers_memory_efficient_attention
	- disable_xformers_memory_efficient_attention
	- load_textual_inversion
	- load_lora_weights
	- save_lora_weights
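The memory and attention helpers listed above are toggled on an instantiated pipeline. A minimal sketch, reusing the `pipe` object from the earlier example and assuming the optional `xformers` package is installed:

```python
# Trade some speed for a lower peak memory footprint.
pipe.enable_attention_slicing()

# Use memory-efficient attention from xformers (optional dependency).
pipe.enable_xformers_memory_efficient_attention()

# Both can be reverted at any time.
pipe.disable_attention_slicing()
pipe.disable_xformers_memory_efficient_attention()
```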

## StableDiffusionPipelineOutput

[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput