# Depth-to-image

LoRA

The Stable Diffusion model can also infer depth based on an image using [MiDaS](https://github.com/isl-org/MiDaS). This allows you to pass a text prompt and an initial image to condition the generation of new images, as well as a `depth_map` to preserve the image structure.

<Tip>

Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!

If you're interested in using one of the official checkpoints for a task, explore the [CompVis](https://huggingface.co/CompVis) and [Stability AI](https://huggingface.co/stabilityai) Hub organizations!

</Tip>
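Below is a minimal usage sketch of the pipeline. The `stabilityai/stable-diffusion-2-depth` checkpoint is the official depth-conditioned Stable Diffusion 2 model; the input image URL is a placeholder to replace with your own image, and the prompt and `strength` value are illustrative only.

```py
import torch
from diffusers import StableDiffusionDepth2ImgPipeline
from diffusers.utils import load_image

# Load the depth-conditioned Stable Diffusion 2 checkpoint
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

# Placeholder URL; substitute any RGB image to use as the structural reference
init_image = load_image("https://example.com/your-image.png")

# If no `depth_map` is passed, the pipeline estimates one from `image` with MiDaS
image = pipe(
    prompt="two tigers, highly detailed",
    image=init_image,
    negative_prompt="bad, deformed, ugly",
    strength=0.7,
).images[0]
image.save("depth2img.png")
```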

## StableDiffusionDepth2ImgPipeline

[[autodoc]] StableDiffusionDepth2ImgPipeline
	- all
	- __call__
	- enable_attention_slicing
	- disable_attention_slicing
	- enable_xformers_memory_efficient_attention
	- disable_xformers_memory_efficient_attention
	- load_textual_inversion
	- load_lora_weights
	- save_lora_weights
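The helper methods documented above are called on an instantiated pipeline. A short sketch, assuming the `pipe` object from the earlier example and a hypothetical LoRA repository id:

```py
# Reduce peak VRAM usage at a small speed cost
pipe.enable_attention_slicing()

# Requires the xformers package to be installed
pipe.enable_xformers_memory_efficient_attention()

# Hypothetical repository id; replace with a LoRA trained for this pipeline
pipe.load_lora_weights("your-username/your-depth2img-lora")
```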

## StableDiffusionPipelineOutput

[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput