# Speed up inference
There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either xFormers or `torch.nn.functional.scaled_dot_product_attention` in PyTorch 2.0 for their memory-efficient attention.
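For example, with the xformers package installed, memory-efficient attention can be switched on for a whole pipeline with a single call. This is a minimal sketch (the dtype and checkpoint are only illustrative); on PyTorch 2.0, `scaled_dot_product_attention` is used automatically and no extra call is needed:

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# requires the xformers package; skip this on PyTorch 2.0, where the built-in
# scaled_dot_product_attention is used by default
pipe.enable_xformers_memory_efficient_attention()
```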
In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the Reduce memory usage guide.
The results below are obtained from generating a single 512x512 image from the prompt `a photo of an astronaut riding a horse on mars` with 50 DDIM steps on an Nvidia Titan RTX, demonstrating the speed-up you can expect.
| | latency | speed-up |
|---|---|---|
| original | 9.50s | x1 |
| fp16 | 3.61s | x2.63 |
| channels last | 3.30s | x2.88 |
| traced UNet | 3.21s | x2.96 |
| memory efficient attention | 2.63s | x3.61 |
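For context on the channels last row above, here is a minimal sketch of switching the UNet to the channels-last memory format; the dtype and checkpoint are only illustrative:

```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
).to("cuda")

# store the UNet weights in the channels-last (NHWC) memory layout so that
# convolutions on recent NVIDIA GPUs can run faster
pipe.unet.to(memory_format=torch.channels_last)
```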
## Use TensorFloat-32
On Ampere and later CUDA devices, matrix multiplications and convolutions can use the TensorFloat-32 (TF32) mode for faster, but slightly less accurate, computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy.
```py
import torch

torch.backends.cuda.matmul.allow_tf32 = True
```
You can learn more about TF32 in the Mixed precision training guide.
## Half-precision weights
To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16:
```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```
Don't use `torch.autocast` in any of the pipelines as it can lead to black images and is always slower than pure float16 precision.