diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml
index 89af55ed2a..1c21d4cd9f 100644
--- a/docs/source/en/_toctree.yml
+++ b/docs/source/en/_toctree.yml
@@ -139,8 +139,6 @@
 - sections:
   - local: optimization/fp16
     title: Speed up inference
-  - local: using-diffusers/distilled_sd
-    title: Distilled Stable Diffusion inference
   - local: optimization/memory
     title: Reduce memory usage
   - local: optimization/torch2.0
diff --git a/docs/source/en/optimization/fp16.md b/docs/source/en/optimization/fp16.md
index 7a2cf93498..b21b613688 100644
--- a/docs/source/en/optimization/fp16.md
+++ b/docs/source/en/optimization/fp16.md
@@ -12,27 +12,23 @@ specific language governing permissions and limitations under the License.
 
 # Speed up inference
 
-There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either [xFormers](xformers) or `torch.nn.functional.scaled_dot_product_attention` in PyTorch 2.0 for their memory-efficient attention.
+There are several ways to optimize Diffusers for inference speed, such as reducing the computational burden by lowering the data precision or using a lightweight distilled model. There are also memory-efficient attention implementations, [xFormers](xformers) and [scaled dot product attention](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) in PyTorch 2.0, that reduce memory usage, which also indirectly speeds up inference. Different speed optimizations can be stacked together to get the fastest inference times.
 
-
+> [!TIP]
+> Optimizing for inference speed or reduced memory usage can lead to improved performance in the other category, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about lowering memory usage in the [Reduce memory usage](memory) guide.
 
-In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the [Reduce memory usage](memory) guide.
-
+The inference times below are obtained from generating a single 512x512 image from the prompt "a photo of an astronaut riding a horse on mars" with 50 DDIM steps on an NVIDIA A100.
-
+| setup    | latency | speed-up |
+|----------|---------|----------|
+| baseline | 5.27s   | x1       |
+| tf32     | 4.14s   | x1.27    |
+| fp16     | 3.51s   | x1.50    |
+| combined | 3.41s   | x1.54    |
 
-The results below are obtained from generating a single 512x512 image from the prompt `a photo of an astronaut riding a horse on mars` with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect.
+## TensorFloat-32
 
-|                  | latency | speed-up |
-| ---------------- | ------- | ------- |
-| original         | 9.50s   | x1       |
-| fp16             | 3.61s   | x2.63    |
-| channels last    | 3.30s   | x2.88    |
-| traced UNet      | 3.21s   | x2.96    |
-| memory efficient attention | 2.63s | x3.61 |
-
-## Use TensorFloat-32
-
-On Ampere and later CUDA devices, matrix multiplications and convolutions can use the [TensorFloat-32 (TF32)](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speeds up computations with typically negligible loss in numerical accuracy.
+On Ampere and later CUDA devices, matrix multiplications and convolutions can use the [TensorFloat-32 (tf32)](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) mode for faster, but slightly less accurate computations. By default, PyTorch enables tf32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling tf32 for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy.
 
 ```python
 import torch
@@ -40,11 +36,11 @@ import torch
 torch.backends.cuda.matmul.allow_tf32 = True
 ```
 
-You can learn more about TF32 in the [Mixed precision training](https://huggingface.co/docs/transformers/en/perf_train_gpu_one#tf32) guide.
+Learn more about tf32 in the [Mixed precision training](https://huggingface.co/docs/transformers/en/perf_train_gpu_one#tf32) guide.
 
 ## Half-precision weights
 
-To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16:
+To save GPU memory and get more speed, set `torch_dtype=torch.float16` to load and run the model weights directly in half precision.
 
 ```Python
 import torch
@@ -56,19 +52,76 @@ pipe = DiffusionPipeline.from_pretrained(
     use_safetensors=True,
 )
 pipe = pipe.to("cuda")
-
-prompt = "a photo of an astronaut riding a horse on mars"
-image = pipe(prompt).images[0]
 ```
-
-
-Don't use [`torch.autocast`](https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines as it can lead to black images and is always slower than pure float16 precision.
-
-
+> [!WARNING]
+> Don't use [torch.autocast](https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines as it can lead to black images and is always slower than pure float16 precision.
 
 ## Distilled model
 
-You could also use a distilled Stable Diffusion model and autoencoder to speed up inference. During distillation, many of the UNet's residual and attention blocks are shed to reduce the model size. The distilled model is faster and uses less memory while generating images of comparable quality to the full Stable Diffusion model.
+You could also use a distilled Stable Diffusion model and autoencoder to speed up inference. During distillation, many of the UNet's residual and attention blocks are shed to reduce the model size by 51% and improve latency on CPU/GPU by 43%. The distilled model is faster and uses less memory while generating images of comparable quality to the full Stable Diffusion model.
 
-Learn more about in the [Distilled Stable Diffusion inference](../using-diffusers/distilled_sd) guide!
+> [!TIP]
+> Read the [Open-sourcing Knowledge Distillation Code and Weights of SD-Small and SD-Tiny](https://huggingface.co/blog/sd_distillation) blog post to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model.
+
+The inference times below are obtained from generating 4 images from the prompt "a photo of an astronaut riding a horse on mars" with 25 PNDM steps on an NVIDIA A100. Each generation is repeated 3 times with the distilled Stable Diffusion v1.4 model by [Nota AI](https://hf.co/nota-ai).
+
+| setup                        | latency | speed-up |
+|------------------------------|---------|----------|
+| baseline                     | 6.37s   | x1       |
+| distilled                    | 4.18s   | x1.52    |
+| distilled + tiny autoencoder | 3.83s   | x1.66    |
+
+Let's load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model.
+
+```py
+from diffusers import StableDiffusionPipeline
+import torch
+
+distilled = StableDiffusionPipeline.from_pretrained(
+    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
+).to("cuda")
+
+prompt = "a golden vase with different flowers"
+generator = torch.manual_seed(2023)
+image = distilled(prompt, num_inference_steps=25, generator=generator).images[0]
+image
+```
+
+*Image comparison: original Stable Diffusion vs. distilled Stable Diffusion.*
+
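+To quantify the speed-up on your own hardware, you can time the distilled pipeline against the original checkpoint. The sketch below is illustrative rather than a reference benchmark: it loads `CompVis/stable-diffusion-v1-4` for comparison, and the warmup pass and the `benchmark` helper are illustrative choices, so expect different absolute numbers than the table above.
+
+```py
+import time
+
+import torch
+from diffusers import StableDiffusionPipeline
+
+distilled = StableDiffusionPipeline.from_pretrained(
+    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
+).to("cuda")
+original = StableDiffusionPipeline.from_pretrained(
+    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True,
+).to("cuda")
+
+prompt = "a golden vase with different flowers"
+
+def benchmark(pipeline, runs=3):
+    # one warmup generation so CUDA kernel setup doesn't skew the timing
+    pipeline(prompt, num_inference_steps=25, num_images_per_prompt=4)
+    torch.cuda.synchronize()
+    start = time.perf_counter()
+    for _ in range(runs):
+        pipeline(prompt, num_inference_steps=25, num_images_per_prompt=4)
+    torch.cuda.synchronize()
+    # average latency per generation of 4 images
+    return (time.perf_counter() - start) / runs
+
+print(f"original: {benchmark(original):.2f}s, distilled: {benchmark(distilled):.2f}s")
+```
+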
+
+### Tiny AutoEncoder
+
+To speed up inference even more, replace the autoencoder with a [distilled version](https://huggingface.co/sayakpaul/taesdxl-diffusers) of it.
+
+```py
+import torch
+from diffusers import AutoencoderTiny, StableDiffusionPipeline
+
+distilled = StableDiffusionPipeline.from_pretrained(
+    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
+).to("cuda")
+distilled.vae = AutoencoderTiny.from_pretrained(
+    "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True,
+).to("cuda")
+
+prompt = "a golden vase with different flowers"
+generator = torch.manual_seed(2023)
+image = distilled(prompt, num_inference_steps=25, generator=generator).images[0]
+image
+```
+
+*Image: distilled Stable Diffusion + Tiny AutoEncoder.*
+
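+As noted at the start of this guide, these optimizations can be stacked. A minimal sketch of combining tf32, half-precision weights, the distilled model, and the tiny autoencoder might look like the following; it is illustrative, and whether you rely on PyTorch 2.0's default scaled dot product attention or enable xFormers instead depends on your PyTorch version and hardware.
+
+```py
+import torch
+from diffusers import AutoencoderTiny, StableDiffusionPipeline
+
+# allow tf32 for matrix multiplications (Ampere or newer GPUs)
+torch.backends.cuda.matmul.allow_tf32 = True
+
+# distilled UNet loaded in half precision, plus the tiny autoencoder
+pipeline = StableDiffusionPipeline.from_pretrained(
+    "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True,
+).to("cuda")
+pipeline.vae = AutoencoderTiny.from_pretrained(
+    "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True,
+).to("cuda")
+
+# PyTorch 2.0 uses scaled dot product attention by default;
+# on older PyTorch versions, uncomment the line below to use xFormers instead
+# pipeline.enable_xformers_memory_efficient_attention()
+
+image = pipeline("a golden vase with different flowers", num_inference_steps=25).images[0]
+```
+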
diff --git a/docs/source/en/using-diffusers/distilled_sd.md b/docs/source/en/using-diffusers/distilled_sd.md deleted file mode 100644 index c4c5f7ad19..0000000000 --- a/docs/source/en/using-diffusers/distilled_sd.md +++ /dev/null @@ -1,133 +0,0 @@ - - -# Distilled Stable Diffusion inference - -[[open-in-colab]] - -Stable Diffusion inference can be a computationally intensive process because it must iteratively denoise the latents to generate an image. To reduce the computational burden, you can use a *distilled* version of the Stable Diffusion model from [Nota AI](https://huggingface.co/nota-ai). The distilled version of their Stable Diffusion model eliminates some of the residual and attention blocks from the UNet, reducing the model size by 51% and improving latency on CPU/GPU by 43%. - - - -Read this [blog post](https://huggingface.co/blog/sd_distillation) to learn more about how knowledge distillation training works to produce a faster, smaller, and cheaper generative model. - - - -Let's load the distilled Stable Diffusion model and compare it against the original Stable Diffusion model: - -```py -from diffusers import StableDiffusionPipeline -import torch - -distilled = StableDiffusionPipeline.from_pretrained( - "nota-ai/bk-sdm-small", torch_dtype=torch.float16, use_safetensors=True, -).to("cuda") - -original = StableDiffusionPipeline.from_pretrained( - "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16, use_safetensors=True, -).to("cuda") -``` - -Given a prompt, get the inference time for the original model: - -```py -import time - -seed = 2023 -generator = torch.manual_seed(seed) - -NUM_ITERS_TO_RUN = 3 -NUM_INFERENCE_STEPS = 25 -NUM_IMAGES_PER_PROMPT = 4 - -prompt = "a golden vase with different flowers" - -start = time.time_ns() -for _ in range(NUM_ITERS_TO_RUN): - images = original( - prompt, - num_inference_steps=NUM_INFERENCE_STEPS, - generator=generator, - num_images_per_prompt=NUM_IMAGES_PER_PROMPT - ).images -end = time.time_ns() -original_sd = f"{(end - start) / 1e6:.1f}" - -print(f"Execution time -- {original_sd} ms\n") -"Execution time -- 45781.5 ms" -``` - -Time the distilled model inference: - -```py -start = time.time_ns() -for _ in range(NUM_ITERS_TO_RUN): - images = distilled( - prompt, - num_inference_steps=NUM_INFERENCE_STEPS, - generator=generator, - num_images_per_prompt=NUM_IMAGES_PER_PROMPT - ).images -end = time.time_ns() - -distilled_sd = f"{(end - start) / 1e6:.1f}" -print(f"Execution time -- {distilled_sd} ms\n") -"Execution time -- 29884.2 ms" -``` - -
-*Image comparison: original Stable Diffusion (45781.5 ms) vs. distilled Stable Diffusion (29884.2 ms).*
-
- -## Tiny AutoEncoder - -To speed inference up even more, use a tiny distilled version of the [Stable Diffusion VAE](https://huggingface.co/sayakpaul/taesdxl-diffusers) to denoise the latents into images. Replace the VAE in the distilled Stable Diffusion model with the tiny VAE: - -```py -from diffusers import AutoencoderTiny - -distilled.vae = AutoencoderTiny.from_pretrained( - "sayakpaul/taesd-diffusers", torch_dtype=torch.float16, use_safetensors=True, -).to("cuda") -``` - -Time the distilled model and distilled VAE inference: - -```py -start = time.time_ns() -for _ in range(NUM_ITERS_TO_RUN): - images = distilled( - prompt, - num_inference_steps=NUM_INFERENCE_STEPS, - generator=generator, - num_images_per_prompt=NUM_IMAGES_PER_PROMPT - ).images -end = time.time_ns() - -distilled_tiny_sd = f"{(end - start) / 1e6:.1f}" -print(f"Execution time -- {distilled_tiny_sd} ms\n") -"Execution time -- 27165.7 ms" -``` - -
-*Image: distilled Stable Diffusion + Tiny AutoEncoder (27165.7 ms).*