diff --git a/docs/source/en/optimization/fp16.mdx b/docs/source/en/optimization/fp16.mdx
index 596312a0ff..4081cfc6ef 100644
--- a/docs/source/en/optimization/fp16.mdx
+++ b/docs/source/en/optimization/fp16.mdx
@@ -60,8 +60,10 @@ image = pipe(prompt).images[0]
 ```
 
+It is strongly discouraged to make use of [`torch.autocast`](https://pytorch.org/docs/stable/amp.html#torch.autocast) in any of the pipelines as it can lead to black images and is always slower than using pure float16 precision.
+
 ## Sliced attention for additional memory savings
diff --git a/docs/source/en/optimization/torch2.0.mdx b/docs/source/en/optimization/torch2.0.mdx
index 2bcf3fa821..05a4043d26 100644
--- a/docs/source/en/optimization/torch2.0.mdx
+++ b/docs/source/en/optimization/torch2.0.mdx
@@ -18,6 +18,7 @@ Starting from version `0.13.0`, Diffusers supports the latest optimization from
 
 ## Installation
 
+To benefit from the accelerated attention implementation and `torch.compile()`, you just need to install the latest versions of PyTorch 2.0 from pip, and make sure you are on diffusers 0.13.0 or later. As explained below, diffusers automatically uses the optimized attention processor ([`AttnProcessor2_0`](https://github.com/huggingface/diffusers/blob/1a5797c6d4491a879ea5285c4efc377664e0332d/src/diffusers/models/attention_processor.py#L798)) (but not `torch.compile()`) when PyTorch 2.0 is available.
@@ -153,7 +154,7 @@ for _ in range(3):
     image = pipe(prompt=prompt, image=init_image).images[0]
 ```
 
-#### Stable Diffusion - inpatining
+#### Stable Diffusion - inpainting
 
 ```python
 from diffusers import StableDiffusionInpaintPipeline
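
As a reader's note on the first hunk (not part of the patch itself): the recommendation amounts to loading the pipeline weights directly in float16 instead of wrapping inference in `torch.autocast`. A minimal sketch, assuming a CUDA device; the model id and prompt are illustrative:

```python
import torch
from diffusers import DiffusionPipeline

# Load the weights directly in float16 -- no autocast wrapper is needed.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"

# Recommended: pure float16 inference.
image = pipe(prompt).images[0]

# Discouraged (per the hunk above): autocast can yield black images and is
# slower than pure float16.
# with torch.autocast("cuda"):
#     image = pipe(prompt).images[0]
```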
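Similarly, the behavior the torch2.0.mdx hunk describes can be sketched as follows: on PyTorch >= 2.0, diffusers selects the optimized attention processor automatically, while `torch.compile()` is an explicit opt-in. The install command and model id below are assumptions, not taken from the patch:

```python
# Assumed setup: pip install --upgrade torch diffusers
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# At this point diffusers has already picked AttnProcessor2_0 automatically.

# Opt-in step: compile the UNet. The first call pays a one-time compilation
# cost; subsequent calls run faster.
pipe.unet = torch.compile(pipe.unet)

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```

The UNet is compiled rather than the whole pipeline because it dominates the compute in each denoising step.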