From 62cbde8d41ac39e4b3a1f5bbbbc546cc93f1d84d Mon Sep 17 00:00:00 2001
From: Sayak Paul
Date: Fri, 13 Jun 2025 07:17:03 +0530
Subject: [PATCH] [docs] mention fp8 benefits on supported hardware. (#11699)

* mention fp8 benefits on supported hardware.

* Update docs/source/en/quantization/torchao.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---
 docs/source/en/quantization/torchao.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/docs/source/en/quantization/torchao.md b/docs/source/en/quantization/torchao.md
index 95b30a6e01..555dd7a47a 100644
--- a/docs/source/en/quantization/torchao.md
+++ b/docs/source/en/quantization/torchao.md
@@ -65,6 +65,9 @@ transformer = torch.compile(transformer, mode="max-autotune", fullgraph=True)
 
 For speed and memory benchmarks on Flux and CogVideoX, please refer to the table [here](https://github.com/huggingface/diffusers/pull/10009#issue-2688781450). You can also find some torchao [benchmarks](https://github.com/pytorch/ao/tree/main/torchao/quantization#benchmarks) numbers for various hardware.
 
+> [!TIP]
+> The FP8 post-training quantization schemes in torchao are effective for GPUs with compute capability of at least 8.9 (RTX-4090, Hopper, etc.). FP8 often provides the best speed, memory, and quality trade-off when generating images and videos. We recommend combining FP8 and torch.compile if your GPU is compatible.
+
 torchao also supports an automatic quantization API through [autoquant](https://github.com/pytorch/ao/blob/main/torchao/quantization/README.md#autoquantization). Autoquantization determines the best quantization strategy applicable to a model by comparing the performance of each technique on chosen input types and shapes. Currently, this can be used directly on the underlying modeling components. Diffusers will also expose an autoquant configuration option in the future.
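The tip added by this patch recommends FP8 plus `torch.compile` only on GPUs with compute capability 8.9 or higher (Ada/RTX 4090, Hopper). A minimal sketch of that capability gate is below; the `is_fp8_capable` helper is hypothetical and not part of diffusers or torchao:

```python
def is_fp8_capable(capability: tuple) -> bool:
    # Hypothetical helper: FP8 quantization kernels need SM 8.9+
    # (e.g. RTX 4090 / Ada, H100 / Hopper), per the tip in the patch.
    return tuple(capability) >= (8, 9)

try:
    import torch

    if torch.cuda.is_available() and is_fp8_capable(torch.cuda.get_device_capability()):
        print("FP8 + torch.compile recommended")
    else:
        print("Consider int8 or bf16 instead")
except ImportError:
    # torch not installed; the pure-Python check above still works standalone.
    pass
```

For example, an RTX 4090 reports capability `(8, 9)` and an A100 reports `(8, 0)`, so only the former passes the gate.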
The `TorchAoConfig` class accepts three parameters: