# Quantization

Quantization techniques reduce memory and computational costs by representing weights and activations with lower-precision data types like 8-bit integers (int8). This makes it possible to load larger models that normally wouldn't fit into memory, and it can speed up inference.
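To make the memory savings concrete, here is a back-of-the-envelope sketch. The 12B parameter count is a hypothetical, Flux-sized example, and the estimate covers the weights alone (activations, KV caches, and quantization metadata add overhead on top):

```python
def weight_memory_gb(num_params: int, bits_per_weight: int) -> float:
    """Approximate memory needed to store the weights alone, in GiB."""
    return num_params * bits_per_weight / 8 / 1024**3

num_params = 12_000_000_000  # hypothetical 12B-parameter model

fp16 = weight_memory_gb(num_params, 16)  # half precision baseline
int8 = weight_memory_gb(num_params, 8)   # 8-bit quantization
nf4 = weight_memory_gb(num_params, 4)    # 4-bit quantization

print(f"fp16: {fp16:.1f} GiB, int8: {int8:.1f} GiB, 4-bit: {nf4:.1f} GiB")
```

Halving the bits per weight halves the weight memory, which is why a model that overflows a GPU in fp16 can often load in int8 or 4-bit.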

Learn how to quantize models in the Quantization guide.

## PipelineQuantizationConfig

[[autodoc]] quantizers.PipelineQuantizationConfig
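A minimal sketch of pipeline-level quantization with `quant_kwargs`. The model repo and the component names in `components_to_quantize` are illustrative (they match a Flux-style pipeline); the example assumes the `bitsandbytes` package is installed:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

# quant_kwargs is forwarded to the chosen backend's config
# (here, bitsandbytes 4-bit NF4 with bfloat16 compute).
pipeline_quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_4bit",
    quant_kwargs={
        "load_in_4bit": True,
        "bnb_4bit_quant_type": "nf4",
        "bnb_4bit_compute_dtype": torch.bfloat16,
    },
    # quantize only the memory-heavy components; names depend on the pipeline
    components_to_quantize=["transformer", "text_encoder_2"],
)

pipeline = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    quantization_config=pipeline_quant_config,
    torch_dtype=torch.bfloat16,
)
```

One config quantizes several components at once; for per-component control, pass a backend-specific config (like the ones below) to each model's `from_pretrained` instead.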

## BitsAndBytesConfig

[[autodoc]] quantizers.quantization_config.BitsAndBytesConfig
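A minimal sketch of quantizing a single model component with bitsandbytes. The model repo and class are illustrative, and the `bitsandbytes` package is assumed to be installed:

```python
import torch
from diffusers import BitsAndBytesConfig, FluxTransformer2DModel

# 4-bit NF4 quantization; compute happens in bfloat16
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```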

## GGUFQuantizationConfig

[[autodoc]] quantizers.quantization_config.GGUFQuantizationConfig
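GGUF checkpoints are already quantized, so the config controls loading rather than quantizing. A minimal sketch, assuming the `gguf` package is installed and using an illustrative checkpoint URL:

```python
import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

# illustrative URL to a pre-quantized GGUF checkpoint
ckpt_path = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"

# compute_dtype is the dtype weights are dequantized to during computation
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```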

## QuantoConfig

[[autodoc]] quantizers.quantization_config.QuantoConfig
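A minimal sketch using the optimum-quanto backend. The model repo is illustrative, and the `optimum-quanto` package is assumed to be installed:

```python
import torch
from diffusers import FluxTransformer2DModel, QuantoConfig

# quantize weights to float8 with quanto
quant_config = QuantoConfig(weights_dtype="float8")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```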

## TorchAoConfig

[[autodoc]] quantizers.quantization_config.TorchAoConfig
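A minimal sketch using the torchao backend. The `"int8wo"` (int8 weight-only) shorthand and the model repo are illustrative, and the `torchao` package is assumed to be installed:

```python
import torch
from diffusers import FluxTransformer2DModel, TorchAoConfig

# int8 weight-only quantization via torchao
quant_config = TorchAoConfig("int8wo")

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
```

torchao-quantized models also compose with `torch.compile`, which is one reason to prefer this backend when targeting recent PyTorch versions.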

## DiffusersQuantizer

[[autodoc]] quantizers.base.DiffusersQuantizer