From 8c938fb410e79a0d04d727b68edf28e4036c0ca5 Mon Sep 17 00:00:00 2001
From: Aryan
Date: Thu, 3 Jul 2025 04:21:57 +0530
Subject: [PATCH] [docs] Add a note of `_keep_in_fp32_modules` (#11851)

* update

* Update docs/source/en/using-diffusers/schedulers.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update schedulers.md

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---
 docs/source/en/using-diffusers/schedulers.md | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/docs/source/en/using-diffusers/schedulers.md b/docs/source/en/using-diffusers/schedulers.md
index a3efbf2e80..aabb9dd31c 100644
--- a/docs/source/en/using-diffusers/schedulers.md
+++ b/docs/source/en/using-diffusers/schedulers.md
@@ -242,3 +242,16 @@ unet = UNet2DConditionModel.from_pretrained(
 )
 unet.save_pretrained("./local-unet", variant="non_ema")
 ```
+
+Use the `torch_dtype` argument in [`~ModelMixin.from_pretrained`] to specify the dtype a model is loaded in.
+
+```py
+import torch
+from diffusers import AutoModel
+
+unet = AutoModel.from_pretrained(
+    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet", torch_dtype=torch.float16
+)
+```
+
+You can also use the [torch.Tensor.to](https://docs.pytorch.org/docs/stable/generated/torch.Tensor.to.html) method to convert to the specified dtype on the fly. Unlike the `torch_dtype` argument, which respects `_keep_in_fp32_modules`, it converts *all* weights. This is important for models whose layers must remain in fp32 for numerical stability and the best generation quality (see an example [here](https://github.com/huggingface/diffusers/blob/f864a9a352fa4a220d860bfdd1782e3e5af96382/src/diffusers/models/transformers/transformer_wan.py#L374)).
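
A minimal sketch of the behavior the added note describes. It assumes the `Wan-AI/Wan2.1-T2V-1.3B-Diffusers` checkpoint and its `transformer` subfolder, picked because the linked `transformer_wan.py` defines `_keep_in_fp32_modules`; any model class that sets this attribute behaves the same way.

```py
import torch
from diffusers import AutoModel

# Loading with torch_dtype keeps the modules listed in the model class's
# _keep_in_fp32_modules in fp32 and casts everything else to fp16.
transformer = AutoModel.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",  # assumed checkpoint, for illustration only
    subfolder="transformer",
    torch_dtype=torch.float16,
)
print({p.dtype for p in transformer.parameters()})  # expect both float16 and float32

# Calling .to() afterwards casts *all* weights, including the ones the
# model class intended to keep in fp32.
transformer.to(torch.float16)
print({p.dtype for p in transformer.parameters()})  # expect only float16
```

If every weight in a single dtype is acceptable, `.to()` works; otherwise prefer `torch_dtype` so the fp32-sensitive layers keep their precision.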