From 2a19302e6522ff2106933843bd0d88d82c3ea5a8 Mon Sep 17 00:00:00 2001
From: DN6
Date: Tue, 13 May 2025 14:50:13 +0530
Subject: [PATCH] update

---
 .../api/models/hidream_image_transformer.md   | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/docs/source/en/api/models/hidream_image_transformer.md b/docs/source/en/api/models/hidream_image_transformer.md
index 3c84a2afad..b7af6f4c89 100644
--- a/docs/source/en/api/models/hidream_image_transformer.md
+++ b/docs/source/en/api/models/hidream_image_transformer.md
@@ -26,24 +26,33 @@ transformer = HiDreamImageTransformer2DModel.from_pretrained("HiDream-ai/HiDream
 GGUF checkpoints for the `HiDreamImageTransformer2DModel` can be loaded using `~FromOriginalModelMixin.from_single_file`
 
 ```python
-from diffusers import HiDreamImageTransformer2DModel
+from diffusers import GGUFQuantizationConfig, HiDreamImageTransformer2DModel
 
 ckpt_path = "https://huggingface.co/city96/HiDream-I1-Dev-gguf/blob/main/hidream-i1-dev-Q2_K.gguf"
-transformer = HiDreamImageTransformer2DModel.from_single_file(ckpt_path, torch_dtype=torch.bfloat16)
+transformer = HiDreamImageTransformer2DModel.from_single_file(
+    ckpt_path,
+    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
+    torch_dtype=torch.bfloat16
+)
 ```
 
 If you are trying to use a GGUF checkpoint for the `HiDream-ai/HiDream-E1-Full` model, you will have to pass in a `config` argument to properly configure the model. This is because the HiDream I1 and E1 models share the same state dict keys, so it is currently not possible to automatically infer the model type from the checkpoint itself.
 
 ```python
-from diffusers import HiDreamImageTransformer2DModel
+from diffusers import GGUFQuantizationConfig, HiDreamImageTransformer2DModel
 
 ckpt_path = "https://huggingface.co/ND911/HiDream_e1_full_bf16-ggufs/blob/main/hidream_e1_full_bf16-Q2_K.gguf"
-transformer = HiDreamImageTransformer2DModel.from_single_file(ckpt_path, config="HiDream-ai/HiDream-E1-Full", subfolder="transformer", torch_dtype=torch.bfloat16)
+transformer = HiDreamImageTransformer2DModel.from_single_file(
+    ckpt_path,
+    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
+    config="HiDream-ai/HiDream-E1-Full",
+    subfolder="transformer",
+    torch_dtype=torch.bfloat16
+)
 ```
 
-
 ## HiDreamImageTransformer2DModel
 
 [[autodoc]] HiDreamImageTransformer2DModel
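
For reference, a self-contained sketch combining the two snippets this patch adds. The variable names `i1_ckpt` and `e1_ckpt` are illustrative, and the explicit `import torch` is an assumption made here because the doc snippets use `torch.bfloat16` without showing the import.

```python
import torch

from diffusers import GGUFQuantizationConfig, HiDreamImageTransformer2DModel

# HiDream-I1: the model config can be inferred from the checkpoint itself.
i1_ckpt = "https://huggingface.co/city96/HiDream-I1-Dev-gguf/blob/main/hidream-i1-dev-Q2_K.gguf"
i1_transformer = HiDreamImageTransformer2DModel.from_single_file(
    i1_ckpt,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# HiDream-E1: pass `config` and `subfolder` explicitly, since I1 and E1 share
# the same state dict keys and the model type cannot be inferred automatically.
e1_ckpt = "https://huggingface.co/ND911/HiDream_e1_full_bf16-ggufs/blob/main/hidream_e1_full_bf16-Q2_K.gguf"
e1_transformer = HiDreamImageTransformer2DModel.from_single_file(
    e1_ckpt,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    config="HiDream-ai/HiDream-E1-Full",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)
```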