naykun
cc48b9368f
Performance improvement for Qwen Image Edit ( #12190 )
...
* fix(qwen-image-edit):
- update condition reshaping logic to improve editing performance
* fix(qwen-image-edit):
- remove _auto_resize
2025-08-19 08:45:18 -04:00
naykun
dba4e007fe
Emergency fix for Qwen-Image-Edit ( #12188 )
...
fix(qwen-image):
shape calculation fix
2025-08-19 14:42:26 +05:30
Linoy Tsaban
8d1de40891
[Wan 2.2 LoRA] add support for 2nd transformer lora loading + wan 2.2 lightx2v lora ( #12074 )
...
* add alpha
* load into 2nd transformer
* Update src/diffusers/loaders/lora_conversion_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/loaders/lora_conversion_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* pr comments
* pr comments
* pr comments
* fix
* fix
* Apply style fixes
* fix copies
* fix
* fix copies
* Update src/diffusers/loaders/lora_pipeline.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* revert change
* revert change
* fix copies
* up
* fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: linoy <linoy@hf.co >
2025-08-19 08:32:39 +05:30
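A minimal sketch of what this change enables: loading a Wan 2.2 lightx2v-style LoRA into the second transformer as well as the first. The `load_into_transformer_2` kwarg, the checkpoint id, and the LoRA repo name are assumptions based on this PR's description, not confirmed details.

```python
# Hypothetical sketch, assuming the Wan 2.2 pipeline exposes two transformers
# and that load_lora_weights gained a load_into_transformer_2 flag in this PR.
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.2-T2V-A14B-Diffusers",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
)

# Load the LoRA into the primary transformer.
pipe.load_lora_weights("lightx2v/Wan2.2-Lightning", adapter_name="lightning")

# Per this PR, the same (or a matching) LoRA can also be loaded into the
# second transformer.
pipe.load_lora_weights(
    "lightx2v/Wan2.2-Lightning",
    adapter_name="lightning_2",
    load_into_transformer_2=True,  # assumed kwarg name
)
```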
Sayak Paul
8cc528c5e7
[chore] add lora button to qwenimage docs ( #12183 )
...
up
2025-08-19 07:13:24 +05:30
Taechai
3c50f0cdad
Update README.md ( #12182 )
...
* Update README.md
Specify the full dir
* Update examples/dreambooth/README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-08-18 13:02:49 -07:00
Sayak Paul
555b6cc34f
[LoRA] feat: support more Qwen LoRAs from the community. ( #12170 )
...
* feat: support more Qwen LoRAs from the community.
* revert unrelated changes.
* Revert "revert unrelated changes."
This reverts commit 82dea555dc .
2025-08-18 20:56:28 +05:30
Sayak Paul
5b53f67f06
[docs] Clarify guidance scale in Qwen pipelines ( #12181 )
...
* add clarification regarding guidance_scale in QwenImage
* propagate.
2025-08-18 20:10:23 +05:30
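A short sketch of the distinction the docs clarify, assuming the QwenImage pipeline semantics: `guidance_scale` is the distilled/embedded guidance value, while true classifier-free guidance with a negative prompt is controlled by `true_cfg_scale`.

```python
# Sketch only; parameter semantics assumed from the QwenImage pipeline API.
import torch
from diffusers import QwenImagePipeline

pipe = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="a cat wearing a tiny wizard hat",
    negative_prompt="blurry, low quality",
    guidance_scale=4.0,   # embedded guidance, not classic CFG
    true_cfg_scale=1.0,   # values > 1.0 enable real CFG with the negative prompt
    num_inference_steps=30,
).images[0]
```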
MQY
9918d13eba
fix(training_utils): wrap device in list for DiffusionPipeline ( #12178 )
...
- Modify offload_models function to handle DiffusionPipeline correctly
- Ensure compatibility with both single and multiple module inputs
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-08-18 13:56:17 +05:30
Sayak Paul
e824660436
fix: caching allocator behaviour for quantization. ( #12172 )
...
* fix: caching allocator behaviour for quantization.
* up
* Update src/diffusers/models/model_loading_utils.py
Co-authored-by: Aryan <aryan@huggingface.co >
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2025-08-18 13:16:18 +05:30
Leo Jiang
03be15e890
[Docs] fix typo in Qwen Image docs ( #12144 )
...
fix typo in qwen image docs
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
Co-authored-by: Aryan <aryan@huggingface.co >
2025-08-18 11:55:42 +05:30
Junyu Chen
85cbe589a7
Minor modification to support DC-AE-turbo ( #12169 )
...
* minor modification to support dc-ae-turbo
* minor
2025-08-18 11:37:36 +05:30
Sayak Paul
4d9b82297f
[qwen] Qwen image edit followups ( #12166 )
...
* add docs.
* more docs.
* xfail full compilation for Qwen for now.
* tests
* up
* up
* up
* reviewer feedback.
2025-08-18 08:33:07 +05:30
Lambert
76c809e2ef
remove silu for CogView4 ( #12150 )
...
* CogView4: remove SiLU in final AdaLN (match Megatron); add switch to AdaLayerNormContinuous; split temb_raw/temb_blocks
* CogView4: remove SiLU in final AdaLN (match Megatron); add switch to AdaLayerNormContinuous; split temb_raw/temb_blocks
* CogView4: remove SiLU in final AdaLN (match Megatron); add switch to AdaLayerNormContinuous; split temb_raw/temb_blocks
* CogView4: use local final AdaLN (no SiLU) per review; keep generic AdaLN unchanged
* re-add configs as normal files (no LFS)
* Apply suggestions from code review
* Apply style fixes
---------
Co-authored-by: 武嘉涵 <lambert@wujiahandeMacBook-Pro.local>
Co-authored-by: Aryan <contact.aryanvs@gmail.com >
Co-authored-by: Aryan <aryan@huggingface.co >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-08-18 08:02:01 +05:30
naykun
e682af2027
Qwen Image Edit Support ( #12164 )
...
* feat(qwen-image):
add qwen-image-edit support
* fix(qwen image):
- compatible with torch.compile in new rope setting
- fix init import
- add prompt truncation in img2img and inpaint pipe
- remove unused logic and comment
- add copy statement
- guard logic for rope video shape tuple
* fix(qwen image):
- make fix-copies
- update doc
2025-08-16 19:24:29 -10:00
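A minimal usage sketch for the new edit pipeline. The checkpoint id and the exact call signature are assumptions modeled on the other QwenImage pipelines.

```python
# Sketch only; checkpoint id and image URL are illustrative.
import torch
from diffusers import QwenImageEditPipeline
from diffusers.utils import load_image

pipe = QwenImageEditPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit", torch_dtype=torch.bfloat16
).to("cuda")

image = load_image("https://example.com/input.png")  # placeholder URL
edited = pipe(
    image=image,
    prompt="replace the background with a snowy mountain range",
    num_inference_steps=50,
).images[0]
```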
Steven Liu
a58a4f665b
[docs] Quickstart ( #12128 )
...
* start
* feedback
* feedback
* feedback
2025-08-15 13:48:01 -07:00
Yao Matrix
8701e8644b
make test_gguf all pass on xpu ( #12158 )
...
Signed-off-by: Yao, Matrix <matrix.yao@intel.com >
2025-08-16 01:00:31 +05:30
Sayak Paul
58bf268261
support hf_quantizer in cache warmup. ( #12043 )
...
* support hf_quantizer in cache warmup.
* reviewer feedback
* up
* up
2025-08-14 18:57:33 +05:30
Sayak Paul
1b48db4c8f
[core] respect local_files_only=True when using sharded checkpoints ( #12005 )
...
* tighten compilation tests for quantization
* feat: model_info but local.
* up
* Revert "tighten compilation tests for quantization"
This reverts commit 8d431dc967 .
* up
* reviewer feedback.
* reviewer feedback.
* up
* up
* empty
* update
---------
Co-authored-by: DN6 <dhruv.nair@gmail.com >
2025-08-14 14:50:51 +05:30
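A sketch of the behavior this fix restores: a sharded checkpoint that is already in the local cache should load with `local_files_only=True` without any Hub calls. The model id is illustrative.

```python
import torch
from diffusers import FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    local_files_only=True,  # must resolve the shard index from the cache, not the network
)
```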
Sayak Paul
46a0c6aa82
feat: cuda device_map for pipelines. ( #12122 )
...
* feat: cuda device_map for pipelines.
* up
* up
* empty
* up
2025-08-14 10:31:24 +05:30
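A minimal sketch of the pipeline-level `device_map` this PR adds, assuming a plain device string places all components on that device at load time. The model id is illustrative.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    device_map="cuda",  # new: a single accelerator device accepted as device_map
)
```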
Steven Liu
421ee07e33
[docs] Parallel loading of shards ( #12135 )
...
* initial
* feedback
* Update docs/source/en/using-diffusers/loading.md
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-08-14 09:39:40 +05:30
Sayak Paul
123506ee59
make parallel loading flag a part of constants. ( #12137 )
2025-08-14 09:36:47 +05:30
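A sketch of how the parallel shard loading covered by the two entries above is switched on. The environment-variable name is an assumption taken from the related docs/constants changes.

```python
# Assumed flag name; set it before importing diffusers so the constant picks it up.
import os

os.environ["HF_ENABLE_PARALLEL_LOADING"] = "yes"

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
)
```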
Alrott SlimRG
8c48ec05ed
Fix bf16/fp16 for pipeline_wan_vace.py ( #12143 )
...
* Fix bf16/fp16 for pipeline_wan_vace.py
* Update pipeline_wan_vace.py
* try removing xfail decorator
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2025-08-14 05:04:00 +05:30
Steven Liu
a6d2fc2c1d
[docs] Refresh effective and efficient doc ( #12134 )
...
* refresh
* init
* feedback
2025-08-13 11:14:21 -07:00
Sam Yuan
bc2762cce9
try to use deepseek with an agent to auto i18n to zh ( #12032 )
...
* try to use deepseek with an agent to auto i18n to zh
Signed-off-by: SamYuan1990 <yy19902439@126.com >
* add two more docs
Signed-off-by: SamYuan1990 <yy19902439@126.com >
* fix, updated some prompt for better translation
Signed-off-by: SamYuan1990 <yy19902439@126.com >
* Try to pass CI check
Signed-off-by: SamYuan1990 <yy19902439@126.com >
* fix up for human review process
Signed-off-by: SamYuan1990 <yy19902439@126.com >
* fix up
Signed-off-by: SamYuan1990 <yy19902439@126.com >
* fix review comments
Signed-off-by: SamYuan1990 <yy19902439@126.com >
---------
Signed-off-by: SamYuan1990 <yy19902439@126.com >
2025-08-13 08:26:24 -07:00
Sayak Paul
baa9b582f3
[core] parallel loading of shards ( #12028 )
...
* checking.
* checking
* checking
* up
* up
* up
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* up
* up
* fix
* review feedback.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2025-08-13 10:33:20 +05:30
Nguyễn Trọng Tuấn
da096a4999
Add QwenImage Inpainting and Img2Img pipeline ( #12117 )
...
* feat/qwenimage-img2img-inpaint
* Update qwenimage.md to reflect new pipelines and add # Copied from convention
* tiny fix for passing ruff check
* reformat code
* fix copied from statement
* fix copied from statement
* copy and style fix
* fix dummies
---------
Co-authored-by: TuanNT-ZenAI <tuannt.zenai@gmail.com >
Co-authored-by: DN6 <dhruv.nair@gmail.com >
2025-08-13 09:41:50 +05:30
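A minimal sketch of the two pipelines added here. The checkpoint id, image URLs, and strength values are illustrative, not taken from the PR.

```python
import torch
from diffusers import QwenImageImg2ImgPipeline, QwenImageInpaintPipeline
from diffusers.utils import load_image

init_image = load_image("https://example.com/photo.png")  # placeholder URL
mask_image = load_image("https://example.com/mask.png")   # placeholder URL

img2img = QwenImageImg2ImgPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")
styled = img2img(
    prompt="turn the scene into autumn", image=init_image, strength=0.6
).images[0]

inpaint = QwenImageInpaintPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")
filled = inpaint(
    prompt="a red vintage car", image=init_image, mask_image=mask_image, strength=0.85
).images[0]
```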
Leo Jiang
480fb357a3
[Bugfix] typo fix in NPU FA ( #12129 )
...
[Bugfix] fix typo in NPU FA
Co-authored-by: Jη³ι‘΅ <jiangshuo9@h-partners.com >
Co-authored-by: Aryan <aryan@huggingface.co >
2025-08-12 22:12:19 +05:30
Steven Liu
38740ddbd8
[docs] Modular diffusers ( #11931 )
...
* start
* draft
* state, pipelineblock, apis
* sequential
* fix links
* new
* loop, auto
* fix
* pipeline
* guiders
* components manager
* reviews
* update
* update
* update
---------
Co-authored-by: DN6 <dhruv.nair@gmail.com >
2025-08-12 18:50:20 +05:30
IrisRainbowNeko
72282876b2
Add low_cpu_mem_usage option to from_single_file to align with from_pretrained ( #12114 )
...
* align meta device of from_single_file with from_pretrained
* update docstr
* Apply style fixes
---------
Co-authored-by: IrisRainbowNeko <rainbow-neko@outlook.com >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-08-12 16:36:55 +05:30
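A sketch of the option this adds: `from_single_file` accepting `low_cpu_mem_usage` like `from_pretrained`, so weights are materialized via the meta device to reduce peak RAM. The file URL is illustrative.

```python
import torch
from diffusers import FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/flux1-dev.safetensors",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,  # the newly aligned option
)
```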
Dhruv Nair
3552279a23
[Modular] Add experimental feature warning for Modular Diffusers ( #12127 )
...
update
2025-08-12 10:25:02 +05:30
Steven Liu
f8ba5cd77a
[docs] Cache link ( #12105 )
...
cache
2025-08-11 11:03:59 -07:00
Sayak Paul
c9c8217306
[chore] complete the licensing statement. ( #12001 )
...
complete the licensing statement.
2025-08-11 22:15:15 +05:30
Aryan
135df5be9d
[tests] Add inference test slices for SD3 and remove unnecessary tests ( #12106 )
...
* update
* nuke LoC for inference slices
2025-08-11 18:36:09 +05:30
Sayak Paul
4a9dbd56f6
enable compilation in qwen image. ( #12061 )
...
* update
* update
* update
* enable compilation in qwen image.
* add tests
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2025-08-11 14:37:37 +05:30
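A sketch of the compilation this PR enables for the QwenImage transformer; the compile settings shown are illustrative (a later entry notes full-graph compilation is still xfailed).

```python
import torch
from diffusers import QwenImagePipeline

pipe = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")

# Compile only the transformer; default (non-fullgraph) mode.
pipe.transformer = torch.compile(pipe.transformer)

image = pipe("a lighthouse at dusk", num_inference_steps=30).images[0]
```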
Dhruv Nair
630d27fe5b
[Modular] More Updates for Custom Code Loading ( #11969 )
...
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-08-11 13:26:58 +05:30
Sayak Paul
f442955c6e
[lora] support loading loras from lightx2v/Qwen-Image-Lightning ( #12119 )
...
* feat: support qwen lightning lora.
* add docs.
* fix
2025-08-11 09:27:10 +05:30
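A sketch of loading a Lightning LoRA from the repo named in the commit title; the weight filename and the low step count are assumptions.

```python
import torch
from diffusers import QwenImagePipeline

pipe = QwenImagePipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")

pipe.load_lora_weights(
    "lightx2v/Qwen-Image-Lightning",
    weight_name="Qwen-Image-Lightning-8steps-V1.0.safetensors",  # hypothetical filename
)

image = pipe("a cozy reading nook", num_inference_steps=8, true_cfg_scale=1.0).images[0]
```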
Sayak Paul
ff9a387618
[core] add modular support for Flux I2I ( #12086 )
...
* start
* encoder.
* up
* up
* up
* up
* up
* up
2025-08-11 07:23:23 +05:30
Sayak Paul
03c3f69aa5
[docs] diffusers gguf checkpoints ( #12092 )
...
* feat: support loading diffusers format gguf checkpoints.
* update
* update
* qwen
* up
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* up
---------
Co-authored-by: DN6 <dhruv.nair@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-08-09 08:49:49 +05:30
Sayak Paul
f20aba3e87
[GGUF] feat: support loading diffusers format gguf checkpoints. ( #11684 )
...
* feat: support loading diffusers format gguf checkpoints.
* update
* update
* qwen
---------
Co-authored-by: DN6 <dhruv.nair@gmail.com >
2025-08-08 22:27:15 +05:30
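A sketch of GGUF loading via `from_single_file` with `GGUFQuantizationConfig`; with this PR the GGUF state dict may also use the diffusers key layout. The repo and filename shown are illustrative.

```python
import torch
from diffusers import FluxTransformer2DModel, GGUFQuantizationConfig

transformer = FluxTransformer2DModel.from_single_file(
    "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q4_K_S.gguf",
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
```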
YiYi Xu
ccf2c31188
[Modular] Fast Tests ( #11937 )
...
* rearrage the params to groups: default params /image params /batch params / callback params
* make style
* add names property to pipeline blocks
* style
* remove more unused func
* prepare_latents_inpaint always return noise and image_latents
* up
* up
* update
* update
* update
* update
* update
* update
* update
* update
---------
Co-authored-by: DN6 <dhruv.nair@gmail.com >
2025-08-08 19:42:13 +05:30
Sayak Paul
7b10e4ae65
[tests] device placement for non-denoiser components in group offloading LoRA tests ( #12103 )
...
up
2025-08-08 13:34:29 +05:30
Beinsezii
3c0531bc50
lora_conversion_utils: replace lora up/down with a/b even if transformer. in key ( #12101 )
...
lora_conversion_utils: replace lora up/down with a/b even if transformer. in key
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-08-08 11:21:47 +05:30
Sayak Paul
a8e47978c6
[lora] adapt new LoRA config injection method ( #11999 )
...
* use state dict when setting up LoRA.
* up
* up
* up
* comment
* up
* up
2025-08-08 09:22:48 +05:30
YiYi Xu
50e18ee698
[qwen] device typo ( #12099 )
...
up
2025-08-07 12:27:39 -10:00
DefTruth
4b17fa2a2e
fix flux type hint ( #12089 )
...
fix-flux-type-hint
2025-08-07 13:00:15 +05:30
dg845
d45199a2f1
Implement Frequency-Decoupled Guidance (FDG) as a Guider ( #11976 )
...
* Initial commit implementing frequency-decoupled guidance (FDG) as a guider
* Update FrequencyDecoupledGuidance docstring to describe FDG
* Update project so that it accepts any number of non-batch dims
* Change guidance_scale and other params to accept a list of params for each freq level
* Add comment with Laplacian pyramid shapes
* Add function to import_utils to check if the kornia package is available
* Only import from kornia if package is available
* Fix bug: use pred_cond/uncond in freq space rather than data space
* Allow guidance rescaling to be done in data space or frequency space (speculative)
* Add kornia install instructions to kornia import error message
* Add config to control whether operations are upcast to fp64
* Add parallel_weights recommended values to docstring
* Apply style fixes
* make fix-copies
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co >
2025-08-07 11:21:02 +05:30
Sayak Paul
061163142d
[tests] tighten compilation tests for quantization ( #12002 )
...
* tighten compilation tests for quantization
* up
* up
2025-08-07 10:13:14 +05:30
Dhruv Nair
5780776c8a
Make prompt_2 optional in Flux Pipelines ( #12073 )
...
* update
* update
2025-08-06 15:40:12 -10:00
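A sketch of the call pattern this change supports: passing only `prompt` and letting `prompt_2` fall back to it for the second text encoder (the fallback is the documented Flux behavior; which code paths this PR specifically relaxed is not stated here).

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# No prompt_2: the same prompt is used for both text encoders.
image = pipe(
    prompt="a watercolor fox", guidance_scale=3.5, num_inference_steps=28
).images[0]
```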
Aryan
f19421e27c
Helper functions to return skip-layer compatible layers ( #12048 )
...
update
Co-authored-by: Γlvaro Somoza <asomoza@users.noreply.github.com >
2025-08-06 07:55:16 -10:00
Aryan
69cdc25746
Fix group offloading synchronization bug for parameter-only GroupModule's ( #12077 )
...
* update
* update
* refactor
* fuck yeah
* make style
* Update src/diffusers/hooks/group_offloading.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/hooks/group_offloading.py
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-08-06 21:11:00 +05:30
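A sketch of the group-offloading setup this fix concerns; the argument names follow the public `enable_group_offload` API as understood here and may need adjusting.

```python
import torch
from diffusers import FluxTransformer2DModel

transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
)

transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
    use_stream=True,  # async streams are where the synchronization bug surfaced
)
```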