Vương Đình Minh
d6fa3298fa
update: FluxKontextInpaintPipeline support ( #11820 )
...
* update: FluxKontextInpaintPipeline support
* fix: Refactor code, remove mask_image_latents and ruff check
* feat: Add test case and fix with pytest
* Apply style fixes
* copies
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-01 23:34:27 -10:00
Ju Hoon Park
0e95aa853e
[From Single File] support from_single_file method for WanVACE3DTransformer ( #11807 )
...
* add `WanVACETransformer3DModel` in `SINGLE_FILE_LOADABLE_CLASSES`
* add rename keys for `VACE`
* fix typo
Sincere thanks to @nitinmukesh 🙇‍♂️
* support for `1.3B VACE` model
Sincere thanks to @nitinmukesh again 🙇‍♂️
* update
* update
* Apply style fixes
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-02 05:55:36 +02:00
Mikko Tukiainen
62e847db5f
Use real-valued instead of complex tensors in Wan2.1 RoPE ( #11649 )
...
* use real instead of complex tensors in Wan2.1 RoPE
* remove the redundant type conversion
* unpack rotary_emb
* register rotary embedding frequencies as non-persistent buffers
* Apply style fixes
---------
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-07-01 13:57:19 -10:00
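The commit above replaces complex phasors (`torch.polar`-style tensors) with real-valued cos/sin tables in the Wan2.1 rotary embedding. A minimal sketch of the equivalence, with illustrative function names rather than the actual Wan implementation (the real module additionally registers the tables via `register_buffer(..., persistent=False)` so they stay out of the state dict):

```python
import torch

def rope_params(seq_len: int, dim: int, theta: float = 10000.0):
    # Real-valued RoPE: return cos/sin tables instead of a complex
    # tensor of unit phasors built with torch.polar.
    freqs = 1.0 / (theta ** (torch.arange(0, dim, 2).float() / dim))
    angles = torch.outer(torch.arange(seq_len).float(), freqs)
    return angles.cos(), angles.sin()

def apply_rotary_emb(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor):
    # Rotate each (x1, x2) pair: (x1*cos - x2*sin, x1*sin + x2*cos),
    # which is exactly multiplication by exp(i*angle) on complex pairs.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out
```

Keeping everything real-valued avoids complex-dtype conversions in the hot path and plays better with dtypes (e.g. bf16) that have no complex counterpart.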
Aryan
a79c3af6bb
[single file] Cosmos ( #11801 )
...
* update
* update
* update docs
2025-07-01 18:02:58 +05:30
Aryan
f064b3bf73
Remove print statement in SCM Scheduler ( #11836 )
...
remove print
2025-06-30 09:07:34 -10:00
Benjamin Bossan
3b079ec3fa
ENH: Improve speed of function expanding LoRA scales ( #11834 )
...
* ENH Improve speed of expanding LoRA scales
Resolves #11816
The following call proved to be a bottleneck when setting a lot of LoRA
adapters in diffusers:
cdaf84a708/src/diffusers/loaders/peft.py (L482)
This is because we would repeatedly call unet.state_dict(), even though
in the standard case, it is not necessary:
cdaf84a708/src/diffusers/loaders/unet_loader_utils.py (L55)
This PR fixes this by deferring this call, so that it is only run when
it's necessary, not earlier.
* Small fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-06-30 20:25:56 +05:30
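The fix described above is a lazy-evaluation pattern: wrap the expensive `state_dict()` call so it only runs the first time a caller actually needs it. A hedged sketch with hypothetical names (`expand_lora_scales` here is illustrative, not the actual diffusers function):

```python
from functools import lru_cache

def expand_lora_scales(model, scales):
    # Defer the expensive model.state_dict() traversal: run it only the
    # first time the keys are actually needed, then reuse the cached result.
    @lru_cache(maxsize=1)
    def state_dict_keys():
        return frozenset(model.state_dict().keys())

    expanded = {}
    for name, scale in scales.items():
        if isinstance(scale, (int, float)):
            # Standard case: a plain scalar scale never touches state_dict().
            expanded[name] = scale
        else:
            # Rare case: per-layer scales need the real parameter keys.
            expanded[name] = {k: scale for k in state_dict_keys() if k.startswith(name)}
    return expanded
```

With many adapters set in a loop, the common scalar-scale path now never pays the traversal cost at all.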
Sayak Paul
bc34fa8386
[lora] feat: use exclude_modules in LoraConfig ( #11806 )
...
* feat: use exclude_modules in LoraConfig
* version-guard.
* tests and version guard.
* remove print.
* describe the test
* more detailed warning message + shift to debug
* update
* update
* update
* remove test
2025-06-30 20:08:53 +05:30
Sayak Paul
05e7a854d0
[lora] fix: lora unloading behaviour ( #11822 )
...
* fix: lora unloading behaviour
* fix
* update
2025-06-28 12:00:42 +05:30
Aryan
76ec3d1fee
Support dynamically loading/unloading loras with group offloading ( #11804 )
...
* update
* add test
* address review comments
* update
* fixes
* change decorator order to fix tests
* try fix
* fight tests
2025-06-27 23:20:53 +05:30
Sayak Paul
21543de571
remove syncs before denoising in Kontext ( #11818 )
2025-06-27 15:57:55 +05:30
Aryan
d7dd924ece
Kontext fixes ( #11815 )
...
fix
2025-06-26 13:03:44 -10:00
Sayak Paul
00f95b9755
Kontext training ( #11813 )
...
* support flux kontext
* make fix-copies
* add example
* add tests
* update docs
* update
* add note on integrity checker
* initial commit
* initial commit
* add readme section and fixes in the training script.
* add test
* rectify ckpt_id
* fix ckpt
* fixes
* change id
* update
* Update examples/dreambooth/train_dreambooth_lora_flux_kontext.py
Co-authored-by: Aryan <aryan@huggingface.co>
* Update examples/dreambooth/README_flux.md
---------
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: linoytsaban <linoy@huggingface.co>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-06-26 19:31:42 +03:00
Aryan
eea76892e8
Flux Kontext ( #11812 )
...
* support flux kontext
* make fix-copies
* add example
* add tests
* update docs
* update
* add note on integrity checker
* make fix-copies issue
* add copied froms
* make style
* update repository ids
* more copied froms
2025-06-26 21:29:59 +05:30
Animesh Jain
d93381cd41
[rfc][compile] compile method for DiffusionPipeline ( #11705 )
...
* [rfc][compile] compile method for DiffusionPipeline
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Apply style fixes
* Update docs/source/en/optimization/fp16.md
* check
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-26 08:41:38 +05:30
Dhruv Nair
3649d7b903
Follow up for Group Offload to Disk ( #11760 )
...
* update
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-06-26 07:24:24 +05:30
Sayak Paul
10c36e0b78
[chore] post release v0.34.0 ( #11800 )
...
* post release v0.34.0
* code quality
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2025-06-26 06:56:46 +05:30
Sayak Paul
8846635873
fix deprecation in lora after 0.34.0 release ( #11802 )
2025-06-25 08:48:20 -10:00
Sayak Paul
d3e27e05f0
guard omnigen processor. ( #11799 )
2025-06-24 19:15:34 +05:30
Aryan
5df02fc171
[tests] Fix group offloading and layerwise casting test interaction ( #11796 )
...
* update
* update
* update
2025-06-24 17:33:32 +05:30
Sayak Paul
7392c8ff5a
[chore] raise as early as possible in group offloading ( #11792 )
...
* raise as early as possible in group offloading
* remove check from ModuleGroup
2025-06-24 15:05:23 +05:30
YiYi Xu
7bc0a07b19
[lora] only remove hooks that we add back ( #11768 )
...
up
2025-06-23 16:49:19 -10:00
Sayak Paul
92542719ed
[docs] minor cleanups in the lora docs. ( #11770 )
...
* minor cleanups in the lora docs.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* format docs
* fix copies
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-24 08:10:07 +05:30
Yuanchen Guo
798265f2b6
[Wan] Fix mask padding in Wan VACE pipeline. ( #11778 )
2025-06-23 16:28:21 +05:30
Yao Matrix
f20b83a04f
enable cpu offloading of new pipelines on XPU & use device agnostic empty to make pipelines work on XPU ( #11671 )
...
* commit 1
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* patch 2
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* Update pipeline_pag_sana.py
* Update pipeline_sana.py
* Update pipeline_sana_controlnet.py
* Update pipeline_sana_sprint_img2img.py
* Update pipeline_sana_sprint.py
* fix style
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* fix fat-thumb while merge conflict
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* fix ci issues
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
---------
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Co-authored-by: Ilyas Moutawwakil <57442720+IlyasMoutawwakil@users.noreply.github.com>
2025-06-23 09:44:16 +05:30
Tolga Cangöz
7fc53b5d66
Fix dimensionalities in apply_rotary_emb functions' comments ( #11717 )
...
Fix dimensionality in `apply_rotary_emb` functions' comments.
2025-06-21 12:09:28 -10:00
Dhruv Nair
42077e6c73
Fix failing cpu offload test for LTX Latent Upscale ( #11755 )
...
update
2025-06-20 06:07:34 +02:00
Sayak Paul
3d8d8485fc
fix invalid component handling behaviour in PipelineQuantizationConfig ( #11750 )
...
* start
* updates
2025-06-20 07:54:12 +05:30
Dhruv Nair
195926bbdc
Update Chroma Docs ( #11753 )
...
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-06-19 19:33:19 +02:00
Sayak Paul
85a916bb8b
make group offloading work with disk/nvme transfers ( #11682 )
...
* start implementing disk offloading in group.
* delete diff file.
* updates.patch
* offload_to_disk_path
* check if safetensors already exist.
* add test and clarify.
* updates
* update todos.
* update more docs.
* update docs
2025-06-19 18:09:30 +05:30
Sayak Paul
fb57c76aa1
[LoRA] refactor lora loading at the model-level ( #11719 )
...
* factor out stuff from load_lora_adapter().
* simplifying text encoder lora loading.
* fix peft.py
* fix logging locations.
* formatting
* fix
* update
* update
* update
2025-06-19 13:06:25 +05:30
Aryan
3fba74e153
Add missing HiDream license ( #11747 )
...
update
2025-06-19 08:07:47 +05:30
Aryan
a4df8dbc40
Update more licenses to 2025 ( #11746 )
...
update
2025-06-19 07:46:01 +05:30
Sayak Paul
48eae6f420
[Quantizers] add is_compileable property to quantizers. ( #11736 )
...
add is_compileable property to quantizers.
2025-06-19 07:45:06 +05:30
Dhruv Nair
66394bf6c7
Chroma Follow Up ( #11725 )
...
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
2025-06-18 22:24:41 +05:30
Sayak Paul
62cce3045d
[chore] change to 2025 licensing for remaining ( #11741 )
...
change to 2025 licensing for remaining
2025-06-18 20:56:00 +05:30
Saurabh Misra
5ce4814af1
⚡️ Speed up method AutoencoderKLWan.clear_cache by 886% ( #11665 )
...
* ⚡️ Speed up method `AutoencoderKLWan.clear_cache` by 886%
**Key optimizations:**
- Compute the number of `WanCausalConv3d` modules in each model (`encoder`/`decoder`) **only once during initialization**, store in `self._cached_conv_counts`. This removes unnecessary repeated tree traversals at every `clear_cache` call, which was the main bottleneck (from profiling).
- The internal helper `_count_conv3d_fast` is optimized via a generator expression with `sum` for efficiency.
All comments from the original code are preserved, except for updated or removed local docstrings/comments relevant to changed lines.
**Function signatures and outputs remain unchanged.**
* Apply style fixes
* Apply suggestions from code review
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Apply style fixes
---------
Co-authored-by: codeflash-ai[bot] <148906541+codeflash-ai[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Aseem Saxena <aseem.bits@gmail.com>
2025-06-18 08:46:03 +05:30
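The optimization described in the commit body above can be sketched as follows; class and attribute names are illustrative and do not reproduce the actual `AutoencoderKLWan` code. The point is moving the module-tree traversal from every `clear_cache` call into `__init__`:

```python
import torch.nn as nn

class TinyCoder(nn.Module):
    # Stand-in for the Wan encoder/decoder: two conv layers to count.
    def __init__(self):
        super().__init__()
        self.blocks = nn.Sequential(nn.Conv3d(3, 8, 3), nn.SiLU(), nn.Conv3d(8, 8, 3))

class CachedCountAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = TinyCoder()
        self.decoder = TinyCoder()
        # Traverse the module tree exactly once, at init; clear_cache
        # then reuses the stored counts instead of re-counting.
        self._cached_conv_counts = {
            "encoder": self._count_conv3d(self.encoder),
            "decoder": self._count_conv3d(self.decoder),
        }

    @staticmethod
    def _count_conv3d(model: nn.Module) -> int:
        # Generator expression with sum(): no intermediate list is built.
        return sum(1 for m in model.modules() if isinstance(m, nn.Conv3d))

    def clear_cache(self):
        # Formerly the per-call bottleneck: a full modules() walk each time.
        self._conv_num = self._cached_conv_counts["encoder"]
        self._conv_idx = [0]
```

Since the module tree is fixed after construction, caching the counts is safe and turns each `clear_cache` call into a dictionary lookup.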
Aryan
79bd7ecc78
Support more Wan loras (VACE) ( #11726 )
...
update
2025-06-17 10:39:18 +05:30
Sayak Paul
f0dba33d82
[training] show how metadata stuff should be incorporated in training scripts. ( #11707 )
...
* show how metadata stuff should be incorporated in training scripts.
* typing
* fix
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-06-16 16:42:34 +05:30
Sayak Paul
d1db4f853a
[LoRA] fix flux lora loader when return_metadata is true for non-diffusers ( #11716 )
...
* fix flux lora loader when return_metadata is true for non-diffusers
* remove annotation
2025-06-16 14:26:35 +05:30
Edna
8adc6003ba
Chroma Pipeline ( #11698 )
...
* working state from hameerabbasi and iddl
* working state from hameerabbasi and iddl (transformer)
* working state (normalization)
* working state (embeddings)
* add chroma loader
* add chroma to mappings
* add chroma to transformer init
* take out variant stuff
* get decently far in changing variant stuff
* add chroma init
* make chroma output class
* add chroma transformer to dummy tp
* add chroma to init
* add chroma to init
* fix single file
* update
* update
* add chroma to auto pipeline
* add chroma to pipeline init
* change to chroma transformer
* take out variant from blocks
* swap embedder location
* remove prompt_2
* work on swapping text encoders
* remove mask function
* dont modify mask (for now)
* wrap attn mask
* no attn mask (can't get it to work)
* remove pooled prompt embeds
* change to my own unpooled embeddeer
* fix load
* take pooled projections out of transformer
* ensure correct dtype for chroma embeddings
* update
* use dn6 attn mask + fix true_cfg_scale
* use chroma pipeline output
* use DN6 embeddings
* remove guidance
* remove guidance embed (pipeline)
* remove guidance from embeddings
* don't return length
* dont change dtype
* remove unused stuff, fix up docs
* add chroma autodoc
* add .md (oops)
* initial chroma docs
* undo don't change dtype
* undo arxiv change
unsure why that happened
* fix hf papers regression in more places
* Update docs/source/en/api/pipelines/chroma.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* do_cfg -> self.do_classifier_free_guidance
* Update docs/source/en/api/models/chroma_transformer.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Update chroma.md
* Move chroma layers into transformer
* Remove pruned AdaLayerNorms
* Add chroma fast tests
* (untested) batch cond and uncond
* Add # Copied from for shift
* Update # Copied from statements
* update norm imports
* Revert cond + uncond batching
* Add transformer tests
* move chroma test (oops)
* chroma init
* fix chroma pipeline fast tests
* Update src/diffusers/models/transformers/transformer_chroma.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Move Approximator and Embeddings
* Fix auto pipeline + make style, quality
* make style
* Apply style fixes
* switch to new input ids
* fix # Copied from error
* remove # Copied from on protected members
* try to fix import
* fix import
* make fix-copies
* revert style fix
* update chroma transformer params
* update chroma transformer approximator init params
* update to pad tokens
* fix batch inference
* Make more pipeline tests work
* Make most transformer tests work
* fix docs
* make style, make quality
* skip batch tests
* fix test skipping
* fix test skipping again
* fix for tests
* Fix all pipeline test
* update
* push local changes, fix docs
* add encoder test, remove pooled dim
* default proj dim
* fix tests
* fix equal size list input
* update
* push local changes, fix docs
* add encoder test, remove pooled dim
* default proj dim
* fix tests
* fix equal size list input
* Revert "fix equal size list input"
This reverts commit 3fe4ad67d5.
* update
* update
* update
* update
* update
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-14 06:52:56 +05:30
Aryan
9f91305f85
Cosmos Predict2 ( #11695 )
...
* support text-to-image
* update example
* make fix-copies
* support use_flow_sigmas in EDM scheduler instead of maintain cosmos-specific scheduler
* support video-to-world
* update
* rename text2image pipeline
* make fix-copies
* add t2i test
* add test for v2w pipeline
* support edm dpmsolver multistep
* update
* update
* update
* update tests
* fix tests
* safety checker
* make conversion script work without guardrail
2025-06-14 01:51:29 +05:30
Sayak Paul
368958df6f
[LoRA] parse metadata from LoRA and save metadata ( #11324 )
...
* feat: parse metadata from lora state dicts.
* tests
* fix tests
* key renaming
* fix
* smol update
* smol updates
* load metadata.
* automatically save metadata in save_lora_adapter.
* propagate changes.
* changes
* add test to models too.
* tighter tests.
* updates
* fixes
* rename tests.
* sorted.
* Update src/diffusers/loaders/lora_base.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* review suggestions.
* removeprefix.
* propagate changes.
* fix-copies
* sd
* docs.
* fixes
* get review ready.
* one more test to catch error.
* change to a different approach.
* fix-copies.
* todo
* sd3
* update
* revert changes in get_peft_kwargs.
* update
* fixes
* fixes
* simplify _load_sft_state_dict_metadata
* update
* style fix
* update
* update
* update
* empty commit
* _pack_dict_with_prefix
* update
* TODO 1.
* todo: 2.
* todo: 3.
* update
* update
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* reraise.
* move argument.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-06-13 14:37:49 +05:30
Aryan
e52ceae375
Support Wan AccVideo lora ( #11704 )
...
* update
* make style
* Update src/diffusers/loaders/lora_conversion_utils.py
* add note explaining threshold
2025-06-13 11:55:08 +05:30
Tolga Cangöz
47ef79464f
Apply Occam's Razor in position embedding calculation ( #11562 )
...
* fix: remove redundant indexing
* style
2025-06-11 13:47:37 -10:00
Joel Schlosser
b272807bc8
Avoid DtoH sync from access of nonzero() item in scheduler ( #11696 )
2025-06-11 12:03:40 -10:00
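The pattern behind this one-line title: calling `.item()` on the result of `nonzero()` for a GPU tensor forces a blocking device-to-host copy inside the denoising loop. A hedged sketch of the before/after with illustrative helper names, not the actual scheduler code:

```python
import torch

def index_for_timestep_sync(timesteps: torch.Tensor, t: torch.Tensor) -> int:
    # .item() on the GPU result of nonzero() blocks until the device
    # finishes queued work: a hidden device-to-host synchronization.
    return (timesteps == t).nonzero().item()

def index_for_timestep_async(timesteps: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    # Keep the index on-device as a tensor and use it directly for
    # indexing (e.g. sigmas[idx]); no host round-trip, no forced sync.
    return (timesteps == t).nonzero()[0]
```

On CPU both behave identically; the win only appears on CUDA/XPU, where the tensor-indexed variant lets the host keep enqueueing kernels.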
rasmi
447ccd0679
Set _torch_version to N/A if torch is disabled. ( #11645 )
2025-06-11 11:59:54 -10:00
Aryan
f3e09114f2
Improve Wan docstrings ( #11689 )
...
* improve docstrings for wan
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* make style
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-12 01:18:40 +05:30
Sayak Paul
91545666e0
[tests] model-level device_map clarifications ( #11681 )
...
* add clarity in documentation for device_map
* docs
* fix how compiler tester mixins are used.
* propagate
* more
* typo.
* fix tests
* fix order of decorators.
* clarify more.
* more test cases.
* fix doc
* fix device_map docstring in pipeline_utils.
* more examples
* more
* update
* remove code for stuff that is already supported.
* fix stuff.
2025-06-11 22:41:59 +05:30
Sayak Paul
b6f7933044
[tests] tests for compilation + quantization (bnb) ( #11672 )
...
* start adding compilation tests for quantization.
* fixes
* make common utility.
* modularize.
* add group offloading+compile
* xfail
* update
* Update tests/quantization/test_torch_compile_utils.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* fixes
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2025-06-11 21:14:24 +05:30
Yao Matrix
33e636cea5
enable torchao test cases on XPU and switch to device agnostic APIs for test cases ( #11654 )
...
* enable torchao cases on XPU
Signed-off-by: Matrix YAO <matrix.yao@intel.com >
* device agnostic APIs
Signed-off-by: YAO Matrix <matrix.yao@intel.com >
* more
Signed-off-by: YAO Matrix <matrix.yao@intel.com >
* fix style
Signed-off-by: YAO Matrix <matrix.yao@intel.com >
* enable test_torch_compile_recompilation_and_graph_break on XPU
Signed-off-by: YAO Matrix <matrix.yao@intel.com >
* resolve comments
Signed-off-by: YAO Matrix <matrix.yao@intel.com >
---------
Signed-off-by: Matrix YAO <matrix.yao@intel.com >
Signed-off-by: YAO Matrix <matrix.yao@intel.com >
2025-06-11 15:17:06 +05:30