Sayak Paul
fb57c76aa1
[LoRA] refactor lora loading at the model level ( #11719 )
* factor out stuff from load_lora_adapter().
* simplifying text encoder lora loading.
* fix peft.py
* fix logging locations.
* formatting
* fix
* update
* update
* update
2025-06-19 13:06:25 +05:30
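For context, a minimal sketch of the model-level loading path this PR refactors; the LoRA repo and file names below are hypothetical placeholders, and load_lora_adapter() is the public entry point in src/diffusers/loaders/peft.py:

    import torch
    from diffusers import FluxTransformer2DModel

    transformer = FluxTransformer2DModel.from_pretrained(
        "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
    )
    # Attach a LoRA directly to the model, without going through a pipeline.
    transformer.load_lora_adapter(
        "user/some-flux-lora",                           # hypothetical Hub repo
        weight_name="pytorch_lora_weights.safetensors",  # conventional diffusers file name
        prefix="transformer",                            # strip the pipeline-level key prefix
    )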
Sayak Paul
62cce3045d
[chore] change to 2025 licensing for remaining ( #11741 )
change to 2025 licensing for remaining
2025-06-18 20:56:00 +05:30
Sayak Paul
368958df6f
[LoRA] parse metadata from LoRA and save metadata ( #11324 )
* feat: parse metadata from lora state dicts.
* tests
* fix tests
* key renaming
* fix
* smol update
* smol updates
* load metadata.
* automatically save metadata in save_lora_adapter.
* propagate changes.
* changes
* add test to models too.
* tighter tests.
* updates
* fixes
* rename tests.
* sorted.
* Update src/diffusers/loaders/lora_base.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* review suggestions.
* removeprefix.
* propagate changes.
* fix-copies
* sd
* docs.
* fixes
* get review ready.
* one more test to catch error.
* change to a different approach.
* fix-copies.
* todo
* sd3
* update
* revert changes in get_peft_kwargs.
* update
* fixes
* fixes
* simplify _load_sft_state_dict_metadata
* update
* style fix
* update
* update
* update
* empty commit
* _pack_dict_with_prefix
* update
* TODO 1.
* todo: 2.
* todo: 3.
* update
* update
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* reraise.
* move argument.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-06-13 14:37:49 +05:30
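With this PR, LoRA configuration (e.g., rank/alpha) is serialized as safetensors header metadata and parsed back on load. A hedged sketch of inspecting such a file; the exact metadata keys are an internal detail and may differ:

    from safetensors import safe_open

    # The header metadata travels with the weights in the same .safetensors file.
    with safe_open("pytorch_lora_weights.safetensors", framework="pt") as f:
        metadata = f.metadata()  # dict[str, str], or None if nothing was stored
    print(metadata)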
co63oc
8183d0f16e
Fix typos in strings and comments ( #11476 )
* Fix typos in strings and comments
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
* Update src/diffusers/hooks/hooks.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/hooks/hooks.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update layerwise_casting.py
* Apply style fixes
* update
---------
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-30 18:49:00 +05:30
Sayak Paul
a4da216125
[LoRA] improve LoRA fusion tests ( #11274 )
* improve lora fusion tests
* more improvements.
* remove comment
* update
* relax tolerance.
* num_fused_loras as a property
Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com>
* updates
* update
* fix
* fix
Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com>
* Update src/diffusers/loaders/lora_base.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
---------
Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2025-05-27 09:02:12 -07:00
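The flow these tests exercise, as a minimal sketch (the LoRA repo id is a placeholder):

    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("user/some-sdxl-lora", adapter_name="style")  # hypothetical repo
    pipe.fuse_lora(lora_scale=0.9)    # merge the LoRA deltas into the base weights
    assert pipe.num_fused_loras == 1  # tracked as a property after this PR
    pipe.unfuse_lora()                # restore the unfused base weights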
Sayak Paul
a5f4cc7f84
[LoRA] minor fix for load_lora_weights() for Flux and a test ( #11595 )
* fix peft delete adapters for flux.
* add test
* empty commit
2025-05-22 15:44:45 +05:30
Yao Matrix
8c661ea586
enable lora cases on XPU ( #11506 )
* enable lora cases on XPU
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
* remove hunyuanvideo xpu expectation
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
---------
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
2025-05-06 14:59:50 +05:30
Sayak Paul
4b868f14c1
post release 0.33.0 ( #11255 )
* post release
* update
* fix deprecations
* remaining
* update
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2025-04-15 06:50:08 -10:00
Hameer Abbasi
9352a5ca56
[LoRA] Add LoRA support to AuraFlow ( #10216 )
* Add AuraFlowLoraLoaderMixin
* Add comments, remove qkv fusion
* Add Tests
* Add AuraFlowLoraLoaderMixin to documentation
* Add Suggested changes
* Change attention_kwargs->joint_attention_kwargs
* Rebasing derp.
* fix
* fix
* Quality fixes.
* make style
* `make fix-copies`
* `ruff check --fix`
* Attempt 1 to fix tests.
* Attempt 2 to fix tests.
* Attempt 3 to fix tests.
* Address review comments.
* Rebasing derp.
* Get more tests passing by copying from Flux. Address review comments.
* `joint_attention_kwargs`->`attention_kwargs`
* Add `lora_scale` property for TE LoRAs.
* Make test better.
* Remove useless property.
* Skip TE-only tests for AuraFlow.
* Support LoRA for non-CLIP TEs.
* Restore LoRA tests.
* Undo adding LoRA support for non-CLIP TEs.
* Undo support for TE in AuraFlow LoRA.
* `make fix-copies`
* Sync with upstream changes.
* Remove unneeded stuff.
* Mirror `Lumina2`.
* Skip for MPS.
* Address review comments.
* Remove duplicated code.
* Remove unnecessary code.
* Remove repeated docs.
* Propagate attention.
* Fix TE target modules.
* MPS fix for LoRA tests.
* Unrelated TE LoRA tests fix.
* Fix AuraFlow LoRA tests by applying to the right denoiser layers.
Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com>
* Apply style fixes
* empty commit
* Fix the repo consistency issues.
* Remove unrelated changes.
* Style.
* Fix `test_lora_fuse_nan`.
* fix quality issues.
* `pytest.xfail` -> `ValueError`.
* Add back `skip_mps`.
* Apply style fixes
* `make fix-copies`
---------
Co-authored-by: Warlord-K <warlordk28@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-15 10:41:28 +05:30
Sayak Paul
ea5a6a8b7c
[Tests] Cleanup lora tests utils ( #11276 )
* start cleaning up lora test utils for reusability
* update
* updates
* updates
2025-04-10 15:50:34 +05:30
Yao Matrix
68663f8a17
fix test_vanilla_funetuning failure on XPU and A100 ( #11263 )
* fix test_vanilla_funetuning failure on XPU and A100
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
* change back to 5e-2
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
---------
Signed-off-by: Matrix Yao <matrix.yao@intel.com>
2025-04-10 05:55:07 +01:00
Sayak Paul
20e4b6a628
[LoRA] change to warning from info when notifying the users about a LoRA no-op ( #11044 )
* move to warning.
* test related changes.
2025-03-12 21:20:48 +05:30
Aryan
8eefed65bd
[LoRA] CogView4 ( #10981 )
* update
* make fix-copies
* update
2025-03-10 20:24:05 +05:30
Sayak Paul
26149c0ecd
[LoRA] Improve warning messages when LoRA loading becomes a no-op ( #10187 )
* updates
* updates
* updates
* updates
* notebooks revert
* fix-copies.
* seeing
* fix
* revert
* fixes
* fixes
* fixes
* remove print
* fix
* conflicts ii.
* updates
* fixes
* better filtering of prefix.
---------
Co-authored-by: hlky <hlky@hlky.ac>
2025-03-10 09:28:32 +05:30
Aryan
3ee899fa0c
[LoRA] Support Wan ( #10943 )
* update
* refactor image-to-video pipeline
* update
* fix copied from
* use FP32LayerNorm
2025-03-05 01:27:34 +05:30
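A hedged usage sketch of the new Wan LoRA support (the LoRA repo id is a placeholder):

    import torch
    from diffusers import WanPipeline

    pipe = WanPipeline.from_pretrained(
        "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
    ).to("cuda")
    pipe.load_lora_weights("user/some-wan-lora")  # hypothetical repo
    frames = pipe(prompt="a cat surfing a wave", num_frames=33).frames[0]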
Fanli Lin
7855ac597e
[tests] make tests device-agnostic (part 4) ( #10508 )
* initial commit
* fix empty cache
* fix one more
* fix style
* update device functions
* update
* update
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update tests/pipelines/controlnet/test_controlnet.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update tests/pipelines/controlnet/test_controlnet.py
Co-authored-by: hlky <hlky@hlky.ac>
* with gc.collect
* update
* make style
* check_torch_dependencies
* add mps empty cache
* add changes
* bug fix
* enable on xpu
* update more cases
* revert
* revert back
* Update test_stable_diffusion_xl.py
* Update tests/pipelines/stable_diffusion/test_stable_diffusion.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update tests/pipelines/stable_diffusion/test_stable_diffusion.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update tests/pipelines/stable_diffusion/test_stable_diffusion_img2img.py
Co-authored-by: hlky <hlky@hlky.ac>
* Apply suggestions from code review
Co-authored-by: hlky <hlky@hlky.ac>
* add test marker
---------
Co-authored-by: hlky <hlky@hlky.ac>
2025-03-04 08:26:06 +00:00
Sayak Paul
764d7ed49a
[Tests] fix: lumina2 lora fuse_nan test ( #10911 )
fix: lumina2 lora fuse_nan test
2025-02-26 22:44:49 +05:30
Sayak Paul
f10d3c6d04
[LoRA] add LoRA support to Lumina2 and fine-tuning script ( #10818 )
* feat: lora support for Lumina2.
* fix-copies.
* updates
* updates
* docs.
* fix
* add: training script.
* tests
* updates
* updates
* major updates.
* updates
* fixes
* docs.
* updates
* updates
2025-02-20 09:41:51 +05:30
Sayak Paul
6fe05b9b93
[LoRA] make set_adapters() robust on silent failures. ( #9618 )
* make set_adapters() robust on silent failures.
* fixes to tests
* flaky decorator.
* fix
* flaky to sd3.
* remove warning.
* sort
* quality
* skip test_simple_inference_with_text_denoiser_multi_adapter_block_lora
* skip testing unsupported features.
* raise warning instead of error.
2025-02-19 14:33:57 +05:30
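What the hardened behavior looks like in use, as a sketch (repo ids are placeholders):

    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
    )
    pipe.load_lora_weights("user/lora-one", adapter_name="one")  # hypothetical repos
    pipe.load_lora_weights("user/lora-two", adapter_name="two")
    pipe.set_adapters(["one", "two"], adapter_weights=[1.0, 0.6])
    # Per this PR, a name that was never loaded is surfaced loudly
    # (warning/error) instead of being silently ignored.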
Aryan
a0c22997fd
Disable PEFT input autocast when using fp8 layerwise casting ( #10685 )
* disable peft input autocast
* use new peft method name; only disable peft input autocast if submodule layerwise casting active
* add test; reference PeftInputAutocastDisableHook in peft docs
* add load_lora_weights test
* casted -> cast
* Update tests/lora/utils.py
2025-02-13 23:12:54 +05:30
Aryan
beacaa5528
[core] Layerwise Upcasting ( #10347 )
* update
* update
* make style
* remove dynamo disable
* add coauthor
Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
* update
* update
* update
* update mixin
* add some basic tests
* update
* update
* non_blocking
* improvements
* update
* norm.* -> norm
* apply suggestions from review
* add example
* update hook implementation to the latest changes from pyramid attention broadcast
* deinitialize should raise an error
* update doc page
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* update docs
* update
* refactor
* fix _always_upcast_modules for asym ae and vq_model
* fix lumina embedding forward to not depend on weight dtype
* refactor tests
* add simple lora inference tests
* _always_upcast_modules -> _precision_sensitive_module_patterns
* remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
* check layer dtypes in lora test
* fix UNet1DModelTests::test_layerwise_upcasting_inference
* _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
* skip test in NCSNppModelTests
* skip tests for AutoencoderTinyTests
* skip tests for AutoencoderOobleckTests
* skip tests for UNet1DModelTests - unsupported pytorch operations
* layerwise_upcasting -> layerwise_casting
* skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
* add layerwise fp8 pipeline test
* use xfail
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
* add note about memory consumption on tesla CI runner for failing test
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-01-22 19:49:37 +05:30
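The core API added here, following the PR's own docs (the model choice is illustrative):

    import torch
    from diffusers import CogVideoXTransformer3DModel

    transformer = CogVideoXTransformer3DModel.from_pretrained(
        "THUDM/CogVideoX-5b", subfolder="transformer", torch_dtype=torch.bfloat16
    )
    # Store weights in fp8 and upcast each layer to bf16 on the fly during forward.
    transformer.enable_layerwise_casting(
        storage_dtype=torch.float8_e4m3fn, compute_dtype=torch.bfloat16
    )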
Sayak Paul
9f06a0d1a4
[CI] Match remaining assertions from big runner ( #10521 )
* print
* remove print.
* print
* update slice.
* empty
2025-01-10 16:37:36 +05:30
Sayak Paul
a6f043a80f
[LoRA] allow big CUDA tests to run properly for LoRA (and others) ( #9845 )
* allow big lora tests to run on the CI.
* print
* print.
* print
* print
* print
* print
* more
* print
* remove print.
* remove print
* directly place on cuda.
* remove pipeline.
* remove
* fix
* fix
* spaces
* quality
* updates
* directly place flux controlnet pipeline on cuda.
* torch_device instead of cuda.
* style
* device placement.
* fixes
* add big gpu marker for mochi; rename test correctly
* address feedback
* fix
---------
Co-authored-by: Aryan <aryan@huggingface.co>
2025-01-10 12:50:24 +05:30
Aryan
811560b1d7
[LoRA] Support original format loras for HunyuanVideo ( #10376 )
* update
* fix make copies
* update
* add relevant markers to the integration test suite.
* add copied.
* fix-copies
* temporarily add print.
* directly place on CUDA as CPU isn't that big on the CI.
* fixes to fuse_lora, aryan was right.
* fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-01-07 13:18:57 +05:30
Sayak Paul
d9d94e12f3
[LoRA] fix: lora unloading when using expanded Flux LoRAs. ( #10397 )
* fix: lora unloading when using expanded Flux LoRAs.
* fix argument name.
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
* docs.
---------
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
2025-01-06 08:35:05 -10:00
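A hedged sketch of the fixed path: a Control LoRA expands Flux's input projections, and unloading can now restore the original shapes (kwarg name per this PR):

    import torch
    from diffusers import FluxPipeline

    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
    )
    pipe.load_lora_weights("black-forest-labs/FLUX.1-Canny-dev-lora")  # expands input channels
    # Undo the shape expansion along with the LoRA itself.
    pipe.unload_lora_weights(reset_to_overwritten_params=True)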
Sayak Paul
b5726358cf
[Tests] add slow and nightly markers to sd3 lora integration. ( #10458 )
add slow and nightly markers to sd3 lora integration.
2025-01-06 07:29:04 +05:30
maxs-kan
44640c8358
Fix Flux multiple Lora loading bug ( #10388 )
* check for base_layer key in transformer state dict
* test_lora_expansion_works_for_absent_keys
* check
* Update tests/lora/test_lora_layers_flux.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* check
* test_lora_expansion_works_for_absent_keys/test_lora_expansion_works_for_extra_keys
* absent->extra
---------
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-01-02 08:34:48 -10:00
Sayak Paul
1b202c5730
[LoRA] feat: support unload_lora_weights() for Flux Control. ( #10206 )
* feat: support unload_lora_weights() for Flux Control.
* tighten test
* minor
* updates
* meta device fixes.
2024-12-25 17:27:16 +05:30
Aryan
4b557132ce
[core] LTX Video 0.9.1 ( #10330 )
* update
* make style
* update
* update
* update
* make style
* single file related changes
* update
* fix
* update single file urls and docs
* update
* fix
2024-12-23 19:51:33 +05:30
Sayak Paul
851dfa30ae
[Tests] Fix more tests sayak ( #10359 )
* fixes to tests
* fixture
* fixes
2024-12-23 19:11:21 +05:30
Sayak Paul
ea1ba0ba53
[LoRA] test fix ( #10351 )
updates
2024-12-23 15:45:45 +05:30
Sayak Paul
c34fc34563
[Tests] QoL improvements to the LoRA test suite ( #10304 )
* misc lora test improvements.
* updates
* fixes to tests
2024-12-23 13:59:55 +05:30
Sayak Paul
76e2727b5c
[SANA LoRA] sana lora training tests and misc. ( #10296 )
* sana lora training tests and misc.
* remove push to hub
* Update examples/dreambooth/train_dreambooth_lora_sana.py
Co-authored-by: Aryan <aryan@huggingface.co>
---------
Co-authored-by: Aryan <aryan@huggingface.co>
2024-12-23 12:35:13 +05:30
Sayak Paul
bf6eaa8aec
[Tests] add integration tests for lora expansion stuff in Flux. ( #10318 )
add integration tests for lora expansion stuff in Flux.
2024-12-20 16:14:58 +05:30
Sayak Paul
17128c42a4
[LoRA] feat: support loading regular Flux LoRAs into Flux Control, and Fill ( #10259 )
* lora expansion with dummy zeros.
* updates
* fix working 🥳
* working.
* use torch.device meta for state dict expansion.
* tests
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
* fixes
* fixes
* switch to debug
* fix
* Apply suggestions from code review
Co-authored-by: Aryan <aryan@huggingface.co>
* fix stuff
* docs
---------
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
2024-12-20 14:30:32 +05:30
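A hedged sketch: a regular Flux LoRA targets narrower input projections than the Control transformer, so the loader zero-pads the LoRA state dict (expanded on the meta device) to make the shapes line up. The LoRA repo id is a placeholder:

    import torch
    from diffusers import FluxControlPipeline

    pipe = FluxControlPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16
    )
    # A regular (non-Control) Flux LoRA now loads as well; missing input-channel
    # slices are filled with zeros so shapes match.
    pipe.load_lora_weights("user/regular-flux-lora")  # hypothetical repo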
Aryan
d8825e7697
Fix failing lora tests after HunyuanVideo lora ( #10307 )
fix
2024-12-20 02:35:41 +05:30
Shenghai Yuan
1826a1e7d3
[LoRA] Support HunyuanVideo ( #10254 )
* 1217
* 1217
* 1217
* update
* reverse
* add test
* update test
* make style
* update
* make style
---------
Co-authored-by: Aryan <aryan@huggingface.co>
2024-12-19 16:22:20 +05:30
Aryan
f35a38725b
[tests] remove nullop import checks from lora tests ( #10273 )
remove nullop imports
2024-12-19 01:19:08 +05:30
Sayak Paul
9408aa2dfc
[LoRA] feat: lora support for SANA. ( #10234 )
* feat: lora support for SANA.
* make fix-copies
* rename test class.
* attention_kwargs -> cross_attention_kwargs.
* Revert "attention_kwargs -> cross_attention_kwargs."
This reverts commit 23433bf9bc.
* use the full 119-char max line limit
* sana lora fine-tuning script.
* readme
* add a note about the supported models.
* Apply suggestions from code review
Co-authored-by: Aryan <aryan@huggingface.co>
* style
* docs for attention_kwargs.
* remove lora_scale from pag pipeline.
* copy fix
---------
Co-authored-by: Aryan <aryan@huggingface.co>
2024-12-18 08:22:31 +05:30
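A hedged usage sketch; per the PR's docs work, attention_kwargs carries the LoRA scale at call time (the LoRA repo id is a placeholder, and the "scale" key is assumed to follow the other attention_kwargs pipelines):

    import torch
    from diffusers import SanaPipeline

    pipe = SanaPipeline.from_pretrained(
        "Efficient-Large-Model/Sana_1600M_1024px_diffusers", torch_dtype=torch.float16
    ).to("cuda")
    pipe.load_lora_weights("user/some-sana-lora")  # hypothetical repo
    image = pipe(
        "a tiny astronaut hatching from an egg",
        attention_kwargs={"scale": 0.8},  # assumed key for the LoRA strength
    ).images[0]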
Aryan
ac86393487
[LoRA] Support LTX Video ( #10228 )
* add lora support for ltx
* add tests
* fix copied from comments
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-17 12:05:05 +05:30
Aryan
22c4f079b1
Test error raised when loading normal and expanding loras together in Flux ( #10188 )
* add test for expanding lora and normal lora error
* Update tests/lora/test_lora_layers_flux.py
* fix things.
* Update src/diffusers/loaders/peft.py
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-15 21:46:21 +05:30
Sayak Paul
a6a18cff5e
[LoRA] add a test to ensure set_adapters() and attn kwargs outs match ( #10110 )
* add a test to ensure set_adapters() and attn kwargs outs match
* remove print
* fix
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* assertFalse.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-12-12 12:52:50 +05:30
Aryan
49a9143479
Flux Control LoRA ( #9999 )
* update
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-10 09:08:13 -10:00
Sayak Paul
40fc389c44
[Tests] fix condition argument in xfail. ( #10099 )
* fix condition argument in xfail.
* revert init changes.
2024-12-05 10:13:45 +05:30
Sayak Paul
2e86a3f023
[Tests] skip nan lora tests on PyTorch 2.5.1 CPU. ( #9975 )
* skip nan lora tests on PyTorch 2.5.1 CPU.
* cog
* use xfail
* correct xfail
* add condition
* tests
2024-11-22 12:45:21 +05:30
raulmosa
3139d39fa7
Update single-blocks handling in _convert_xlabs_flux_lora_to_diffusers ( #9915 )
* Update single-blocks handling in _convert_xlabs_flux_lora_to_diffusers to fix a bug when updating keys and old_state_dict
---------
Co-authored-by: raul_ar <raul.moreno.salinas@autoretouch.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-11-20 12:53:20 -10:00
Sayak Paul
805aa93789
[LoRA] enable LoRA for Mochi-1 ( #9943 )
* feat: add lora support to Mochi-1.
2024-11-20 12:07:04 -10:00
Sayak Paul
7d0b9c4d4e
[LoRA] feat: save_lora_adapter() ( #9862 )
* feat: save_lora_adapter.
2024-11-18 21:03:38 -10:00
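A minimal sketch of the model-level save path introduced here, assuming an adapter is already attached (the LoRA repo id is hypothetical):

    import torch
    from diffusers import FluxTransformer2DModel

    transformer = FluxTransformer2DModel.from_pretrained(
        "black-forest-labs/FLUX.1-dev", subfolder="transformer", torch_dtype=torch.bfloat16
    )
    transformer.load_lora_adapter("user/some-flux-lora", adapter_name="default")  # hypothetical
    # Serialize the attached adapter in the standard diffusers LoRA layout.
    transformer.save_lora_adapter("./my-flux-lora", adapter_name="default")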
SahilCarterr
08ac5cbc7f
[Fix] Test of sd3 lora ( #9843 )
* fix test
* fix test assert
* fix format
* Update test_lora_layers_sd3.py
2024-11-05 11:05:20 -10:00
Sayak Paul
13e8fdecda
[feat] add load_lora_adapter() for compatible models ( #9712 )
* add first draft.
* fix
* updates.
* updates.
* updates
* updates
* updates.
* fix-copies
* lora constants.
* add tests
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* docstrings.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-11-02 09:50:39 +05:30