Sayak Paul
814d710e56
[tests] cache non-LoRA pipeline outputs. ( #12298 )
* cache non-LoRA pipeline outputs.
* up
* up
* up
* up
* Revert "up"
This reverts commit 772c32e433 .
* up
* Revert "up"
This reverts commit cca03df7fc .
* up
* up
* add .
* up
* up
* up
* up
* up
* up
2025-10-01 09:02:55 +05:30
Sayak Paul
09e777a3e1
[tests] Single scheduler in lora tests ( #12315 )
* single scheduler please.
* up
* up
* up
2025-09-24 08:36:50 +05:30
Dhruv Nair
7aa6af1138
[Refactor] Move testing utils out of src ( #12238 )
* update
* update
* update
* update
* update
* merge main
* Revert "merge main"
This reverts commit 65efbcead5 .
2025-08-28 19:53:02 +05:30
Sayak Paul
7b10e4ae65
[tests] device placement for non-denoiser components in group offloading LoRA tests ( #12103 )
up
2025-08-08 13:34:29 +05:30
Sayak Paul
a8e47978c6
[lora] adapt new LoRA config injection method ( #11999 )
* use state dict when setting up LoRA.
* up
* up
* up
* comment
* up
* up
2025-08-08 09:22:48 +05:30
Aryan
6f3ac3050f
[refactor] some shared parts between hooks + docs ( #11968 )
* update
* try test fix
* add missing link
* fix tests
* Update src/diffusers/hooks/first_block_cache.py
* make style
2025-07-29 07:44:02 +05:30
Sayak Paul
265840a098
[LoRA] fix: disabling hooks when loading loras. ( #11896 )
fix: disabling hooks when loading loras.
2025-07-10 10:30:10 +05:30
Sayak Paul
bc34fa8386
[lora] feat: use exclude modules in LoraConfig. ( #11806 )
* feat: use exclude modules in LoraConfig.
* version-guard.
* tests and version guard.
* remove print.
* describe the test
* more detailed warning message + shift to debug
* update
* update
* update
* remove test
2025-06-30 20:08:53 +05:30
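A rough illustration of how a `target_modules`/`exclude_modules` pair selects layers — a minimal sketch using `fnmatch`, not PEFT's actual matching logic:

```python
from fnmatch import fnmatch

def select_lora_targets(module_names, target_modules, exclude_modules=()):
    """Pick modules that match at least one target pattern and no exclude
    pattern, loosely mirroring how a LoRA config's two lists interact."""
    selected = []
    for name in module_names:
        if not any(fnmatch(name, pat) for pat in target_modules):
            continue  # not a target at all
        if any(fnmatch(name, pat) for pat in exclude_modules):
            continue  # explicitly excluded
        selected.append(name)
    return selected
```

Excludes win over targets, which is what makes the option useful for carving exceptions out of broad glob patterns.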
Sayak Paul
05e7a854d0
[lora] fix: lora unloading behaviour ( #11822 )
* fix: lora unloading behaviour
* fix
* update
2025-06-28 12:00:42 +05:30
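The unloading contract being fixed here — unloading must leave no adapter state behind so the pipeline behaves like the base model again — can be sketched with a toy registry (hypothetical class, not the diffusers implementation):

```python
class LoraRegistry:
    """Toy bookkeeping for loaded adapters: unload_all() must remove *all*
    trace of every adapter, active or not."""

    def __init__(self):
        self.adapters = {}  # name -> state dict
        self.active = []    # names currently applied

    def load(self, name, state_dict):
        self.adapters[name] = state_dict
        self.active.append(name)

    def unload_all(self):
        # Clear both the stored weights and the active list; forgetting
        # either is exactly the kind of bug a behaviour fix targets.
        self.adapters.clear()
        self.active.clear()
```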
Aryan
76ec3d1fee
Support dynamically loading/unloading loras with group offloading ( #11804 )
* update
* add test
* address review comments
* update
* fixes
* change decorator order to fix tests
* try fix
* fight tests
2025-06-27 23:20:53 +05:30
Sayak Paul
fb57c76aa1
[LoRA] refactor lora loading at the model-level ( #11719 )
* factor out stuff from load_lora_adapter().
* simplifying text encoder lora loading.
* fix peft.py
* fix logging locations.
* formatting
* fix
* update
* update
* update
2025-06-19 13:06:25 +05:30
Sayak Paul
62cce3045d
[chore] change to 2025 licensing for remaining ( #11741 )
change to 2025 licensing for remaining
2025-06-18 20:56:00 +05:30
Sayak Paul
368958df6f
[LoRA] parse metadata from LoRA and save metadata ( #11324 )
* feat: parse metadata from lora state dicts.
* tests
* fix tests
* key renaming
* fix
* smol update
* smol updates
* load metadata.
* automatically save metadata in save_lora_adapter.
* propagate changes.
* changes
* add test to models too.
* tighter tests.
* updates
* fixes
* rename tests.
* sorted.
* Update src/diffusers/loaders/lora_base.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* review suggestions.
* removeprefix.
* propagate changes.
* fix-copies
* sd
* docs.
* fixes
* get review ready.
* one more test to catch error.
* change to a different approach.
* fix-copies.
* todo
* sd3
* update
* revert changes in get_peft_kwargs.
* update
* fixes
* fixes
* simplify _load_sft_state_dict_metadata
* update
* style fix
* update
* update
* update
* empty commit
* _pack_dict_with_prefix
* update
* TODO 1.
* todo: 2.
* todo: 3.
* update
* update
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* reraise.
* move argument.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com >
2025-06-13 14:37:49 +05:30
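The `_pack_dict_with_prefix` helper named in the bullets above plausibly namespaces metadata keys before they are saved alongside the LoRA weights; a minimal sketch (assumed behavior, not the exact implementation):

```python
def pack_dict_with_prefix(metadata, prefix):
    """Prefix every key so e.g. LoRA config metadata can live next to the
    weight keys in one flat dict without collisions.

    Sketch only: the real helper and its key scheme may differ.
    """
    return {f"{prefix}.{key}": value for key, value in metadata.items()}
```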
co63oc
8183d0f16e
Fix typos in strings and comments ( #11476 )
* Fix typos in strings and comments
Signed-off-by: co63oc <co63oc@users.noreply.github.com >
* Update src/diffusers/hooks/hooks.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com >
* Update src/diffusers/hooks/hooks.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com >
* Update layerwise_casting.py
* Apply style fixes
* update
---------
Signed-off-by: co63oc <co63oc@users.noreply.github.com >
Co-authored-by: Aryan <contact.aryanvs@gmail.com >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-30 18:49:00 +05:30
Sayak Paul
a4da216125
[LoRA] improve LoRA fusion tests ( #11274 )
* improve lora fusion tests
* more improvements.
* remove comment
* update
* relax tolerance.
* num_fused_loras as a property
Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com >
* updates
* update
* fix
* fix
Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com >
* Update src/diffusers/loaders/lora_base.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
---------
Co-authored-by: BenjaminBossan <benjamin.bossan@gmail.com >
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2025-05-27 09:02:12 -07:00
Sayak Paul
a5f4cc7f84
[LoRA] minor fix for load_lora_weights() for Flux and a test ( #11595 )
* fix peft delete adapters for flux.
* add test
* empty commit
2025-05-22 15:44:45 +05:30
Hameer Abbasi
9352a5ca56
[LoRA] Add LoRA support to AuraFlow ( #10216 )
* Add AuraFlowLoraLoaderMixin
* Add comments, remove qkv fusion
* Add Tests
* Add AuraFlowLoraLoaderMixin to documentation
* Add Suggested changes
* Change attention_kwargs->joint_attention_kwargs
* Rebasing derp.
* fix
* fix
* Quality fixes.
* make style
* `make fix-copies`
* `ruff check --fix`
* Attempt 1 to fix tests.
* Attempt 2 to fix tests.
* Attempt 3 to fix tests.
* Address review comments.
* Rebasing derp.
* Get more tests passing by copying from Flux. Address review comments.
* `joint_attention_kwargs`->`attention_kwargs`
* Add `lora_scale` property for TE LoRAs.
* Make test better.
* Remove useless property.
* Skip TE-only tests for AuraFlow.
* Support LoRA for non-CLIP TEs.
* Restore LoRA tests.
* Undo adding LoRA support for non-CLIP TEs.
* Undo support for TE in AuraFlow LoRA.
* `make fix-copies`
* Sync with upstream changes.
* Remove unneeded stuff.
* Mirror `Lumina2`.
* Skip for MPS.
* Address review comments.
* Remove duplicated code.
* Remove unnecessary code.
* Remove repeated docs.
* Propagate attention.
* Fix TE target modules.
* MPS fix for LoRA tests.
* Unrelated TE LoRA tests fix.
* Fix AuraFlow LoRA tests by applying to the right denoiser layers.
Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com >
* Apply style fixes
* empty commit
* Fix the repo consistency issues.
* Remove unrelated changes.
* Style.
* Fix `test_lora_fuse_nan`.
* fix quality issues.
* `pytest.xfail` -> `ValueError`.
* Add back `skip_mps`.
* Apply style fixes
* `make fix-copies`
---------
Co-authored-by: Warlord-K <warlordk28@gmail.com >
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-15 10:41:28 +05:30
Sayak Paul
ea5a6a8b7c
[Tests] Cleanup lora tests utils ( #11276 )
* start cleaning up lora test utils for reusability
* update
* updates
* updates
2025-04-10 15:50:34 +05:30
Sayak Paul
20e4b6a628
[LoRA] change to warning from info when notifying the users about a LoRA no-op ( #11044 )
* move to warning.
* test related changes.
2025-03-12 21:20:48 +05:30
Sayak Paul
26149c0ecd
[LoRA] Improve warning messages when LoRA loading becomes a no-op ( #10187 )
* updates
* updates
* updates
* updates
* notebooks revert
* fix-copies.
* seeing
* fix
* revert
* fixes
* fixes
* fixes
* remove print
* fix
* conflicts ii.
* updates
* fixes
* better filtering of prefix.
---------
Co-authored-by: hlky <hlky@hlky.ac >
2025-03-10 09:28:32 +05:30
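The no-op warning pattern this PR improves — "better filtering of prefix" — can be sketched as prefix filtering that warns when nothing matches (hypothetical helper, not the loader's real code path):

```python
import warnings

def filter_state_dict(state_dict, prefix):
    """Keep only keys under `prefix` (with the prefix stripped). Warn when
    nothing matches, so a LoRA load that would silently be a no-op surfaces
    to the user instead."""
    filtered = {
        key[len(prefix):]: value
        for key, value in state_dict.items()
        if key.startswith(prefix)
    }
    if not filtered:
        warnings.warn(f"No LoRA keys found for prefix {prefix!r}; loading is a no-op.")
    return filtered
```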
Aryan
3ee899fa0c
[LoRA] Support Wan ( #10943 )
* update
* refactor image-to-video pipeline
* update
* fix copied from
* use FP32LayerNorm
2025-03-05 01:27:34 +05:30
Sayak Paul
6fe05b9b93
[LoRA] make set_adapters() robust on silent failures. ( #9618 )
* make set_adapters() robust on silent failures.
* fixes to tests
* flaky decorator.
* fix
* flaky to sd3.
* remove warning.
* sort
* quality
* skip test_simple_inference_with_text_denoiser_multi_adapter_block_lora
* skip testing unsupported features.
* raise warning instead of error.
2025-02-19 14:33:57 +05:30
Aryan
a0c22997fd
Disable PEFT input autocast when using fp8 layerwise casting ( #10685 )
* disable peft input autocast
* use new peft method name; only disable peft input autocast if submodule layerwise casting active
* add test; reference PeftInputAutocastDisableHook in peft docs
* add load_lora_weights test
* casted -> cast
* Update tests/lora/utils.py
2025-02-13 23:12:54 +05:30
Aryan
beacaa5528
[core] Layerwise Upcasting ( #10347 )
* update
* update
* make style
* remove dynamo disable
* add coauthor
Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com >
* update
* update
* update
* update mixin
* add some basic tests
* update
* update
* non_blocking
* improvements
* update
* norm.* -> norm
* apply suggestions from review
* add example
* update hook implementation to the latest changes from pyramid attention broadcast
* deinitialize should raise an error
* update doc page
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update docs
* update
* refactor
* fix _always_upcast_modules for asym ae and vq_model
* fix lumina embedding forward to not depend on weight dtype
* refactor tests
* add simple lora inference tests
* _always_upcast_modules -> _precision_sensitive_module_patterns
* remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
* check layer dtypes in lora test
* fix UNet1DModelTests::test_layerwise_upcasting_inference
* _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
* skip test in NCSNppModelTests
* skip tests for AutoencoderTinyTests
* skip tests for AutoencoderOobleckTests
* skip tests for UNet1DModelTests - unsupported pytorch operations
* layerwise_upcasting -> layerwise_casting
* skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
* add layerwise fp8 pipeline test
* use xfail
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
* add note about memory consumption on tesla CI runner for failing test
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-01-22 19:49:37 +05:30
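The `_skip_layerwise_casting_patterns` idea from the bullets above — precision-sensitive modules such as norms opt out of fp8 casting — reduces to a name-pattern check. A sketch with substring patterns (assumption: the real code may match names differently):

```python
def should_cast(module_name, skip_patterns):
    """Decide whether a module participates in layerwise low-precision
    casting; modules matching any skip pattern keep their original dtype."""
    return not any(pattern in module_name for pattern in skip_patterns)
```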
Sayak Paul
851dfa30ae
[Tests] Fix more tests sayak ( #10359 )
* fixes to tests
* fixture
* fixes
2024-12-23 19:11:21 +05:30
Sayak Paul
ea1ba0ba53
[LoRA] test fix ( #10351 )
updates
2024-12-23 15:45:45 +05:30
Sayak Paul
c34fc34563
[Tests] QoL improvements to the LoRA test suite ( #10304 )
* misc lora test improvements.
* updates
* fixes to tests
2024-12-23 13:59:55 +05:30
Aryan
d8825e7697
Fix failing lora tests after HunyuanVideo lora ( #10307 )
fix
2024-12-20 02:35:41 +05:30
Shenghai Yuan
1826a1e7d3
[LoRA] Support HunyuanVideo ( #10254 )
* 1217
* 1217
* 1217
* update
* reverse
* add test
* update test
* make style
* update
* make style
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2024-12-19 16:22:20 +05:30
Sayak Paul
9408aa2dfc
[LoRA] feat: lora support for SANA. ( #10234 )
* feat: lora support for SANA.
* make fix-copies
* rename test class.
* attention_kwargs -> cross_attention_kwargs.
* Revert "attention_kwargs -> cross_attention_kwargs."
This reverts commit 23433bf9bc .
* exhaust 119 max line limit
* sana lora fine-tuning script.
* readme
* add a note about the supported models.
* Apply suggestions from code review
Co-authored-by: Aryan <aryan@huggingface.co >
* style
* docs for attention_kwargs.
* remove lora_scale from pag pipeline.
* copy fix
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2024-12-18 08:22:31 +05:30
Sayak Paul
a6a18cff5e
[LoRA] add a test to ensure set_adapters() and attn kwargs outs match ( #10110 )
* add a test to ensure set_adapters() and attn kwargs outs match
* remove print
* fix
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* assertFalse.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2024-12-12 12:52:50 +05:30
Sayak Paul
40fc389c44
[Tests] fix condition argument in xfail. ( #10099 )
* fix condition argument in xfail.
* revert init changes.
2024-12-05 10:13:45 +05:30
Sayak Paul
2e86a3f023
[Tests] skip nan lora tests on PyTorch 2.5.1 CPU. ( #9975 )
* skip nan lora tests on PyTorch 2.5.1 CPU.
* cog
* use xfail
* correct xfail
* add condition
* tests
2024-11-22 12:45:21 +05:30
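For a condition like "PyTorch 2.5.1 on CPU", `pytest.mark.xfail` needs an actual boolean; a tiny version-parsing helper to build one (sketch only — real suites typically use `packaging.version` instead):

```python
def parse_version(v):
    """Parse '2.5.1' or '2.5.1+cpu' into a comparable tuple, so a marker like
    pytest.mark.xfail(parse_version(torch.__version__) >= (2, 5), reason="...",
    strict=False) receives a real boolean condition."""
    return tuple(int(part) for part in v.split("+")[0].split(".")[:3])
```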
Sayak Paul
7d0b9c4d4e
[LoRA] feat: save_lora_adapter() ( #9862 )
* feat: save_lora_adapter.
2024-11-18 21:03:38 -10:00
Sayak Paul
13e8fdecda
[feat] add load_lora_adapter() for compatible models ( #9712 )
* add first draft.
* fix
* updates.
* updates.
* updates
* updates
* updates.
* fix-copies
* lora constants.
* add tests
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* docstrings.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2024-11-02 09:50:39 +05:30
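A pipeline-level loader has to route a combined LoRA state dict to each model's `load_lora_adapter()`; the prefix routing can be sketched as (hypothetical helper, not the actual loader):

```python
def split_by_component(state_dict, components=("unet", "text_encoder", "transformer")):
    """Split a combined LoRA state dict into per-model sub-dicts keyed by the
    leading component prefix; keys for unknown components are dropped."""
    out = {component: {} for component in components}
    for key, value in state_dict.items():
        component, _, rest = key.partition(".")
        if component in out:
            out[component][rest] = value
    return out
```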
Sayak Paul
cef4f65cf7
[LoRA] log a warning when there are missing keys in the LoRA loading. ( #9622 )
* log a warning when there are missing keys in the LoRA loading.
* handle missing keys and unexpected keys better.
* add tests
* fix-copies.
* updates
* tests
* concat warning.
* Add Differential Diffusion to Kolors (#9423 )
* Added diff diff support for kolors img2img
* Fixed relative imports
* Fixed relative imports
* Added diff diff support for Kolors
* Fixed import issues
* Added map
* Fixed import issues
* Fixed naming issues
* Added diffdiff support for Kolors img2img pipeline
* Removed example docstrings
* Added map input
* Updated latents
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com >
* Updated `original_with_noise`
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com >
* Improved code quality
---------
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com >
* FluxMultiControlNetModel (#9647 )
* tests
* Update src/diffusers/loaders/lora_pipeline.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* fix
---------
Co-authored-by: M Saqlain <118016760+saqlain2204@users.noreply.github.com >
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com >
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-10-16 07:46:12 +05:30
Sayak Paul
86bcbc389e
[Tests] increase transformers version in test_low_cpu_mem_usage_with_loading ( #9662 )
increase transformers version in test_low_cpu_mem_usage_with_loading
2024-10-13 22:39:38 +05:30
Sayak Paul
31058cdaef
[LoRA] allow loras to be loaded with low_cpu_mem_usage. ( #9510 )
* allow loras to be loaded with low_cpu_mem_usage.
* add flux support but note https://github.com/huggingface/diffusers/pull/9510#issuecomment-2378316687
* low_cpu_mem_usage.
* fix-copies
* fix-copies again
* tests
* _LOW_CPU_MEM_USAGE_DEFAULT_LORA
* _peft_version default.
* version checks.
* version check.
* version check.
* version check.
* require peft 0.13.1.
* explicitly specify low_cpu_mem_usage=False.
* docs.
* transformers version 4.45.2.
* update
* fix
* empty
* better name initialize_dummy_state_dict.
* doc todos.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* style
* fix-copies
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2024-10-09 10:57:16 +05:30
Sayak Paul
81cf3b2f15
[Tests] [LoRA] clean up the serialization stuff. ( #9512 )
* clean up the serialization stuff.
* better
2024-09-27 07:57:09 -10:00
Sayak Paul
2daedc0ad3
[LoRA] make set_adapters() method more robust. ( #9535 )
* make set_adapters() method more robust.
* remove patch
* better and concise code.
* Update src/diffusers/loaders/lora_base.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-09-27 07:32:43 +05:30
Aryan
e5d0a328d6
[refactor] LoRA tests ( #9481 )
* refactor scheduler class usage
* reorder to make tests more readable
* remove pipeline specific checks and skip tests directly
* rewrite denoiser conditions cleaner
* bump tolerance for cog test
2024-09-21 07:10:36 +05:30
Aryan
2b443a5d62
[training] CogVideoX Lora ( #9302 )
* cogvideox lora training draft
* update
* update
* update
* update
* update
* make fix-copies
* update
* update
* apply suggestions from review
* apply suggestions from review
* fix typo
* Update examples/cogvideo/train_cogvideox_lora.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* fix lora alpha
* use correct lora scaling for final test pipeline
* Update examples/cogvideo/train_cogvideox_lora.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* apply suggestions from review; prodigy optimizer
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* add tests
* make style
* add README
* update
* update
* make style
* fix
* update
* add test skeleton
* revert lora utils changes
* add cleaner modifications to lora testing utils
* update lora tests
* deepspeed stuff
* add requirements.txt
* deepspeed refactor
* add lora stuff to img2vid pipeline to fix tests
* fight tests
* add co-authors
Co-Authored-By: Fu-Yun Wang <1697256461@qq.com >
Co-Authored-By: zR <2448370773@qq.com >
* fight lora runner tests
* import Dummy optim and scheduler only when required
* update docs
* add coauthors
Co-Authored-By: Fu-Yun Wang <1697256461@qq.com >
* remove option to train text encoder
Co-Authored-By: bghira <bghira@users.github.com >
* update tests
* fight more tests
* update
* fix vid2vid
* fix typo
* remove lora tests; todo in follow-up PR
* undo img2vid changes
* remove text encoder related changes in lora loader mixin
* Revert "remove text encoder related changes in lora loader mixin"
This reverts commit f8a8444487 .
* update
* round 1 of fighting tests
* round 2 of fighting tests
* fix copied from comment
* fix typo in lora test
* update styling
Co-Authored-By: YiYi Xu <yixu310@gmail.com >
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: zR <2448370773@qq.com >
Co-authored-by: Fu-Yun Wang <1697256461@qq.com >
Co-authored-by: bghira <bghira@users.github.com >
2024-09-19 14:37:57 +05:30
Sayak Paul
fc6a91e383
[FLUX] support LoRA ( #9057 )
* feat: lora support for Flux.
add tests
fix imports
major fixes.
* fix
fixes
final fixes?
* fix
* remove is_peft_available.
2024-08-05 10:24:05 +05:30
Sayak Paul
d87fe95f90
[Chore] add LoraLoaderMixin to the inits ( #8981 )
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
* loraloadermixin.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2024-07-26 08:59:33 +05:30
YiYi Xu
62863bb1ea
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." ( #8976 )
Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774 )"
This reverts commit 527430d0a4 .
2024-07-25 09:10:35 -10:00
Sayak Paul
527430d0a4
[LoRA] introduce LoraBaseMixin to promote reusability. ( #8774 )
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2024-07-25 21:40:58 +05:30
Sayak Paul
984d340534
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." ( #8773 )
Revert "[LoRA] introduce `LoraBaseMixin` to promote reusability. (#8670 )"
This reverts commit a2071a1837 .
2024-07-03 07:05:01 +05:30
Sayak Paul
a2071a1837
[LoRA] introduce LoraBaseMixin to promote reusability. ( #8670 )
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
2024-07-03 07:04:37 +05:30
Gæros
298ce67999
[LoRA] text encoder: read the ranks for all the attn modules ( #8324 )
* [LoRA] text encoder: read the ranks for all the attn modules
* In addition to out_proj, read the ranks of adapters for q_proj, k_proj, and v_proj
* Allow missing adapters (UNet already supports this)
* ruff format loaders.lora
* [LoRA] add tests for partial text encoders LoRAs
* [LoRA] update test_simple_inference_with_partial_text_lora to be deterministic
* [LoRA] comment justifying test_simple_inference_with_partial_text_lora
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-06-18 21:10:50 +01:00
Tolga Cangöz
a2ecce26bc
Fix Copying Mechanism typo/bug ( #8232 )
* Fix copying mechanism typos
* fix copying mechanism
* Revert, since they are in TODO
* Fix copying mechanism
2024-05-29 09:37:18 -07:00