Sayak Paul
851dfa30ae
[Tests] Fix more tests sayak ( #10359 )
* fixes to tests
* fixture
* fixes
2024-12-23 19:11:21 +05:30
Sayak Paul
ea1ba0ba53
[LoRA] test fix ( #10351 )
updates
2024-12-23 15:45:45 +05:30
Sayak Paul
c34fc34563
[Tests] QoL improvements to the LoRA test suite ( #10304 )
* misc lora test improvements.
* updates
* fixes to tests
2024-12-23 13:59:55 +05:30
Aryan
d8825e7697
Fix failing lora tests after HunyuanVideo lora ( #10307 )
fix
2024-12-20 02:35:41 +05:30
Shenghai Yuan
1826a1e7d3
[LoRA] Support HunyuanVideo ( #10254 )
* 1217
* 1217
* 1217
* update
* reverse
* add test
* update test
* make style
* update
* make style
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2024-12-19 16:22:20 +05:30
Sayak Paul
9408aa2dfc
[LoRA] feat: lora support for SANA. ( #10234 )
* feat: lora support for SANA.
* make fix-copies
* rename test class.
* attention_kwargs -> cross_attention_kwargs.
* Revert "attention_kwargs -> cross_attention_kwargs."
This reverts commit 23433bf9bc .
* exhaust 119 max line limit
* sana lora fine-tuning script.
* readme
* add a note about the supported models.
* Apply suggestions from code review
Co-authored-by: Aryan <aryan@huggingface.co >
* style
* docs for attention_kwargs.
* remove lora_scale from pag pipeline.
* copy fix
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2024-12-18 08:22:31 +05:30
Sayak Paul
a6a18cff5e
[LoRA] add a test to ensure set_adapters() and attn kwargs outs match ( #10110 )
* add a test to ensure set_adapters() and attn kwargs outs match
* remove print
* fix
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* assertFalse.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2024-12-12 12:52:50 +05:30
Sayak Paul
40fc389c44
[Tests] fix condition argument in xfail. ( #10099 )
* fix condition argument in xfail.
* revert init changes.
2024-12-05 10:13:45 +05:30
Sayak Paul
2e86a3f023
[Tests] skip nan lora tests on PyTorch 2.5.1 CPU. ( #9975 )
* skip nan lora tests on PyTorch 2.5.1 CPU.
* cog
* use xfail
* correct xfail
* add condition
* tests
2024-11-22 12:45:21 +05:30
Sayak Paul
7d0b9c4d4e
[LoRA] feat: save_lora_adapter() ( #9862 )
* feat: save_lora_adapter.
2024-11-18 21:03:38 -10:00
Sayak Paul
13e8fdecda
[feat] add load_lora_adapter() for compatible models ( #9712 )
* add first draft.
* fix
* updates.
* updates.
* updates
* updates
* updates.
* fix-copies
* lora constants.
* add tests
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* docstrings.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2024-11-02 09:50:39 +05:30
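The two entries above add model-level adapter I/O: load_lora_adapter() (#9712) and save_lora_adapter() (#9862). A minimal, hypothetical sketch of how they might be combined; the base checkpoint, LoRA path, and adapter name are placeholders, not taken from these commits:

```python
import torch
from diffusers import SD3Transformer2DModel

# Placeholder base checkpoint; any diffusers model exposing the PEFT adapter mixin should work.
transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    subfolder="transformer",
    torch_dtype=torch.float16,
)

# Model-level LoRA loading (load_lora_adapter(), #9712); path and adapter name are illustrative.
transformer.load_lora_adapter("path/to/lora_dir", adapter_name="my_lora")

# Serialize only the adapter parameters back to disk (save_lora_adapter(), #9862).
transformer.save_lora_adapter("path/to/output_dir", adapter_name="my_lora")
```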
Sayak Paul
cef4f65cf7
[LoRA] log a warning when there are missing keys in the LoRA loading. ( #9622 )
* log a warning when there are missing keys in the LoRA loading.
* handle missing keys and unexpected keys better.
* add tests
* fix-copies.
* updates
* tests
* concat warning.
* Add Differential Diffusion to Kolors (#9423 )
* Added diff diff support for kolors img2img
* Fixed relative imports
* Fixed relative imports
* Added diff diff support for Kolors
* Fixed import issues
* Added map
* Fixed import issues
* Fixed naming issues
* Added diffdiff support for Kolors img2img pipeline
* Removed example docstrings
* Added map input
* Updated latents
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com >
* Updated `original_with_noise`
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com >
* Improved code quality
---------
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com >
* FluxMultiControlNetModel (#9647 )
* tests
* Update src/diffusers/loaders/lora_pipeline.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* fix
---------
Co-authored-by: M Saqlain <118016760+saqlain2204@users.noreply.github.com >
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com >
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-10-16 07:46:12 +05:30
Sayak Paul
86bcbc389e
[Tests] increase transformers version in test_low_cpu_mem_usage_with_loading ( #9662 )
increase transformers version in test_low_cpu_mem_usage_with_loading
2024-10-13 22:39:38 +05:30
Sayak Paul
31058cdaef
[LoRA] allow loras to be loaded with low_cpu_mem_usage. ( #9510 )
* allow loras to be loaded with low_cpu_mem_usage.
* add flux support but note https://github.com/huggingface/diffusers/pull/9510#issuecomment-2378316687
* low_cpu_mem_usage.
* fix-copies
* fix-copies again
* tests
* _LOW_CPU_MEM_USAGE_DEFAULT_LORA
* _peft_version default.
* version checks.
* version check.
* version check.
* version check.
* require peft 0.13.1.
* explicitly specify low_cpu_mem_usage=False.
* docs.
* transformers version 4.45.2.
* update
* fix
* empty
* better name initialize_dummy_state_dict.
* doc todos.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* style
* fix-copies
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2024-10-09 10:57:16 +05:30
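For the low_cpu_mem_usage entry above (#9510), a minimal usage sketch; the pipeline and LoRA repo ids are placeholders, and per the commit notes this path requires peft >= 0.13.1:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# Load LoRA weights with reduced peak CPU memory; the LoRA repo id is a placeholder.
pipe.load_lora_weights("some-user/some-sdxl-lora", low_cpu_mem_usage=True)
```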
Sayak Paul
81cf3b2f15
[Tests] [LoRA] clean up the serialization stuff. ( #9512 )
* clean up the serialization stuff.
* better
2024-09-27 07:57:09 -10:00
Sayak Paul
2daedc0ad3
[LoRA] make set_adapters() method more robust. ( #9535 )
* make set_adapters() method more robust.
* remove patch
* better and concise code.
* Update src/diffusers/loaders/lora_base.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-09-27 07:32:43 +05:30
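The entry above hardens set_adapters(); a short sketch of the call it refers to, with placeholder LoRA repos and adapter names:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# Placeholder LoRA repos; each load registers a named adapter.
pipe.load_lora_weights("some-user/lora-style", adapter_name="style")
pipe.load_lora_weights("some-user/lora-detail", adapter_name="detail")

# Activate both adapters with per-adapter weights.
pipe.set_adapters(["style", "detail"], adapter_weights=[0.8, 0.4])
```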
Aryan
e5d0a328d6
[refactor] LoRA tests ( #9481 )
* refactor scheduler class usage
* reorder to make tests more readable
* remove pipeline specific checks and skip tests directly
* rewrite denoiser conditions cleaner
* bump tolerance for cog test
2024-09-21 07:10:36 +05:30
Aryan
2b443a5d62
[training] CogVideoX Lora ( #9302 )
* cogvideox lora training draft
* update
* update
* update
* update
* update
* make fix-copies
* update
* update
* apply suggestions from review
* apply suggestions from review
* fix typo
* Update examples/cogvideo/train_cogvideox_lora.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* fix lora alpha
* use correct lora scaling for final test pipeline
* Update examples/cogvideo/train_cogvideox_lora.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* apply suggestions from review; prodigy optimizer
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* add tests
* make style
* add README
* update
* update
* make style
* fix
* update
* add test skeleton
* revert lora utils changes
* add cleaner modifications to lora testing utils
* update lora tests
* deepspeed stuff
* add requirements.txt
* deepspeed refactor
* add lora stuff to img2vid pipeline to fix tests
* fight tests
* add co-authors
Co-Authored-By: Fu-Yun Wang <1697256461@qq.com >
Co-Authored-By: zR <2448370773@qq.com >
* fight lora runner tests
* import Dummy optim and scheduler only when required
* update docs
* add coauthors
Co-Authored-By: Fu-Yun Wang <1697256461@qq.com >
* remove option to train text encoder
Co-Authored-By: bghira <bghira@users.github.com >
* update tests
* fight more tests
* update
* fix vid2vid
* fix typo
* remove lora tests; todo in follow-up PR
* undo img2vid changes
* remove text encoder related changes in lora loader mixin
* Revert "remove text encoder related changes in lora loader mixin"
This reverts commit f8a8444487 .
* update
* round 1 of fighting tests
* round 2 of fighting tests
* fix copied from comment
* fix typo in lora test
* update styling
Co-Authored-By: YiYi Xu <yixu310@gmail.com >
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: zR <2448370773@qq.com >
Co-authored-by: Fu-Yun Wang <1697256461@qq.com >
Co-authored-by: bghira <bghira@users.github.com >
2024-09-19 14:37:57 +05:30
Sayak Paul
fc6a91e383
[FLUX] support LoRA ( #9057 )
* feat: lora support for Flux.
add tests
fix imports
major fixes.
* fix
fixes
final fixes?
* fix
* remove is_peft_available.
2024-08-05 10:24:05 +05:30
Sayak Paul
d87fe95f90
[Chore] add LoraLoaderMixin to the inits ( #8981 )
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
* loraloadermixin.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2024-07-26 08:59:33 +05:30
YiYi Xu
62863bb1ea
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." ( #8976 )
Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774 )"
This reverts commit 527430d0a4 .
2024-07-25 09:10:35 -10:00
Sayak Paul
527430d0a4
[LoRA] introduce LoraBaseMixin to promote reusability. ( #8774 )
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2024-07-25 21:40:58 +05:30
Sayak Paul
984d340534
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." ( #8773 )
Revert "[LoRA] introduce `LoraBaseMixin` to promote reusability. (#8670 )"
This reverts commit a2071a1837 .
2024-07-03 07:05:01 +05:30
Sayak Paul
a2071a1837
[LoRA] introduce LoraBaseMixin to promote reusability. ( #8670 )
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
2024-07-03 07:04:37 +05:30
Gæros
298ce67999
[LoRA] text encoder: read the ranks for all the attn modules ( #8324 )
* [LoRA] text encoder: read the ranks for all the attn modules
* In addition to out_proj, read the ranks of adapters for q_proj, k_proj, and v_proj
* Allow missing adapters (UNet already supports this)
* ruff format loaders.lora
* [LoRA] add tests for partial text encoders LoRAs
* [LoRA] update test_simple_inference_with_partial_text_lora to be deterministic
* [LoRA] comment justifying test_simple_inference_with_partial_text_lora
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-06-18 21:10:50 +01:00
Tolga Cangöz
a2ecce26bc
Fix Copying Mechanism typo/bug ( #8232 )
* Fix copying mechanism typos
* fix copying mecha
* Revert, since they are in TODO
* Fix copying mechanism
2024-05-29 09:37:18 -07:00
UmerHA
0302446819
Implements Blockwise lora ( #7352 )
* Initial commit
* Implemented block lora
- implemented block lora
- updated docs
- added tests
* Finishing up
* Reverted unrelated changes made by make style
* Fixed typo
* Fixed bug + Made text_encoder_2 scalable
* Integrated some review feedback
* Incorporated review feedback
* Fix tests
* Made every module configurable
* Adapted to new lora test structure
* Final cleanup
* Some more final fixes
- Included examples in `using_peft_for_inference.md`
- Added hint that only attns are scaled
- Removed NoneTypes
- Added test to check mismatching lens of adapter names / weights raise error
* Update using_peft_for_inference.md
* Update using_peft_for_inference.md
* Make style, quality, fix-copies
* Updated tutorial;Warning if scale/adapter mismatch
* floats are forwarded as-is; changed tutorial scale
* make style, quality, fix-copies
* Fixed typo in tutorial
* Moved some warnings into `lora_loader_utils.py`
* Moved scale/lora mismatch warnings back
* Integrated final review suggestions
* Empty commit to trigger CI
* Reverted empty commit to trigger CI
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-29 21:15:57 +05:30
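The blockwise LoRA entry above (#7352) extends set_adapters() so a scale can be a nested dictionary rather than a single float. A sketch following the pattern described in using_peft_for_inference.md; the LoRA repo id and scale values are placeholders:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("some-user/some-sdxl-lora", adapter_name="style")  # placeholder

# Only attention layers are scaled (as the PR notes); unspecified parts keep their default scale.
scales = {
    "text_encoder": 0.5,
    "text_encoder_2": 0.5,
    "unet": {
        "down": 0.9,                      # one scale for all down blocks
        "up": {
            "block_0": 0.6,               # one scale for this up block
            "block_1": [0.4, 0.8, 1.0],   # per-transformer scales inside the block
        },
    },
}
pipe.set_adapters("style", scales)
```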
UmerHA
0b8e29289d
Skip test_lora_fuse_nan on mps ( #7481 )
Skipping test_lora_fuse_nan on mps
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-27 14:35:59 +05:30
Sayak Paul
699dfb084c
feat: support DoRA LoRA from community ( #7371 )
* feat: support dora loras from community
* safe-guard dora operations under peft version.
* pop use_dora when False
* make dora lora from kohya work.
* fix: kohya conversion utils.
* add a fast test for DoRA compatibility..
* add a nightly test.
2024-03-26 09:37:33 +05:30
UmerHA
1cd4732e7f
Fixed minor error in test_lora_layers_peft.py ( #7394 )
* Update test_lora_layers_peft.py
* Update utils.py
2024-03-25 11:35:27 -10:00
Sayak Paul
e25e525fde
[LoRA test suite] refactor the test suite and cleanse it ( #7316 )
* cleanse and refactor lora testing suite.
* more cleanup.
* make check_if_lora_correctly_set a utility function
* fix: typo
* retrigger ci
* style
2024-03-20 17:13:52 +05:30