Aryan
7c843949f6
fight more tests
2024-09-17 22:27:10 +02:00
Aryan
0e1c569c58
update tests
2024-09-17 21:52:33 +02:00
Aryan
0aa8f3ad20
fight tests
2024-09-17 02:13:51 +02:00
Aryan
ca9d9a125d
add cleaner modifications to lora testing utils
2024-09-15 22:38:48 +02:00
Aryan
19d12f55e7
revert lora utils changes
2024-09-15 22:33:47 +02:00
Aryan
200f63a21d
make style
2024-09-14 04:14:02 +02:00
Aryan
f1f9e81171
add tests
2024-09-14 04:13:37 +02:00
Dhruv Nair
1e8cf2763d
[CI] Nightly Test Updates ( #9380 )
...
* update
* update
* update
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-09-12 20:21:28 +05:30
Dhruv Nair
f6f16a0c11
[CI] More Fast GPU Test Fixes ( #9346 )
...
* update
* update
* update
* update
2024-09-03 13:22:38 +05:30
Sayak Paul
5090b09d48
[Flux LoRA] support parsing alpha from a flux lora state dict. ( #9236 )
...
* support parsing alpha from a flux lora state dict.
* conditional import.
* fix breaking changes.
* safeguard alpha.
* fix
2024-08-22 07:01:52 +05:30
Dhruv Nair
940b8e0358
[CI] Multiple Slow Test fixes. ( #9198 )
...
* update
* update
* update
* update
2024-08-19 13:31:09 +05:30
Sayak Paul
fc6a91e383
[FLUX] support LoRA ( #9057 )
...
* feat: lora support for Flux.
add tests
fix imports
major fixes.
* fix
fixes
final fixes?
* fix
* remove is_peft_available.
2024-08-05 10:24:05 +05:30
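For context, a minimal sketch of what this commit enables at the pipeline level. `FluxPipeline` and `load_lora_weights` are the real entry points; the LoRA repo id, adapter name, and generation settings below are illustrative placeholders.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# Load a Flux LoRA checkpoint; the repo id is a placeholder.
pipe.load_lora_weights("your-username/your-flux-lora", adapter_name="style")

image = pipe(
    "a photo of a corgi astronaut",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_lora.png")
```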
Sayak Paul
d87fe95f90
[Chore] add LoraLoaderMixin to the inits ( #8981 )
...
* introduce LoraBaseMixin to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
* loraloadermixin.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2024-07-26 08:59:33 +05:30
YiYi Xu
62863bb1ea
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." ( #8976 )
...
Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774 )"
This reverts commit 527430d0a4.
2024-07-25 09:10:35 -10:00
Sayak Paul
527430d0a4
[LoRA] introduce LoraBaseMixin to promote reusability. ( #8774 )
...
* introduce LoraBaseMixin to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2024-07-25 21:40:58 +05:30
Tolga Cangöz
57084dacc5
Remove unnecessary lines ( #8569 )
...
* Remove unused line
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-07-08 10:42:02 -10:00
Sayak Paul
984d340534
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." ( #8773 )
...
Revert "[LoRA] introduce `LoraBaseMixin` to promote reusability. (#8670 )"
This reverts commit a2071a1837.
2024-07-03 07:05:01 +05:30
Sayak Paul
a2071a1837
[LoRA] introduce LoraBaseMixin to promote reusability. ( #8670 )
...
* introduce LoraBaseMixin to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
2024-07-03 07:04:37 +05:30
Linoy Tsaban
c6e08ecd46
[Sd3 Dreambooth LoRA] Add text encoder training for the clip encoders ( #8630 )
...
* add clip text-encoder training
* no dora
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* add text_encoder layers to save_lora
* style
* fix imports
* style
* fix text encoder
* review changes
* review changes
* review changes
* minor change
* add lora tag
* style
* add readme notes
* add tests for clip encoders
* style
* typo
* fixes
* style
* Update tests/lora/test_lora_layers_sd3.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update examples/dreambooth/README_sd3.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* minor readme change
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-06-25 18:00:19 +05:30
Álvaro Somoza
e7b9a0762b
[SD3 LoRA] Fix list index out of range ( #8584 )
...
* fix
* add check
* key presence is checked before
* test case draft
* apply suggestions
* changed testing repo, back to old class
* forgot docstring
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-06-21 21:17:34 +05:30
Sayak Paul
668e34c6e0
[LoRA SD3] add support for lora fusion in sd3 ( #8616 )
...
* add support for lora fusion in sd3
* add test to ensure fused lora and effective lora produce same outputs
2024-06-20 14:25:51 +05:30
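A sketch of the fusion flow this commit adds. `fuse_lora()` and `unfuse_lora()` are the methods in question; the base model id is the real SD3 checkpoint, while the LoRA repo id and prompt are placeholders.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

pipe.load_lora_weights("your-username/sd3-lora")  # placeholder repo id
pipe.fuse_lora()    # bake the LoRA deltas into the base weights
image = pipe("an astronaut riding a horse").images[0]
pipe.unfuse_lora()  # restore the original base weights
```

The accompanying test asserts that fused and non-fused inference produce the same outputs.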
Gæros
298ce67999
[LoRA] text encoder: read the ranks for all the attn modules ( #8324 )
...
* [LoRA] text encoder: read the ranks for all the attn modules
* In addition to out_proj, read the ranks of adapters for q_proj, k_proj, and v_proj
* Allow missing adapters (UNet already supports this)
* ruff format loaders.lora
* [LoRA] add tests for partial text encoders LoRAs
* [LoRA] update test_simple_inference_with_partial_text_lora to be deterministic
* [LoRA] comment justifying test_simple_inference_with_partial_text_lora
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-06-18 21:10:50 +01:00
Dhruv Nair
04717fd861
Add Stable Diffusion 3 ( #8483 )
...
* up
* add sd3
* update
* update
* add tests
* fix copies
* fix docs
* update
* add dreambooth lora
* add LoRA
* update
* update
* update
* update
* import fix
* update
* Update src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* import fix 2
* update
* Update src/diffusers/models/autoencoders/autoencoder_kl.py (11 identical review-suggestion commits)
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* update
* update
* update
* fix ckpt id
* fix more ids
* update
* missing doc
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* update
* fix
* update
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
* note on gated access.
* requirements
* licensing
---------
Co-authored-by: sayakpaul <spsayakpaul@gmail.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-06-12 20:44:00 +01:00
Tolga Cangöz
a2ecce26bc
Fix Copying Mechanism typo/bug ( #8232 )
...
* Fix copying mechanism typos
* fix copying mechanism
* Revert, since they are in TODO
* Fix copying mechanism
2024-05-29 09:37:18 -07:00
Tolga Cangöz
0ab63ff647
Fix CPU Offloading Usage & Typos ( #8230 )
...
* Fix typos
* Fix `pipe.enable_model_cpu_offload()` usage
* Fix cpu offloading
* Update numbers
2024-05-24 11:25:29 -07:00
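The documented rule this kind of fix enforces: `enable_model_cpu_offload()` replaces a blanket `.to("cuda")` call rather than following one. A minimal sketch of the intended usage; the model id and prompt are illustrative.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# Correct: sub-models are moved to the GPU only while they run.
# Calling pipe.to("cuda") beforehand would defeat the offloading.
pipe.enable_model_cpu_offload()

image = pipe("a watercolor painting of a lighthouse").images[0]
```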
Sayak Paul
95d3748453
[LoRA] Fix LoRA tests (side effects of RGB ordering) part ii ( #7932 )
...
* check
* check 2.
* update slices
2024-05-13 09:23:48 -10:00
Sayak Paul
305f2b4498
[Tests] fix things after #7013 ( #7899 )
...
* debugging
* save the resulting image
* check if order reversing works.
* checking values.
* up
* okay
* checking
* fix
* remove print
2024-05-09 16:05:35 +02:00
Álvaro Somoza
23e091564f
Fix for "no lora weight found module" with some loras ( #7875 )
...
* return layer weight if not found
* better system and test
* key example and typo
2024-05-07 13:54:57 +02:00
Benjamin Bossan
2523390c26
FIX Setting device for DoRA parameters ( #7655 )
...
Fix a bug that causes the call to set_lora_device to ignore the DoRA
parameters.
2024-04-12 13:55:46 +02:00
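A sketch of the API in question, assuming the `set_lora_device(adapter_names, device)` signature; the base model id is real, while the DoRA checkpoint and adapter name are placeholders. Before this fix, DoRA-specific parameters stayed behind when an adapter was moved.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Placeholder repo id for a LoRA checkpoint trained with use_dora=True.
pipe.load_lora_weights("your-username/dora-lora", adapter_name="dora")

# Move the adapter between devices; the fix makes the DoRA parameters
# (peft's magnitude vectors) follow the LoRA A/B matrices.
pipe.set_lora_device(["dora"], device="cuda")
```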
UmerHA
0302446819
Implements Blockwise lora ( #7352 )
...
* Initial commit
* Implemented block lora
- implemented block lora
- updated docs
- added tests
* Finishing up
* Reverted unrelated changes made by make style
* Fixed typo
* Fixed bug + Made text_encoder_2 scalable
* Integrated some review feedback
* Incorporated review feedback
* Fix tests
* Made every module configurable
* Adapted to new lora test structure
* Final cleanup
* Some more final fixes
- Included examples in `using_peft_for_inference.md`
- Added hint that only attns are scaled
- Removed NoneTypes
- Added test to check mismatching lens of adapter names / weights raise error
* Update using_peft_for_inference.md
* Update using_peft_for_inference.md
* Make style, quality, fix-copies
* Updated tutorial; warning if scale/adapter mismatch
* floats are forwarded as-is; changed tutorial scale
* make style, quality, fix-copies
* Fixed typo in tutorial
* Moved some warnings into `lora_loader_utils.py`
* Moved scale/lora mismatch warnings back
* Integrated final review suggestions
* Empty commit to trigger CI
* Reverted empty commit to trigger CI
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-29 21:15:57 +05:30
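The feature lets `set_adapters()` accept per-block scales instead of a single float, as described in `using_peft_for_inference.md`. A sketch under those assumptions; the LoRA repo id, adapter name, and scale values are illustrative, and only attention modules are affected by the scaling.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("your-username/pixel-art-lora", adapter_name="pixel")  # placeholder

# A float scales everything; a nested dict targets specific UNet block groups.
scales = {
    "text_encoder": 0.5,
    "unet": {
        "down": 0.9,                     # all down blocks
        "up": {
            "block_0": 0.6,              # per up block...
            "block_1": [0.4, 0.8, 1.0],  # ...or per transformer layer inside a block
        },
    },
}
pipe.set_adapters("pixel", scales)
image = pipe("pixel art of a castle").images[0]
```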
Dhruv Nair
4d39b7483d
Memory clean up on all Slow Tests ( #7514 )
...
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-29 14:23:28 +05:30
UmerHA
0b8e29289d
Skip test_lora_fuse_nan on mps ( #7481 )
...
Skipping test_lora_fuse_nan on mps
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-27 14:35:59 +05:30
Sayak Paul
699dfb084c
feat: support DoRA LoRA from community ( #7371 )
...
* feat: support dora loras from community
* safe-guard dora operations under peft version.
* pop use_dora when False
* make dora lora from kohya work.
* fix: kohya conversion utils.
* add a fast test for DoRA compatibility..
* add a nightly test.
2024-03-26 09:37:33 +05:30
UmerHA
1cd4732e7f
Fixed minor error in test_lora_layers_peft.py ( #7394 )
...
* Update test_lora_layers_peft.py
* Update utils.py
2024-03-25 11:35:27 -10:00
Sayak Paul
e25e525fde
[LoRA test suite] refactor the test suite and cleanse it ( #7316 )
...
* cleanse and refactor lora testing suite.
* more cleanup.
* make check_if_lora_correctly_set a utility function
* fix: typo
* retrigger ci
* style
2024-03-20 17:13:52 +05:30
Sayak Paul
b09a2aa308
[LoRA] fix cross_attention_kwargs problems and tighten tests ( #7388 )
...
* debugging
* let's see the numbers
* let's see the numbers
* let's see the numbers
* restrict tolerance.
* increase inference steps.
* shallow copy of cross_attention_kwargs
* remove print
2024-03-19 17:53:38 +05:30
Younes Belkada
8a692739c0
FIX [PEFT / Core] Copy the state dict when passing it to load_lora_weights ( #7058 )
...
* copy the state dict in load lora weights
* fixup
2024-02-27 02:42:23 +01:00
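The practical effect, sketched below with a placeholder file name and base model: `load_lora_weights()` also accepts an in-memory state dict, and after this fix the caller's dict is no longer consumed while the weights are converted and loaded.

```python
import torch
from diffusers import StableDiffusionPipeline
from safetensors.torch import load_file

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# The file name is a placeholder for any LoRA checkpoint on disk.
state_dict = load_file("pytorch_lora_weights.safetensors")
pipe.load_lora_weights(state_dict, adapter_name="style")

# The caller's dict survives the call: loading now works on an internal copy
# instead of popping keys from the dict that was passed in.
assert len(state_dict) > 0
```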
jinghuan-Chen
88aa7f6ebf
Make LoRACompatibleConv padding_mode work. ( #6031 )
...
* Make LoRACompatibleConv padding_mode work.
* Format code style.
* add fast test
* Update src/diffusers/models/lora.py
Simplify the code, as suggested by patrickvonplaten.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* code refactor
* apply patrickvonplaten suggestion to simplify the code.
* rm test_lora_layers_old_backend.py and add test case in test_lora_layers_peft.py
* update test case.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-02-26 14:05:13 -10:00
Dhruv Nair
40dd9cb2bd
Move SDXL T2I Adapter lora test into PEFT workflow ( #6965 )
...
update
2024-02-13 17:08:53 +05:30
Sayak Paul
ca9ed5e8d1
[LoRA] deprecate certain lora methods from the old backend. ( #6889 )
...
* deprecate certain lora methods from the old backend.
* uncomment necessary things.
* safely remove old lora backend 👋
2024-02-09 17:14:32 +01:00
Sayak Paul
30e5e81d58
change to 2024 in the license ( #6902 )
...
change to 2024
2024-02-08 08:19:31 -10:00
Dhruv Nair
d66d554dc2
Add tearDown method to LoRA tests. ( #6660 )
...
* update
* update
2024-01-22 14:00:37 +05:30
Sayak Paul
ae060fc4f1
[feat] introduce unload_lora(). ( #6451 )
...
* introduce unload_lora.
* fix-copies
2024-01-05 16:22:11 +05:30
Sayak Paul
0a0bb526aa
[LoRA deprecation] LoRA deprecation trilogy ( #6450 )
...
* debug
* debug
* more debug
* more more debug
* remove tests for LoRAAttnProcessors.
* rename
2024-01-05 15:48:20 +05:30
Sayak Paul
107e02160a
[LoRA tests] fix stuff related to assertions arising from the recent changes. ( #6448 )
...
* debug
* debug test_with_different_scales_fusion_equivalence
* use the right method.
* place it right.
* let's see.
* let's see again
* alright then.
* add a comment.
2024-01-04 12:55:15 +05:30
sayakpaul
6dbef45e6e
Revert "debug"
...
This reverts commit 7715e6c31c.
2024-01-04 10:39:38 +05:30
sayakpaul
7715e6c31c
debug
2024-01-04 10:39:00 +05:30
sayakpaul
05b3d36a25
Revert "debug"
...
This reverts commit fb4aec0ce3 .
2024-01-04 10:38:04 +05:30
sayakpaul
fb4aec0ce3
debug
2024-01-04 10:37:28 +05:30
Sayak Paul
d700140076
[LoRA deprecation] handle rest of the stuff related to deprecated lora stuff. ( #6426 )
...
* handle rest of the stuff related to deprecated lora stuff.
* fix: copies
* don't modify the UNet in-place.
* fix: temporal autoencoder.
* manually remove lora layers.
* don't copy unet.
* alright
* remove lora attn processors from unet3d
* fix: unet3d.
* style
* Empty-Commit
2024-01-03 20:54:09 +05:30