Tolga Cangöz
c1dc2ae619
Fix multi-gpu case for train_cm_ct_unconditional.py ( #8653 )
* Fix multi-gpu case
* Prefer previously created `unwrap_model()` function
For `torch.compile()` generalizability
* chore: update `unwrap_model()` to use `accelerator.unwrap_model()`
2024-07-17 19:03:12 +05:30
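The unwrapping pattern this commit standardizes can be sketched roughly as follows. This is a minimal illustration with a stand-in wrapper class, not the script's actual code; in the real script the helper first calls `accelerator.unwrap_model()` to strip the distributed (multi-GPU) wrapper, then handles the `torch.compile()` wrapper:

```python
class CompiledWrapper:
    """Stand-in for the wrapper torch.compile() places around a module;
    the original module is kept on the `_orig_mod` attribute."""
    def __init__(self, module):
        self._orig_mod = module

def unwrap_model(model):
    # In the training script this is preceded by accelerator.unwrap_model(model),
    # which removes the DDP wrapper added for multi-GPU runs.
    return getattr(model, "_orig_mod", model)

plain = "unet"
wrapped = CompiledWrapper(plain)
print(unwrap_model(wrapped))  # the original module
print(unwrap_model(plain))    # already unwrapped, returned as-is
```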
ustcuna
9f963e7349
[Community Pipelines] Accelerate inference of AnimateDiff by IPEX on CPU ( #8643 )
* add animatediff_ipex community pipeline
* address the 1st round review comments
2024-07-12 14:31:15 +05:30
Tolga Cangöz
57084dacc5
Remove unnecessary lines ( #8569 )
* Remove unused line
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-07-08 10:42:02 -10:00
apolinário
7833ed957b
Improve model card for push_to_hub trainers ( #8697 )
* Improve trainer model cards
* Update train_dreambooth_sd3.py
* Update train_dreambooth_lora_sd3.py
* add link to adapters loading doc
* Update train_dreambooth_lora_sd3.py
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2024-07-05 12:18:41 +05:30
Dhruv Nair
85c4a326e0
Fix saving text encoder weights and kohya weights in advanced dreambooth lora script ( #8766 )
* update
* update
* update
2024-07-05 11:28:50 +05:30
Thomas Eding
2e2684f014
Add vae_roundtrip.py example ( #7104 )
* Add vae_roundtrip.py example
* Add cuda support to vae_roundtrip
* Move vae_roundtrip.py into research_projects/vae
* Fix channel scaling in vae roundtrip and also support taesd.
* Apply ruff --fix for CI gatekeep check
---------
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2024-07-04 01:53:09 -04:00
Linoy Tsaban
beb1c017ad
[advanced dreambooth lora] add clip_skip arg ( #8715 )
* add clip_skip
* style
* smol fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-07-03 12:15:16 -05:00
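As a rough illustration of what a `clip_skip` argument does (hypothetical helper, not the script's code): given the list of per-layer hidden states from the text encoder, `clip_skip=k` selects the output of an earlier layer instead of the final one. Implementations differ by one layer in their indexing convention; this sketch uses the simple `-(k + 1)` form:

```python
def select_clip_hidden_state(hidden_states, clip_skip=None):
    # hidden_states: list of per-layer outputs, last element = final layer.
    # clip_skip=k takes the output k layers before the final one.
    if clip_skip is None:
        return hidden_states[-1]
    return hidden_states[-(clip_skip + 1)]

layers = ["layer0", "layer1", "layer2", "layer3"]
print(select_clip_hidden_state(layers))               # layer3
print(select_clip_hidden_state(layers, clip_skip=2))  # layer1
```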
Sayak Paul
84bbd2f4ce
Update README.md to include Colab link ( #8775 )
2024-07-03 07:46:38 +05:30
Sayak Paul
600ef8a4dc
Allow SD3 DreamBooth LoRA fine-tuning on a free-tier Colab ( #8762 )
* add experimental scripts to train SD3 transformer lora on colab
* add readme
* add colab
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* fix link in the notebook.
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-07-03 07:07:47 +05:30
Sayak Paul
984d340534
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." ( #8773 )
Revert "[LoRA] introduce `LoraBaseMixin` to promote reusability. (#8670 )"
This reverts commit a2071a1837.
2024-07-03 07:05:01 +05:30
Sayak Paul
a2071a1837
[LoRA] introduce LoraBaseMixin to promote reusability. ( #8670 )
* introduce `LoraBaseMixin` to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
2024-07-03 07:04:37 +05:30
Dhruv Nair
610a71d7d4
Fix indent in dreambooth lora advanced SD 15 script ( #8753 )
update
2024-07-02 11:07:34 +05:30
Sayak Paul
4e57aeff1f
[Tests] add test suite for SD3 DreamBooth ( #8650 )
* add a test suite for SD3 DreamBooth
* lora suite
* style
* add checkpointing tests for LoRA
* add test to cover train_text_encoder.
2024-07-02 07:00:22 +05:30
Álvaro Somoza
af92869d9b
[SD3 LoRA Training] Fix errors when not training text encoders ( #8743 )
* fix
* fix things.
Co-authored-by: Linoy Tsaban <linoy.tsaban@gmail.com>
* remove patch
* apply suggestions
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
Co-authored-by: Linoy Tsaban <linoy.tsaban@gmail.com>
2024-07-02 06:21:16 +05:30
WenheLI
7bfc1ee1b2
fix the LR schedulers for dreambooth_lora ( #8510 )
* update training
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2024-07-01 08:14:57 +05:30
Bhavay Malhotra
71c046102b
[train_controlnet_sdxl.py] Fix the LR schedulers when num_train_epochs is passed in a distributed training env ( #8476 )
* Create diffusers.yml
* num_train_epochs
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-07-01 07:21:40 +05:30
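The class of bug the `num_train_epochs` scheduler commits address comes from computing the LR scheduler's total step count without accounting for the number of processes. A simplified sketch of the corrected arithmetic (hypothetical helper names; the real scripts derive the process count from `accelerator.num_processes`):

```python
import math

def scheduler_steps(dataset_size, batch_size, grad_accum, num_processes, num_epochs):
    # Each optimizer update consumes batch_size * grad_accum samples on EVERY
    # process, so the per-epoch update count shrinks as processes are added.
    samples_per_update = batch_size * grad_accum * num_processes
    updates_per_epoch = math.ceil(dataset_size / samples_per_update)
    return num_epochs * updates_per_epoch

print(scheduler_steps(1000, 4, 2, 1, 3))  # single process: 375
print(scheduler_steps(1000, 4, 2, 4, 3))  # 4 processes: 96 scheduler steps
```

Forgetting the `num_processes` factor makes the scheduler think training is 4x longer than it is, so warmup and decay never complete.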
Álvaro Somoza
9b7acc7cf2
[Community pipeline] SD3 Differential Diffusion Img2Img Pipeline ( #8679 )
* new pipeline
2024-06-28 17:12:39 -10:00
Linoy Tsaban
35f45ecd71
[Advanced dreambooth lora] adjustments to align with canonical script ( #8406 )
* minor changes (×7)
* fix (×2)
* aligning with blora script (×5)
* remove prints
* style
* default val
* license
* move save_model_card to outside push_to_hub
* Update train_dreambooth_lora_sdxl_advanced.py
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-27 13:27:37 +05:30
Linoy Tsaban
c6e08ecd46
[Sd3 Dreambooth LoRA] Add text encoder training for the clip encoders ( #8630 )
* add clip text-encoder training
* no dora
* text encoder training fixes (×6)
* add text_encoder layers to save_lora
* style
* fix imports
* style
* fix text encoder
* review changes (×3)
* minor change
* add lora tag
* style
* add readme notes
* add tests for clip encoders
* style
* typo
* fixes
* style
* Update tests/lora/test_lora_layers_sd3.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/README_sd3.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* minor readme change
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-25 18:00:19 +05:30
Hammond Liu
1f81fbe274
Fix redundant pipe init in sd3 lora ( #8680 )
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-25 07:31:20 +05:30
Tolga Cangöz
589931ca79
Errata - Update class method convention to use cls ( #8574 )
* Class methods are supposed to use `cls` conventionally
* `make style && make quality`
* An empty commit
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-24 10:35:45 -07:00
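The `cls` convention mentioned above in one line: a `@classmethod` receives the class itself as its first argument, conventionally named `cls`, which keeps factory-style constructors working correctly for subclasses. A generic illustration (not diffusers code):

```python
class Pipeline:
    @classmethod
    def from_config(cls, config):
        # `cls` is whatever class the method was called on, so subclasses
        # constructed through this factory get the right type automatically.
        obj = cls()
        obj.config = config
        return obj

class VideoPipeline(Pipeline):
    pass

p = VideoPipeline.from_config({"steps": 30})
print(type(p).__name__)  # VideoPipeline
```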
Tolga Cangöz
f040c27d4c
Errata - Fix typos and improve style ( #8571 )
* Fix typos
* Fix typos & up style
* chore: Update numbers
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-24 10:07:22 -07:00
Tolga Cangöz
138fac703a
Discourage using deprecated revision parameter ( #8573 )
* Discourage using `revision`
* `make style && make quality`
* Refactor code to use 'variant' instead of 'revision'
* `revision="bf16"` -> `variant="bf16"`
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-24 10:06:49 -07:00
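The distinction the `revision` -> `variant` change encodes, sketched with a hypothetical filename helper: `variant="bf16"` selects differently-named weight files within the same repository revision, whereas `revision` points at a separate branch or tag (now discouraged for precision variants):

```python
def weight_filename(variant=None, base="diffusion_pytorch_model", ext="safetensors"):
    # variant="bf16" resolves to a file like diffusion_pytorch_model.bf16.safetensors
    # inside the same repo revision, instead of a dedicated "bf16" branch.
    return f"{base}.{variant}.{ext}" if variant else f"{base}.{ext}"

print(weight_filename())                # diffusion_pytorch_model.safetensors
print(weight_filename(variant="bf16"))  # diffusion_pytorch_model.bf16.safetensors
```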
Tolga Cangöz
468ae09ed8
Errata - Trim trailing white space in the whole repo ( #8575 )
* Trim all the trailing white space in the whole repo
* Remove unnecessary empty places
* make style && make quality
* Trim trailing white space
* trim
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-24 18:39:15 +05:30
Tolga Cangöz
c375903db5
Errata - Fix typos & improve contributing page ( #8572 )
* Fix typos & improve contributing page
* `make style && make quality`
* fix typos
* Fix typo
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-24 14:13:03 +05:30
Vinh H. Pham
b9d52fca1d
[train_lcm_distill_lora_sdxl.py] Fix the LR schedulers when num_train_epochs is passed in a distributed training env ( #8446 )
fix num_train_epochs
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-24 14:09:28 +05:30
drhead
2ada094bff
Add extra performance features for EMAModel, torch._foreach operations and better support for non-blocking CPU offloading ( #7685 )
* Add support for _foreach operations and non-blocking to EMAModel
* default foreach to false
* add non-blocking EMA offloading to SD1.5 T2I example script
* fix whitespace
* move foreach to cli argument
* linting
* Update README.md re: EMA weight training
* correct args.foreach_ema
* add tests for foreach ema
* code quality
* add foreach to from_pretrained
* default foreach false
* fix linting
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: drhead <a@a.a>
2024-06-24 14:03:47 +05:30
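The EMA update that the `torch._foreach` commit batches can be written elementwise as `ema = decay * ema + (1 - decay) * param`; the `_foreach` variants apply it across the whole parameter list in a few fused kernel launches instead of one small op per tensor. A dependency-free sketch of the math, with plain floats standing in for tensors:

```python
def ema_step(ema_params, params, decay=0.999):
    # Roughly what torch._foreach_mul_(ema, decay) followed by
    # torch._foreach_add_(ema, params, alpha=1 - decay) performs as two
    # batched in-place ops on the real tensor lists.
    for i, p in enumerate(params):
        ema_params[i] = decay * ema_params[i] + (1.0 - decay) * p
    return ema_params

shadow = [0.0, 10.0]
print(ema_step(shadow, [1.0, 0.0], decay=0.9))  # ~[0.1, 9.0]
```

The commit also offloads the shadow parameters to CPU with non-blocking copies, which the toy version above has no analogue for.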
Sayak Paul
8eb17315c8
[LoRA] get rid of the legacy lora remnants and make our codebase lighter ( #8623 )
* get rid of the legacy lora remnants and make our codebase lighter
* fix deprecated lora argument
* fix
* empty commit to trigger ci
* remove print
* empty
2024-06-21 16:36:05 +05:30
satani99
963ee05d16
Update train_dreambooth_lora_sd3.py ( #8600 )
* Update train_dreambooth_lora_sd3.py
* Update train_dreambooth_lora_sd3.py
* Update train_dreambooth_sd3.py
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-20 17:42:24 +05:30
Sayak Paul
a1d55e14ba
Change the default weighting_scheme in the SD3 scripts ( #8639 )
* change to logit_normal as the weighting scheme
* sensible default note
2024-06-19 13:05:26 +01:00
Sayak Paul
23a2cd3337
[LoRA] training fix the position of param casting when loading them ( #8460 )
fix the position of param casting when loading them
2024-06-18 14:57:34 +01:00
Sayak Paul
4edde134f6
[SD3 training] refactor the density and weighting utilities. ( #8591 )
refactor the density and weighting utilities.
2024-06-18 14:44:38 +01:00
Bagheera
074a7cc3c5
SD3: update default training timestep / loss weighting distribution to logit_normal ( #8592 )
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
2024-06-18 14:15:19 +01:00
Álvaro Somoza
6bfd13f07a
[SD3 Training] T5 token limit ( #8564 )
* initial commit
* default back to 77
* better text
* text correction
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-17 16:32:56 -04:00
spacepxl
8e1b7a084a
Fix the deletion of SD3 text encoders for Dreambooth/LoRA training if the text encoders are not being trained ( #8536 )
* Update train_dreambooth_sd3.py to fix TE garbage collection
* Update train_dreambooth_lora_sd3.py to fix TE garbage collection
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-16 20:52:33 +01:00
Rafie Walker
6946facf69
Implement SD3 loss weighting ( #8528 )
* Add lognorm and cosmap weighting
* Implement mode sampling
* Update examples/dreambooth/train_dreambooth_lora_sd3.py (×4)
* Update examples/dreambooth/train_dreambooth_sd3.py (×5)
* keep timestep sampling fully on cpu
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-16 20:15:50 +01:00
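The logit-normal timestep weighting adopted in these SD3 commits samples u from a normal distribution and maps it through a sigmoid, concentrating training timesteps away from the extremes of [0, 1]. A standard-library sketch of that sampling step (not the scripts' tensor implementation, which also offers `mode` and `cosmap` schemes):

```python
import math
import random

def sample_t_logit_normal(logit_mean=0.0, logit_std=1.0):
    # u ~ Normal(logit_mean, logit_std); t = sigmoid(u) lies strictly inside
    # (0, 1), with density peaked around sigmoid(logit_mean).
    u = random.gauss(logit_mean, logit_std)
    return 1.0 / (1.0 + math.exp(-u))

random.seed(0)
ts = [sample_t_logit_normal() for _ in range(5)]
print(all(0.0 < t < 1.0 for t in ts))  # True
```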
Jonathan Rahn
a899e42fc7
add sentencepiece to requirements.txt for SD3 dreambooth ( #8538 )
* add `sentencepiece` requirement for SD3
* Empty-Commit
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-14 22:48:36 +01:00
Sayak Paul
2e4841ef1e
post release 0.29.0 ( #8492 )
post release
2024-06-13 06:14:20 -10:00
Haofan Wang
8bea943714
Update requirements_sd3.txt ( #8521 )
2024-06-13 17:02:17 +01:00
Ameer Azam
0240d4191a
Update README_sd3.md ( #8490 )
Fix incorrect script name in the README: train_dreambooth_sd3.py -> train_dreambooth_lora_sd3.py
2024-06-12 21:08:36 +01:00
Dhruv Nair
04717fd861
Add Stable Diffusion 3 ( #8483 )
* up
* add sd3
* update
* update
* add tests
* fix copies
* fix docs
* update
* add dreambooth lora
* add LoRA
* update
* update
* update
* update
* import fix
* update
* Update src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* import fix 2
* update
* Update src/diffusers/models/autoencoders/autoencoder_kl.py (×11)
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* update
* update
* update
* fix ckpt id
* fix more ids
* update
* missing doc
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py (×2)
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md (×2)
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* update
* fix
* update
* Update src/diffusers/models/autoencoders/autoencoder_kl.py (×2)
* note on gated access.
* requirements
* licensing
---------
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-06-12 20:44:00 +01:00
Sayak Paul
d457beed92
Update README.md to update the MaPO project ( #8470 )
Update README.md
2024-06-11 10:10:45 +01:00
Tolga Cangöz
98730c5dd7
Errata ( #8322 )
* Fix typos
* Trim trailing whitespaces
* Remove a trailing whitespace
* chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0
* Revert "chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0"
This reverts commit fd742b30b4.
* pokemon -> naruto
* `DPMSolverMultistep` -> `DPMSolverMultistepScheduler`
* Improve Markdown stylization
* Improve style
* Improve style
* Refactor pipeline variable names for consistency
* up style
2024-06-05 13:59:09 -07:00
Hzzone
d3881f35b7
Gligen training ( #7906 )
* add training code of gligen
* fix code quality tests.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-05 16:26:42 +04:00
satani99
352d96eb82
Modularize train_text_to_image_lora_sdxl inferencing during and after training in example ( #8335 )
* Modularized the train_lora_sdxl file (×3)
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-05-31 04:52:22 +05:30
Genius Patrick
3511a9623f
fix(training): lr scheduler doesn't work properly in distributed scenarios ( #8312 )
2024-05-30 15:23:19 +05:30
Tolga Cangöz
a2ecce26bc
Fix Copying Mechanism typo/bug ( #8232 )
* Fix copying mechanism typos
* fix copying mechanism
* Revert, since they are in TODO
* Fix copying mechanism
2024-05-29 09:37:18 -07:00
satani99
3bc3b48c10
Modularize train_text_to_image_lora SD inferencing during and after training in example ( #8283 )
* Modularized the train_lora file (×5)
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-05-29 10:08:02 +05:30
Sayak Paul
581d8aacf7
post release v0.28.0 ( #8286 )
* post release v0.28.0
* style
2024-05-29 07:13:22 +05:30
Sajad Norouzi
67bef2027c
Add Kohya fix to SD pipeline for high resolution generation ( #7633 )
add kohya high resolution fix.
2024-05-28 10:00:04 -10:00