Alexey Zolotenkov
b8215b1c06
Fix incorrect seed initialization when args.seed is 0 ( #10964 )
...
* Fix seed initialization to handle args.seed = 0 correctly
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-03-04 10:09:52 -10:00
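For context, the pitfall behind this fix is most likely Python truthiness: a hedged sketch of the before/after, assuming the script guarded seeding with a plain `if` (the actual diff may differ; `set_seed` is the accelerate helper the diffusers training scripts use).

```python
# Hypothetical sketch, not the actual patch.
from accelerate.utils import set_seed

def init_seed_buggy(seed):
    if seed:  # bug: seed == 0 is falsy, so an explicit seed of 0 is ignored
        set_seed(seed)

def init_seed_fixed(seed):
    if seed is not None:  # only skip seeding when no seed was passed at all
        set_seed(seed)
```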
SahilCarterr
170833c22a
[Fix] fp16 unscaling in train_dreambooth_lora_sdxl ( #10889 )
...
Fix fp16 bug
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-02-24 06:49:23 -10:00
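A plausible sketch of the underlying fp16 issue (assumed from the title, not taken from the diff): trainable LoRA parameters must be kept in fp32 when training with mixed_precision="fp16", otherwise gradient unscaling misbehaves. The diffusers helper below is what the training scripts use; the module is a stand-in for the LoRA-equipped UNet.

```python
import torch
from diffusers.training_utils import cast_training_params

model = torch.nn.Linear(4, 4).to(torch.float16)  # stand-in trainable module

# Upcast only parameters with requires_grad=True to fp32; frozen weights
# stay in fp16, which is what mixed-precision LoRA training expects.
cast_training_params(model, dtype=torch.float32)
```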
Sayak Paul
f10d3c6d04
[LoRA] add LoRA support to Lumina2 and fine-tuning script ( #10818 )
...
* feat: lora support for Lumina2.
* fix-copies.
* updates
* docs.
* fix
* add: training script.
* tests
* updates
* major updates.
* updates
* fixes
* docs.
* updates
2025-02-20 09:41:51 +05:30
Leo Jiang
cd0a4a82cf
[bugfix] NPU Adaptation for Sana ( #10724 )
...
* NPU Adaptation for Sana
* [bugfix] NPU Adaptation for Sana
---------
Co-authored-by: Jiang Shuo <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-02-06 19:29:58 +05:30
hlky
41571773d9
[training] Convert to ImageFolder script ( #10664 )
...
* [training] Convert to ImageFolder script
* make
2025-01-27 09:43:51 -10:00
Leo Jiang
07860f9916
NPU Adaptation for Sana ( #10409 )
...
* NPU Adaptation for Sana
---------
Co-authored-by: Jiang Shuo <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-01-24 09:08:52 -10:00
Muyang Li
158a5a87fb
Remove the FP32 Wrapper when evaluating ( #10617 )
...
Remove the FP32 Wrapper
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-01-21 16:16:54 +05:30
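Roughly, the idea (a hedged sketch assuming the usual accelerate pattern; the commit itself touches a training script): validation should run on the unwrapped module, not the training-time wrapper that upcasts the forward pass to FP32.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()
model = accelerator.prepare(torch.nn.Linear(8, 8))

# For evaluation/inference, strip accelerate's training wrappers so the
# pipeline receives the plain nn.Module.
eval_model = accelerator.unwrap_model(model)
```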
jiqing-feng
012d08b1bc
Enable dreambooth lora finetune example on other devices ( #10602 )
...
* enable dreambooth_lora on other devices
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* enable xpu
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* check cuda device before empty cache
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix comment
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* import free_memory
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
---------
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
2025-01-21 14:09:45 +05:30
Sayak Paul
4ace7d0483
[chore] change licensing to 2025 from 2024. ( #10615 )
...
change licensing to 2025 from 2024.
2025-01-20 16:57:27 -10:00
Leo Jiang
b0c8973834
[Sana 4K] Add vae tiling option to avoid OOM ( #10583 )
...
Co-authored-by: Jiang Shuo <jiangshuo9@h-partners.com>
2025-01-16 02:06:07 +05:30
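For illustration (the checkpoint id here is an assumption, not taken from the PR): tiled VAE decoding splits a large 4K latent into overlapping tiles so decoding fits in memory.

```python
import torch
from diffusers import SanaPipeline

# Checkpoint id is illustrative; any Sana checkpoint with a tiling-capable
# VAE should behave the same way.
pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",
    torch_dtype=torch.bfloat16,
)
pipe.vae.enable_tiling()  # decode in tiles instead of one pass to avoid OOM
```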
Sayak Paul
5f72473543
[training] add ds support to lora sd3. ( #10378 )
...
* add ds support to lora sd3.
Co-authored-by: leisuzz <jiangshuonb@gmail.com>
* style.
---------
Co-authored-by: leisuzz <jiangshuonb@gmail.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2024-12-30 19:31:05 +05:30
Sayak Paul
92933ec36a
[chore] post release 0.32.0 ( #10361 )
...
* post release 0.32.0
* style
2024-12-23 10:03:34 -10:00
Sayak Paul
76e2727b5c
[SANA LoRA] sana lora training tests and misc. ( #10296 )
...
* sana lora training tests and misc.
* remove push to hub
* Update examples/dreambooth/train_dreambooth_lora_sana.py
Co-authored-by: Aryan <aryan@huggingface.co>
---------
Co-authored-by: Aryan <aryan@huggingface.co>
2024-12-23 12:35:13 +05:30
Sayak Paul
9c0e20de61
[chore] Update README_sana.md to update the default model ( #10285 )
...
Update README_sana.md to update the default model
2024-12-19 10:24:57 +05:30
Sayak Paul
63cdf9c0ba
[chore] fix: reamde -> readme ( #10276 )
...
fix: reamde -> readme
2024-12-18 10:56:08 +05:30
Sayak Paul
9408aa2dfc
[LoRA] feat: lora support for SANA. ( #10234 )
...
* feat: lora support for SANA.
* make fix-copies
* rename test class.
* attention_kwargs -> cross_attention_kwargs.
* Revert "attention_kwargs -> cross_attention_kwargs."
This reverts commit 23433bf9bc.
* exhaust 119 max line limit
* sana lora fine-tuning script.
* readme
* add a note about the supported models.
* Apply suggestions from code review
Co-authored-by: Aryan <aryan@huggingface.co>
* style
* docs for attention_kwargs.
* remove lora_scale from pag pipeline.
* copy fix
---------
Co-authored-by: Aryan <aryan@huggingface.co>
2024-12-18 08:22:31 +05:30
Ethan Smith
26e80e0143
fix min-snr implementation ( #8466 )
...
* fix min-snr implementation
https://github.com/kohya-ss/sd-scripts/blob/main/library/custom_train_functions.py#L66
* Update train_dreambooth.py
fix variable name mse_loss_weights
* fix divisor
* make style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-12-12 09:55:59 +05:30
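For reference, a sketch of min-SNR-gamma loss weighting as in the kohya-ss implementation linked above (variable names are illustrative; the script's exact integration may differ).

```python
import torch
from diffusers.training_utils import compute_snr

def min_snr_loss_weights(noise_scheduler, timesteps, snr_gamma=5.0,
                         prediction_type="epsilon"):
    snr = compute_snr(noise_scheduler, timesteps)
    clipped = torch.clamp(snr, max=snr_gamma)  # min(SNR, gamma)
    if prediction_type == "v_prediction":
        return clipped / (snr + 1)  # v-prediction divisor includes the +1 term
    return clipped / snr            # epsilon prediction: min(SNR, gamma) / SNR
```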
SkyCol
074e12358b
Add a note about wandb in examples/dreambooth/readme. ( #10014 )
...
Add files via upload
2024-11-25 18:42:06 +05:30
Linoy Tsaban
c4b5d2ff6b
[SD3 dreambooth lora] smol fix to checkpoint saving ( #9993 )
...
* smol change to fix checkpoint saving & resuming (as done in train_dreambooth_sd3.py)
* style
* modify comment to explain reasoning behind hidden size check
2024-11-24 18:51:06 +02:00
Linoy Tsaban
acf479bded
[advanced flux training] bug fix + reduce memory cost as in #9829 ( #9838 )
...
* memory improvement as done here: https://github.com/huggingface/diffusers/pull/9829
* fix bug
* fix bug
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-11-19 08:43:36 +05:30
SahilCarterr
9cc96a64f1
[FIX] Fix TypeError in DreamBooth SDXL when use_dora is False ( #9879 )
...
* fix use_dora
* fix style and quality
* fix use_dora with peft version
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-11-08 19:09:24 -04:00
SahilCarterr
76b7d86a9a
Updated _encode_prompt_with_clip and encode_prompt in train_dreambooth_sd3 ( #9800 )
...
* updated encode prompt and clip encode prompt
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-11-05 15:08:50 -10:00
Leo Jiang
a98a839de7
Reduce Memory Cost in Flux Training ( #9829 )
...
* Improve NPU performance
* [bugfix] bugfix for npu free memory
* Reduce memory cost for flux training process
---------
Co-authored-by: Jiang Shuo <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-11-01 12:19:32 +05:30
Boseong Jeon
3deed729e6
Handling mixed precision for dreambooth flux lora training ( #9565 )
...
Handling mixed precision and add unwrap
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2024-11-01 10:16:05 +05:30
Leo Jiang
9dcac83057
NPU Adaption for FLUX ( #9751 )
...
* NPU implementation for FLUX
---------
Co-authored-by: Jiang Shuo <jiangshuo9@h-partners.com>
2024-11-01 09:03:15 +05:30
Sayak Paul
8ce37ab055
[training] use the lr when using 8bit adam. ( #9796 )
...
* use the lr when using 8bit adam.
* remove lr as we pack it in params_to_optimize.
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2024-10-31 15:51:42 +05:30
Linoy Tsaban
743a5697f2
[flux dreambooth lora training] make LoRA target modules configurable + small bug fix ( #9646 )
...
* make lora target modules configurable and change the default
* style
* make lora target modules configurable and change the default
* fix bug when using prodigy and training te
* fix mixed precision training as proposed in https://github.com/huggingface/diffusers/pull/9565 for full dreambooth as well
* add test and notes
* style
* address Sayak's comments
* style
* fix test
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-10-28 17:27:41 +02:00
Linoy Tsaban
db5b6a9630
[SD 3.5 Dreambooth LoRA] support configurable training block & layers ( #9762 )
...
* configurable layers
* configurable layers
* update README
* style
* add test
* style
* add layer test, update readme, add nargs
* readme
* test style
* remove print, change nargs
* test arg change
* style
* revert nargs 2/2
* address Sayak's comments
* style
* address Sayak's comments
2024-10-28 16:07:54 +02:00
Biswaroop
493aa74312
[Fix] remove setting lr for T5 text encoder when using prodigy in flux dreambooth lora script ( #9473 )
...
* fix: removed setting of text encoder lr for T5 as it's not being tuned
* fix: removed setting of text encoder lr for T5 as it's not being tuned
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2024-10-28 13:07:30 +02:00
Ina
73b59f5203
[refactor] enhance readability of flux related pipelines ( #9711 )
...
* flux pipeline: readability enhancement.
2024-10-25 11:01:51 -10:00
Linoy Tsaban
bfa0aa4ff2
[SD3-5 dreambooth lora] update model cards ( #9749 )
...
* improve readme
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-10-23 23:16:53 +03:00
Sayak Paul
e45c25d03a
post-release 0.31.0 ( #9742 )
...
* post-release
* style
2024-10-22 20:42:30 +05:30
Linoy Tsaban
ee4ab23892
[SD3 dreambooth-lora training] small updates + bug fixes ( #9682 )
...
* add latent caching + smol updates
* update license
* replace with free_memory
* add --upcast_before_saving to allow saving transformer weights in lower precision
* fix models to accumulate
* fix mixed precision issue as proposed in https://github.com/huggingface/diffusers/pull/9565
* smol update to readme
* style
* fix caching latents
* style
* add tests for latent caching
* style
* fix latent caching
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-10-16 11:13:37 +03:00
0xεη‘γ
dccf39f01e
Dreambooth lora flux bug: 3D tensor to 2D tensor ( #9653 )
...
* fixed issue #9350, Tensor is deprecated
* ran make style
2024-10-15 17:18:13 +05:30
Sayak Paul
8e7d6c03a3
[chore] fix: retain memory utility. ( #9543 )
...
* fix: retain memory utility.
* fix
* quality
* free_memory.
2024-09-28 21:08:45 +05:30
suzukimain
b52119ae92
[docs] Replace runwayml/stable-diffusion-v1-5 with Lykon/dreamshaper-8 ( #9428 )
...
* [docs] Replace runwayml/stable-diffusion-v1-5 with Lykon/dreamshaper-8
Updated documentation as runwayml/stable-diffusion-v1-5 has been removed from Hugging Face.
* Update docs/source/en/using-diffusers/inpaint.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Replace with stable-diffusion-v1-5/stable-diffusion-v1-5
* Update inpaint.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-09-16 10:18:45 -07:00
Linoy Tsaban
37e3603c4a
[Flux Dreambooth lora] add latent caching ( #9160 )
...
* add ostris trainer to README & add cache latents of vae
* style
* readme
* add test for latent caching
* add ostris noise scheduler
9ee1ef2a0a/toolkit/samplers/custom_flowmatch_sampler.py (L95)
* style
* fix import
* style
* fix tests
* style
* --change upcasting of transformer?
* update readme according to main
* keep only latent caching
* add configurable param for final saving of trained layers- --upcast_before_saving
* style
* Update examples/dreambooth/README_flux.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/README_flux.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* use clear_objs_and_retain_memory from utilities
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-09-15 15:30:31 +03:00
Leo Jiang
e2ead7cdcc
Fix the issue on sd3 dreambooth with/without lora training ( #9419 )
...
* Fix dtype error
* [bugfix] Fixed the issue on sd3 dreambooth training
* [bugfix] Fixed the issue on sd3 dreambooth training
---------
Co-authored-by: Jiang Shuo <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-09-14 16:29:38 +05:30
Sayak Paul
adf1f911f0
[Tests] fix some fast gpu tests. ( #9379 )
...
fix some fast gpu tests.
2024-09-11 06:50:02 +05:30
Linoy Tsaban
55ac421f7b
improve README for flux dreambooth lora ( #9290 )
...
* improve readme
2024-09-05 17:53:23 +05:30
Sayak Paul
8ba90aa706
chore: add a cleaning utility to be useful during training. ( #9240 )
2024-09-03 15:00:17 +05:30
Linoy Tsaban
c977966502
[Dreambooth flux] bug fix for dreambooth script (align with dreambooth lora) ( #9257 )
...
* fix shape
* fix prompt encoding
* style
* fix device
* add comment
2024-08-26 17:29:58 +05:30
townwish4git
d25eb5d385
fix(sd3): fix deletion of text_encoders etc ( #8951 )
...
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-08-18 15:37:40 -10:00
Tolga CangΓΆz
7ef8a46523
[Docs] Fix CPU offloading usage ( #9207 )
...
* chore: Fix cpu offloading usage
* Trim trailing white space
* docs: update Kolors model link in kolors.md
2024-08-18 13:12:12 -10:00
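The documented usage this fixes boils down to the following (the pipeline id is just an example): with model CPU offloading enabled, the offload hooks manage device placement per-module, so the pipeline must not also be moved to the GPU manually.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # replaces an explicit pipe.to("cuda")
```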
Γlvaro Somoza
82058a5413
post release 0.30.0 ( #9173 )
...
* post release
* fix quality
2024-08-14 12:55:55 +05:30
Linoy Tsaban
413ca29b71
[Flux Dreambooth LoRA] - te bug fixes & updates ( #9139 )
...
* add requirements + fix link to bghira's guide
* text encoder training fixes
* style
* add tests
* fix encode_prompt call
* style
* unpack_latents test
* fix lora saving
* remove default val for max_sequence_length in encode_prompt
* style
* testing
* style
* testing
* style
* fix sizing issue
* style
* revert scaling
* style
* scaling test
* style
* scaling test
* remove model pred operation left from pre-conditioning
* fix trainable params
* remove te2 from casting
* transformer to accelerator
* remove prints
* empty commit
2024-08-12 11:58:03 +05:30
Linoy Tsaban
65e30907b5
[Flux] Dreambooth LoRA training scripts ( #9086 )
...
* initial commit - dreambooth for flux
* update transformer to be FluxTransformer2DModel
* update training loop and validation inference
* fix sd3->flux docs
* add guidance handling, not sure if it makes sense(?)
* initial dreambooth lora commit
* fix text_ids in compute_text_embeddings
* fix imports of static methods
* fix pipeline loading in readme, remove auto1111 docs for now, remove some irrelevant text_encoder_3 refs
* Update examples/dreambooth/train_dreambooth_flux.py
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
* fix te2 loading and remove te2 refs from text encoder training
* fix tokenizer_2 initialization
* remove text_encoder training refs from lora script (for now)
* try with vae in bfloat16, fix model hook save
* fix tokenization
* fix static imports
* fix CLIP import
* remove text_encoder training refs (for now) from lora script
* fix minor bug in encode_prompt, add guidance def in lora script, ...
* fix unpack_latents args
* fix license in readme
* add "none" to weighting_scheme options for uniform sampling
* style
* adapt model saving - remove text encoder refs
* adapt model loading - remove text encoder refs
* initial commit for readme
* Update examples/dreambooth/train_dreambooth_lora_flux.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix vae casting
* remove precondition_outputs
* readme
* readme
* style
* readme
* readme
* update weighting scheme default & docs
* style
* add text_encoder training to lora script, change vae_scale_factor value in both
* style
* text encoder training fixes
* style
* update readme
* minor fixes
* fix te params
---------
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-08-09 07:31:04 +05:30
Sayak Paul
2d753b6fb5
fix train_dreambooth_lora_sd3.py loading hook ( #9107 )
2024-08-07 10:09:47 +05:30
Tolga CangΓΆz
7071b7461b
Errata: Fix typos & \s+$ ( #9008 )
...
* Fix typos
* chore: Fix typos
* chore: Update README.md for promptdiffusion example
* Trim trailing white spaces
* Fix a typo
* update number
* chore: update number
* Trim trailing white space
* Update README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-08-02 21:24:25 -07:00
Sayak Paul
d87fe95f90
[Chore] add LoraLoaderMixin to the inits ( #8981 )
...
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
* loraloadermixin.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2024-07-26 08:59:33 +05:30