Sayak Paul
6683f97959
[Training] Add datasets version of LCM LoRA SDXL (#5778)
* add: script to train lcm lora for sdxl with 🤗 datasets
* suit up the args.
* remove comments.
* fix num_update_steps
* fix batch unmarshalling
* fix num_update_steps_per_epoch
* fix: dataloading.
* fix microconditions.
* unconditional predictions debug
* fix batch size.
* no need to use use_auth_token
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* make vae encoding batch size an arg
* final serialization in kohya
* style
* state dict rejigging
* feat: no separate teacher unet.
* debug
* fix state dict serialization
* debug
* debug
* debug
* remove prints.
* remove kohya utility and make style
* fix serialization
* fix
* add test
* add peft dependency.
* add: peft
* remove peft
* autocast device determination from accelerator
* autocast
* reduce lora rank.
* remove unneeded space
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* style
* remove prompt dropout.
* also save in native diffusers ckpt format.
* debug
* debug
* debug
* better formation of the null embeddings.
* remove space.
* autocast fixes.
* autocast fix.
* hacky
* remove lora_sayak
* Apply suggestions from code review
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* style
* make log validation leaner.
* move back enabled in.
* fix: log_validation call.
* add: checkpointing tests
* taking my chances to see if disabling autocasting has any effect?
* start debugging
* name
* name
* name
* more debug
* more debug
* index
* remove index.
* print length
* print length
* print length
* move unet.train() after add_adapter()
* disable some prints.
* enable_adapters() manually.
* remove prints.
* some changes.
* fix params_to_optimize
* more fixes
* debug
* debug
* remove print
* disable grad for certain contexts.
* Add support for IPAdapterFull (#5911)
* Add support for IPAdapterFull
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix a bug in `add_noise` function (#6085)
* fix
* copies
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
* [Advanced Diffusion Script] Add Widget default text (#6100)
add widget
* [Advanced Training Script] Fix pipe example (#6106)
* IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901)
* adapter for StableDiffusionControlNetImg2ImgPipeline
* fix-copies
* fix-copies
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* IP adapter support for most pipelines (#5900)
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
* update tests
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
* revert changes to sd_attend_and_excite and sd_upscale
* make style
* fix broken tests
* update ip-adapter implementation to latest
* apply suggestions from review
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix: lora_alpha
* make vae casting conditional.
* param upcasting
* propagate comments from https://github.com/huggingface/diffusers/pull/6145
Co-authored-by: dg845 <dgu8957@gmail.com>
* [Peft] fix saving / loading when unet is not "unet" (#6046)
* [Peft] fix saving / loading when unet is not "unet"
* Update src/diffusers/loaders/lora.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* undo stablediffusion-xl changes
* use unet_name to get unet for lora helpers
* use unet_name
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [Wuerstchen] fix fp16 training and correct lora args (#6245)
fix fp16 training
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [docs] fix: animatediff docs (#6339)
fix: animatediff docs
* add: note about the new script in readme_sdxl.
* Revert "[Peft] fix saving / loading when unet is not "unet" (#6046)"
This reverts commit 4c7e983bb5.
* Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245)"
This reverts commit 0bb9cf0216.
* Revert "[docs] fix: animatediff docs (#6339)"
This reverts commit 11659a6f74.
* remove tokenize_prompt().
* assistive comments around enable_adapters() and disable_adapters().
---------
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com>
Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
Co-authored-by: dg845 <dgu8957@gmail.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
2023-12-26 21:22:05 +05:30
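For reference, a minimal sketch of the pattern several of the commits above converge on: load the training data with 🤗 datasets, attach a PEFT LoRA adapter to the SDXL UNet, and call unet.train() only after add_adapter(). The dataset name and rank below are placeholders, not the script's defaults.

    from datasets import load_dataset
    from diffusers import UNet2DConditionModel
    from peft import LoraConfig

    # placeholder dataset; the script accepts any image/caption dataset
    dataset = load_dataset("lambdalabs/naruto-blip-captions", split="train")

    unet = UNet2DConditionModel.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
    )
    lora_config = LoraConfig(
        r=64,  # placeholder rank; the commits note it was later reduced
        lora_alpha=64,
        init_lora_weights="gaussian",
        target_modules=["to_q", "to_k", "to_v", "to_out.0"],
    )
    unet.add_adapter(lora_config)
    unet.train()  # per the commit above, train() goes after add_adapter()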
Linoy Tsaban
29dfe22a8e
[advanced dreambooth lora sdxl training script] load pipeline for inference only if validation prompt is used (#6171)
* load pipeline for inference only if validation prompt is used
* move things outside
* load pipeline for inference only if validation prompt is used
* fix readme when validation prompt is used
---------
Co-authored-by: linoytsaban <linoy@huggingface.co>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
2023-12-14 11:45:33 -06:00
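A minimal sketch of the guard this commit adds; args, log_validation, accelerator, and epoch stand in for the script's own names. The expensive SDXL pipeline is only built when a validation prompt is actually supplied.

    import torch
    from diffusers import StableDiffusionXLPipeline

    if args.validation_prompt is not None:
        # build the inference pipeline only when it will be used
        pipeline = StableDiffusionXLPipeline.from_pretrained(
            args.pretrained_model_name_or_path, torch_dtype=torch.float16
        )
        images = log_validation(pipeline, args, accelerator, epoch)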
apolinário
2a111bc9fe
[Advanced Training Script] Fix pipe example (#6106)
2023-12-08 15:56:35 +01:00
apolinário
16e6997f0d
[Advanced Diffusion Script] Add Widget default text (#6100)
add widget
2023-12-08 12:45:27 +01:00
apolinário
466d32c442
[Advanced Diffusion Training] Cache latents to avoid VAE passes for every training step (#6076)
* add cache latents
* style
2023-12-06 14:46:53 +01:00
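A minimal sketch of the latent-caching idea, assuming vae, train_dataloader, and accelerator already exist in the training script: run every batch through the VAE once before training, then sample from the cached distributions each step instead of re-encoding.

    import torch

    latents_cache = []
    with torch.no_grad():
        for batch in train_dataloader:
            pixel_values = batch["pixel_values"].to(accelerator.device, dtype=vae.dtype)
            latents_cache.append(vae.encode(pixel_values).latent_dist)

    # the VAE is no longer needed on the GPU during training
    vae.to("cpu")
    torch.cuda.empty_cache()

    # each step then uses latents_cache[step].sample() * vae.config.scaling_factor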
apolinário
6e221334cd
[advanced_dreambooth_lora_sdxl_training_script] save embeddings locally fix (#6058)
* Update train_dreambooth_lora_sdxl_advanced.py
* remove global function args from dreamboothdataset class
* style
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-05 13:52:34 +01:00
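A sketch of the local embedding save this commit fixes, assuming the two SDXL text encoders and the ids of the newly inserted tokens are in scope; the dict keys and file name are illustrative, not the script's exact ones.

    from safetensors.torch import save_file

    embeddings = {
        "clip_l": text_encoder_one.get_input_embeddings().weight[new_token_ids].detach().cpu(),
        "clip_g": text_encoder_two.get_input_embeddings().weight[new_token_ids].detach().cpu(),
    }
    save_file(embeddings, f"{args.output_dir}/learned_embeds.safetensors")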
Linoy Tsaban
880c0fdd36
[advanced dreambooth lora training script][bug_fix] change token_abstraction type to str (#6040)
* improve help tags
* style fix
* change token_abstraction type to string;
support multiple concepts for pivotal tuning via a comma-separated string.
* style fixup
* changed logger to warning (not yet available)
* moved the token_abstraction parsing into the same block where the identifier-to-token mapping is created
---------
Co-authored-by: Linoy <linoy@huggingface.co>
2023-12-04 18:38:44 +01:00
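A sketch of the comma-separated parsing introduced here; the exact mapping shape is an assumption, but the idea is one placeholder string per concept, each expanded into num_new_tokens_per_abstraction fresh tokens.

    # "TOK1,TOK2" -> {"TOK1": ["<s0>", "<s1>"], "TOK2": ["<s2>", "<s3>"]}
    token_abstraction_list = [t.strip() for t in args.token_abstraction.split(",")]

    token_abstraction_dict = {}
    token_idx = 0
    for abstraction in token_abstraction_list:
        new_tokens = [f"<s{token_idx + i}>" for i in range(args.num_new_tokens_per_abstraction)]
        token_abstraction_dict[abstraction] = new_tokens
        token_idx += args.num_new_tokens_per_abstraction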
Levi McCallum
e185084a5d
Add variant argument to dreambooth lora sdxl advanced (#6021)
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-04 12:04:15 +01:00
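The new flag maps directly onto the variant argument of from_pretrained; a minimal sketch:

    import torch
    from diffusers import UNet2DConditionModel

    # e.g. --variant fp16 picks up the fp16 weight files when the repo ships them
    unet = UNet2DConditionModel.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        subfolder="unet",
        variant="fp16",
        torch_dtype=torch.float16,
    )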
Linoy Tsaban
b785a155d6
[advanced dreambooth lora sdxl training script] improve help tags (#6035)
* improve help tags
* style fix
---------
Co-authored-by: Linoy <linoy@huggingface.co>
2023-12-04 09:41:25 +05:30
Patrick von Platen
dadd55fb36
Post Release: v0.24.0 (#5985)
* Post Release: v0.24.0
* postpone deprecation
* postpone deprecation
* Add model_index.json
2023-12-01 18:43:44 +01:00
Linoy Tsaban
c1e4529541
[advanced_dreambooth_lora_sdxl_training_script] readme fix (#6019)
readme
2023-12-01 15:14:57 +01:00
Linoy Tsaban
d29d97b616
[examples/advanced_diffusion_training] bug fixes and improvements for LoRA Dreambooth SDXL advanced training script (#5935)
* imports and readme bug fixes
* bug fix - ensures text_encoder params are dtype==float32 (when using pivotal tuning) even if the rest of the model is loaded in fp16
* added pivotal tuning to readme
* mapping token identifier to new inserted token in validation prompt (if used)
* correct default value of --train_text_encoder_frac
* change default value of --adam_weight_decay_text_encoder
* validation prompt generations when using pivotal tuning bug fix
* style fix
* textual inversion embeddings name change
* style fix
* bug fix - stopping text encoder optimization halfway through training
* readme - will include token abstraction and newly inserted tokens when using pivotal tuning
- added type to --num_new_tokens_per_abstraction
* style fix
---------
Co-authored-by: Linoy Tsaban <linoy@huggingface.co>
2023-12-01 14:18:43 +01:00
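A sketch of the dtype fix called out above, assuming two SDXL text encoders loaded in fp16: trainable parameters are upcast to float32 so the optimizer sees full-precision weights even in mixed-precision runs.

    import torch

    # cast only the trainable params up to fp32 for stable optimization
    for model in (text_encoder_one, text_encoder_two):
        for param in model.parameters():
            if param.requires_grad:
                param.data = param.data.to(torch.float32)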
Linoy Tsaban
0eeee618cf
Adds an advanced version of the SD-XL DreamBooth LoRA training script supporting pivotal tuning (#5883)
* sdxl dreambooth lora training script with pivotal tuning
* bug fix - args missing from parse_args
* code quality fixes
* comment out unnecessary code in the TokenEmbedding handler class
* fixup
---------
Co-authored-by: Linoy Tsaban <linoy@huggingface.co>
2023-11-22 16:27:56 +01:00
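Finally, a sketch of the pivotal-tuning core this script adds (a simplification of its token-embedding handler, with placeholder token names, assuming a transformers tokenizer and text_encoder are in scope): insert new tokens, resize the embedding matrix, and make the embedding rows trainable alongside the LoRA weights.

    inserting_tokens = ["<s0>", "<s1>"]  # placeholder tokens for the concept
    tokenizer.add_tokens(inserting_tokens)
    text_encoder.resize_token_embeddings(len(tokenizer))

    new_token_ids = tokenizer.convert_tokens_to_ids(inserting_tokens)

    # train the embedding matrix; the script masks gradients so that
    # only the newly added rows actually get updated
    text_encoder.get_input_embeddings().requires_grad_(True)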