Vinh H. Pham
b9d52fca1d
[train_lcm_distill_lora_sdxl.py] Fix the LR schedulers when num_train_epochs is passed in a distributed training env ( #8446 )
...
fix num_train_epochs
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-06-24 14:09:28 +05:30
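A minimal sketch of the pattern the #8446 fix describes, assuming the usual diffusers training-script context (`args`, `optimizer`, `train_dataloader`, `accelerator` already prepared): when only `num_train_epochs` is given, `max_train_steps` has to be derived from the per-process dataloader length before the scheduler is built, and the scheduler step counts are scaled by `accelerator.num_processes`.

```python
import math
from diffusers.optimization import get_scheduler

# Derive max_train_steps from num_train_epochs using the (per-process) dataloader length.
num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps)
if args.max_train_steps is None:
    args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch

lr_scheduler = get_scheduler(
    args.lr_scheduler,
    optimizer=optimizer,
    # Each process steps the scheduler once per optimizer step, hence the scaling.
    num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
    num_training_steps=args.max_train_steps * accelerator.num_processes,
)
```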
Sayak Paul
2e4841ef1e
post release 0.29.0 ( #8492 )
...
post release
2024-06-13 06:14:20 -10:00
Tolga Cangöz
98730c5dd7
Errata ( #8322 )
...
* Fix typos
* Trim trailing whitespaces
* Remove a trailing whitespace
* chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0
* Revert "chore: Update MarigoldDepthPipeline checkpoint to prs-eth/marigold-lcm-v1-0"
This reverts commit fd742b30b4 .
* pokemon -> naruto
* `DPMSolverMultistep` -> `DPMSolverMultistepScheduler`
* Improve Markdown stylization
* Improve style
* Improve style
* Refactor pipeline variable names for consistency
* up style
2024-06-05 13:59:09 -07:00
Sayak Paul
581d8aacf7
post release v0.28.0 ( #8286 )
...
* post release v0.28.0
* style
2024-05-29 07:13:22 +05:30
Alphin Jain
1221b28eac
Fix AttributeError in train_lcm_distill_lora_sdxl_wds.py ( #7923 )
...
Fix conditional teacher model check in train_lcm_distill_lora_sdxl_wds.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-05-16 15:49:54 +05:30
Mark Van Aken
be4afa0bb4
#7535 Update FloatTensor type hints to Tensor ( #7883 )
...
* find & replace all FloatTensors to Tensor
* apply formatting
* Update torch.FloatTensor to torch.Tensor in the remaining files
* formatting
* Fix the rest of the places where FloatTensor is used as well as in documentation
* formatting
* Update new file from FloatTensor to Tensor
2024-05-10 09:53:31 -10:00
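The #7883 change is mechanical; a small sketch of the before/after annotation style (the module shown is hypothetical):

```python
import torch

class ExampleBlock(torch.nn.Module):
    # Before: sample: torch.FloatTensor -> torch.FloatTensor (legacy float32 alias)
    # After:  torch.Tensor, the general tensor type used throughout diffusers
    def forward(self, sample: torch.Tensor) -> torch.Tensor:
        return sample
```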
Bagheera
8edaf3b79c
7879 - adjust documentation to use naruto dataset, since pokemon is now gated ( #7880 )
...
* 7879 - adjust documentation to use naruto dataset, since pokemon is now gated
* replace references to pokemon in docs
* more references to pokemon replaced
* Japanese translation update
---------
Co-authored-by: bghira <bghira@users.github.com >
2024-05-07 09:36:39 -07:00
dg845
0bee4d336b
LCM Distill Scripts Fix Bug when Initializing Target U-Net ( #6848 )
...
* Initialize target_unet from unet rather than teacher_unet so that we correctly add time_embedding.cond_proj if necessary.
* Use UNet2DConditionModel.from_config to initialize target_unet from unet's config.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-04-11 07:52:12 -10:00
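A sketch of the initialization pattern #6848 describes, assuming `unet` is the online (student) U-Net that already carries `time_embedding.cond_proj` when `time_cond_proj_dim` is set: the target network is built from the student's config, not the teacher's, so the two architectures match.

```python
from diffusers import UNet2DConditionModel

# Build the target network from the student's config so time_embedding.cond_proj
# is present when needed, then copy the student weights and freeze it.
target_unet = UNet2DConditionModel.from_config(unet.config)
target_unet.load_state_dict(unet.state_dict())
target_unet.requires_grad_(False)
```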
Bagheera
8e963d1c2a
7529 do not disable autocast for cuda devices ( #7530 )
...
* 7529 do not disable autocast for cuda devices
* Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
* add autocast fix to other training examples
* disable native_amp for dreambooth (sdxl)
* disable native_amp for pix2pix (sdxl)
* remove tests from remaining files
* disable native_amp on huggingface accelerator for every training example that uses it
* convert more usages of autocast to nullcontext, make style fixes
* make style fixes
* style.
* Empty-Commit
---------
Co-authored-by: bghira <bghira@users.github.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-04-02 20:15:06 +05:30
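A sketch of the pattern #7530 rolls out across the examples, assuming an `accelerator`, a loaded `pipeline`, and a `prompt` from the surrounding script: Accelerate's native AMP covers the training step, so explicit autocast is only entered around inference/validation, and it is no longer force-disabled on CUDA; only MPS falls back to a no-op context.

```python
from contextlib import nullcontext
import torch

if torch.backends.mps.is_available():
    # MPS has no autocast support; use a no-op context instead.
    autocast_ctx = nullcontext()
else:
    autocast_ctx = torch.autocast(accelerator.device.type)

with autocast_ctx:
    images = pipeline(prompt, num_inference_steps=4).images
```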
Sayak Paul
76de6a09fb
post-release v0.27.0 ( #7329 )
...
* post-release
* quality
2024-03-18 10:52:20 +05:30
Sayak Paul
4fbd310fd2
[Chore] switch to logger.warning ( #7289 )
...
switch to logger.warning
2024-03-13 06:56:43 +05:30
Sayak Paul
7c8cab313e
post release 0.26.2 ( #6885 )
...
* post release
* style
* Empty-Commit
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2024-02-09 07:36:38 -10:00
Sayak Paul
159885adc6
correct hub_token exposure behaviour (thanks to @bghira). ( #6918 )
2024-02-08 18:38:27 -10:00
Sayak Paul
30e5e81d58
change to 2024 in the license ( #6902 )
...
change to 2024
2024-02-08 08:19:31 -10:00
Srimanth Agastyaraju
a11b0f83b7
Fix: training resume from fp16 for SDXL Consistency Distillation ( #6840 )
...
* Fix: training resume from fp16 for lcm distill lora sdxl
* Fix coding quality - run linter
* Fix 1 - shift mixed precision cast before optimizer
* Fix 2 - State dict errors by removing load_lora_into_unet
* Update train_lcm_distill_lora_sdxl.py - Revert default cache dir to None
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-02-08 11:09:29 +05:30
Sayak Paul
a080f0d3a2
[Training Utils] create a utility for casting the lora params during training. ( #6553 )
...
create a utility for casting the lora params during training.
2024-01-15 13:51:13 +05:30
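The two entries above (#6840 and #6553) boil down to one pattern: keep the frozen base weights in fp16/bf16 but upcast the trainable LoRA parameters to fp32 before the optimizer is created, so gradient updates and checkpoint resumes stay numerically stable. A minimal sketch using the utility added in #6553, assuming the script's usual `args` and a LoRA-equipped `unet`:

```python
import torch
from diffusers.training_utils import cast_training_params

if args.mixed_precision in ("fp16", "bf16"):
    # Only the trainable (LoRA) parameters are upcast; frozen weights stay in low precision.
    cast_training_params([unet], dtype=torch.float32)

params_to_optimize = [p for p in unet.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(params_to_optimize, lr=args.learning_rate)
```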
dg845
17cece072a
Fix bug in LCM Distillation Scripts when args.unet_time_cond_proj_dim is used ( #6523 )
...
* Fix bug where unet's time_cond_proj_dim is not set correctly if using args.unet_time_cond_proj_dim.
* make style
2024-01-11 08:21:07 +05:30
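A sketch of the corrected logic from #6523 (and the earlier #5893), assuming `teacher_unet` is already loaded: the student U-Net should only fall back to `args.unet_time_cond_proj_dim` when the teacher has no `time_cond_proj_dim` of its own.

```python
from diffusers import UNet2DConditionModel

time_cond_proj_dim = (
    teacher_unet.config.time_cond_proj_dim
    if teacher_unet.config.time_cond_proj_dim is not None
    else args.unet_time_cond_proj_dim
)
# Build the online (student) U-Net from the teacher config, overriding only the w-embedding size.
unet = UNet2DConditionModel.from_config(teacher_unet.config, time_cond_proj_dim=time_cond_proj_dim)
unet.load_state_dict(teacher_unet.state_dict(), strict=False)
```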
Sayak Paul
9d945b2b90
0.25.0 post release ( #6358 )
...
* post release
* style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2024-01-05 16:13:27 +05:30
dg845
f3d1333e02
Improve LCM(-LoRA) Distillation Scripts ( #6420 )
...
* Make WDS pipeline interpolation type configurable.
* Make the VAE encoding batch size configurable.
* Make lora_alpha and lora_dropout configurable for LCM LoRA scripts.
* Generalize scalings_for_boundary_conditions function and make the timestep scaling configurable.
* Make LoRA target modules configurable for LCM-LoRA scripts.
* Move resolve_interpolation_mode to src/diffusers/training_utils.py and make interpolation type configurable in non-WDS script.
* apply suggestions from review
2024-01-05 06:55:13 +05:30
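One of the #6420 changes makes the boundary-condition scalings take a configurable `timestep_scaling` instead of a hard-coded factor; a sketch of that generalized helper, following the consistency-model parametrization the scripts use:

```python
def scalings_for_boundary_conditions(timestep, sigma_data=0.5, timestep_scaling=10.0):
    # c_skip -> 1 and c_out -> 0 as timestep -> 0, enforcing the consistency boundary condition.
    scaled_timestep = timestep_scaling * timestep
    c_skip = sigma_data**2 / (scaled_timestep**2 + sigma_data**2)
    c_out = scaled_timestep / (scaled_timestep**2 + sigma_data**2) ** 0.5
    return c_skip, c_out
```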
Sayak Paul
1ac07d8a8d
[Training examples] Follow up of #6306 ( #6346 )
...
* add to dreambooth lora.
* add: t2i lora.
* add: sdxl t2i lora.
* style
* lcm lora sdxl.
* unwrap
* fix: enable_adapters().
2023-12-28 07:37:50 +05:30
Andy W
43672b4a22
Fix "push_to_hub only create repo in consistency model lora SDXL training script" ( #6102 )
...
* fix
* style fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-27 15:25:19 +05:30
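A sketch of the shape of the #6102 fix, assuming the script's usual `args`: `create_repo` alone only reserves the repository, so the trained weights still have to be uploaded at the end of training (the fallback repo name below is a placeholder).

```python
from huggingface_hub import create_repo, upload_folder

repo_id = create_repo(repo_id=args.hub_model_id or "my-lcm-lora-sdxl", exist_ok=True).repo_id

# ... training, then saving the LoRA weights into args.output_dir ...

upload_folder(
    repo_id=repo_id,
    folder_path=args.output_dir,
    commit_message="End of training",
)
```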
dg845
9df3d84382
Fix LCM distillation bug when creating the guidance scale embeddings using multiple GPUs. ( #6279 )
...
Fix bug when creating the guidance embeddings using multiple GPUs.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-27 14:25:21 +05:30
Sayak Paul
6683f97959
[Training] Add datasets version of LCM LoRA SDXL ( #5778 )
...
* add: script to train lcm lora for sdxl with 🤗 datasets
* suit up the args.
* remove comments.
* fix num_update_steps
* fix batch unmarshalling
* fix num_update_steps_per_epoch
* fix: dataloading.
* fix microconditions.
* unconditional predictions debug
* fix batch size.
* no need to use use_auth_token
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com >
* make vae encoding batch size an arg
* final serialization in kohya
* style
* state dict rejigging
* feat: no separate teacher unet.
* debug
* fix state dict serialization
* debug
* debug
* debug
* remove prints.
* remove kohya utility and make style
* fix serialization
* fix
* add test
* add peft dependency.
* add: peft
* remove peft
* autocast device determination from accelerator
* autocast
* reduce lora rank.
* remove unneeded space
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com >
* style
* remove prompt dropout.
* also save in native diffusers ckpt format.
* debug
* debug
* debug
* better formation of the null embeddings.
* remove space.
* autocast fixes.
* autocast fix.
* hacky
* remove lora_sayak
* Apply suggestions from code review
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com >
* style
* make log validation leaner.
* move back enabled in.
* fix: log_validation call.
* add: checkpointing tests
* taking my chances to see if disabling autocasting has any effect?
* start debugging
* name
* name
* name
* more debug
* more debug
* index
* remove index.
* print length
* print length
* print length
* move unet.train() after add_adapter()
* disable some prints.
* enable_adapters() manually.
* remove prints.
* some changes.
* fix params_to_optimize
* more fixes
* debug
* debug
* remove print
* disable grad for certain contexts.
* Add support for IPAdapterFull (#5911 )
* Add support for IPAdapterFull
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Fix a bug in `add_noise` function (#6085 )
* fix
* copies
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
* [Advanced Diffusion Script] Add Widget default text (#6100 )
add widget
* [Advanced Training Script] Fix pipe example (#6106 )
* IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901 )
* adapter for StableDiffusionControlNetImg2ImgPipeline
* fix-copies
* fix-copies
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* IP adapter support for most pipelines (#5900 )
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
* update tests
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
* revert changes to sd_attend_and_excite and sd_upscale
* make style
* fix broken tests
* update ip-adapter implementation to latest
* apply suggestions from review
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* fix: lora_alpha
* make vae casting conditional/
* param upcasting
* propagate comments from https://github.com/huggingface/diffusers/pull/6145
Co-authored-by: dg845 <dgu8957@gmail.com >
* [Peft] fix saving / loading when unet is not "unet" (#6046 )
* [Peft] fix saving / loading when unet is not "unet"
* Update src/diffusers/loaders/lora.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* undo stablediffusion-xl changes
* use unet_name to get unet for lora helpers
* use unet_name
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* [Wuerstchen] fix fp16 training and correct lora args (#6245 )
fix fp16 training
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* [docs] fix: animatediff docs (#6339 )
fix: animatediff docs
* add: note about the new script in readme_sdxl.
* Revert "[Peft] fix saving / loading when unet is not "unet" (#6046 )"
This reverts commit 4c7e983bb5 .
* Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245 )"
This reverts commit 0bb9cf0216 .
* Revert "[docs] fix: animatediff docs (#6339 )"
This reverts commit 11659a6f74 .
* remove tokenize_prompt().
* assistive comments around enable_adapters() and disable_adapters().
---------
Co-authored-by: Suraj Patil <surajp815@gmail.com >
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com >
Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com >
Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com >
Co-authored-by: Aryan V S <contact.aryanvs@gmail.com >
Co-authored-by: dg845 <dgu8957@gmail.com >
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com >
2023-12-26 21:22:05 +05:30
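The datasets-based script added in #5778 attaches LoRA adapters to the U-Net with 🤗 PEFT; a minimal sketch of that setup, assuming the script's `args` and a loaded `unet` (the target-module list shown is an illustrative subset of the attention/projection layers the LCM-LoRA scripts train, not the exact list):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=args.lora_rank,
    lora_alpha=args.lora_alpha,
    lora_dropout=args.lora_dropout,
    target_modules=[
        "to_q", "to_k", "to_v", "to_out.0",
        "proj_in", "proj_out",
        "time_emb_proj", "conv1", "conv2",
    ],
)
unet.add_adapter(lora_config)
```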
dg845
a3d31e3a3e
Change LCM-LoRA README Script Example Learning Rates to 1e-4 ( #6304 )
...
Change README LCM-LoRA example learning rates to 1e-4.
2023-12-25 21:29:20 +05:30
dg845
49db233b35
Clean Up Comments in LCM(-LoRA) Distillation Scripts. ( #6145 )
...
* Clean up comments in LCM(-LoRA) distillation scripts.
* Calculate predicted source noise noise_pred correctly for all prediction_types.
* make style
* apply suggestions from review
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-15 18:18:16 +05:30
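A sketch of the per-`prediction_type` conversions the #6145 cleanup touches, using the standard DDPM relations; here `alphas` and `sigmas` are the per-timestep sqrt(alpha_bar) and sqrt(1 - alpha_bar) already broadcast to the sample shape (the real script helpers also handle that gathering step).

```python
def get_predicted_original_sample(model_output, sample, alphas, sigmas, prediction_type):
    # Recover x_0 from the model output for each supported parametrization.
    if prediction_type == "epsilon":
        return (sample - sigmas * model_output) / alphas
    if prediction_type == "sample":
        return model_output
    if prediction_type == "v_prediction":
        return alphas * sample - sigmas * model_output
    raise ValueError(f"Unsupported prediction_type: {prediction_type}")


def get_predicted_noise(model_output, sample, alphas, sigmas, prediction_type):
    # Recover the source noise, needed when the target is expressed in epsilon space.
    if prediction_type == "epsilon":
        return model_output
    if prediction_type == "sample":
        return (sample - alphas * model_output) / sigmas
    if prediction_type == "v_prediction":
        return alphas * model_output + sigmas * sample
    raise ValueError(f"Unsupported prediction_type: {prediction_type}")
```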
Lucain
75ada25048
Harmonize HF environment variables + deprecate use_auth_token ( #6066 )
...
* Harmonize HF environment variables + deprecate use_auth_token
* fix import
* fix
2023-12-06 22:22:31 +01:00
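A sketch of the migration #6066 deprecates, with an example model id and assuming a logged-in environment: pass `token=` (or rely on the `HF_TOKEN` environment variable / cached login) instead of `use_auth_token=`.

```python
from diffusers import DiffusionPipeline

# Deprecated: DiffusionPipeline.from_pretrained(model_id, use_auth_token=True)
# Preferred: the harmonized `token` argument (or simply the HF_TOKEN env var).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", token=True
)
```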
Pedro Cuenca
ab6672fecd
Use CC12M for LCM WDS training example ( #5908 )
...
* Fix SD scripts - there are only 2 items per batch
* Adjustments to make the SDXL scripts work with other datasets
* Use public webdataset dataset for examples
* make style
* Minor tweaks to the readmes.
* Stress that the dataset is illustrative.
2023-12-06 10:35:36 +01:00
Patrick von Platen
dadd55fb36
Post Release: v0.24.0 ( #5985 )
...
* Post Release: v0.24.0
* postpone deprecation
* postpone deprecation
* Add model_index.json
2023-12-01 18:43:44 +01:00
dg845
07eac4d65a
Fix LCM Stable Diffusion distillation bug related to parsing unet_time_cond_proj_dim ( #5893 )
...
* Fix bug related to parsing unet_time_cond_proj_dim.
* Fix analogous bug in the SD-XL LCM distillation script.
2023-11-27 13:00:40 +01:00
Suraj Patil
db2d8e76f8
Add LCM Scripts ( #5727 )
...
* add lcm scripts
Co-authored-by: dg845 <dgu8957@gmail.com>
2023-11-09 17:29:12 +01:00
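The scripts added in #5727 condition the distilled U-Net on the guidance scale w through an embedding fed to `time_embedding.cond_proj`; a self-contained sketch of that sinusoidal construction (treat it as illustrative of what the scripts and `LCMScheduler` do, not as the canonical implementation):

```python
import torch

def guidance_scale_embedding(w, embedding_dim=512, dtype=torch.float32):
    # Sinusoidal embedding of the guidance scale w (one scalar per batch element).
    assert len(w.shape) == 1
    w = w * 1000.0
    half_dim = embedding_dim // 2
    emb = torch.log(torch.tensor(10000.0)) / (half_dim - 1)
    emb = torch.exp(torch.arange(half_dim, dtype=dtype) * -emb)
    emb = w.to(dtype)[:, None] * emb[None, :]
    emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1)
    if embedding_dim % 2 == 1:
        emb = torch.nn.functional.pad(emb, (0, 1))
    assert emb.shape == (w.shape[0], embedding_dim)
    return emb
```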