Dhruv Nair
c1e8bdf1d4
Move ControlNetXS into Community Folder ( #6316 )
* update
* update
* update
* update
* update
* make style
* remove docs
* update
* move to research folder.
* fix-copies
* remove _toctree entry.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-27 08:15:23 +05:30
Sayak Paul
78b87dc25a
[LoRA] make LoRAs trained with peft loadable when peft isn't installed ( #6306 )
* spit diffusers-native format from the get go.
* rejig the peft_to_diffusers mapping.
2023-12-27 08:01:10 +05:30
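The idea in this commit, serializing in the diffusers-native key layout from the start so that loading never requires peft, can be sketched as a plain state-dict key remap. This is an illustrative sketch only; the suffix pairs below are an assumption, not the exact mapping table the PR ships.

```python
# Hypothetical sketch: remap peft-style LoRA keys (lora_A/lora_B) to a
# diffusers-native layout (lora.down/lora.up) at save time, so loading
# the file later needs no peft import. Suffixes are illustrative.
PEFT_TO_DIFFUSERS = {
    ".lora_A.weight": ".lora.down.weight",
    ".lora_B.weight": ".lora.up.weight",
}

def peft_to_diffusers(state_dict):
    remapped = {}
    for key, value in state_dict.items():
        for peft_suffix, diffusers_suffix in PEFT_TO_DIFFUSERS.items():
            if key.endswith(peft_suffix):
                key = key[: -len(peft_suffix)] + diffusers_suffix
                break
        remapped[key] = value
    return remapped
```

Because the remap happens once at save time, every existing diffusers loader works unchanged on the resulting file.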
Will Berman
0af12f1f8a
amused update links to new repo ( #6344 )
* amused update links to new repo
* lint
2023-12-26 22:46:28 +01:00
priprapre
fa31704420
[SDXL-IP2P] Update README_sdxl, Replace the link for wandb log with the correct run ( #6270 )
Replace the link for wandb log with the correct run
2023-12-26 21:13:11 +01:00
Sayak Paul
6683f97959
[Training] Add datasets version of LCM LoRA SDXL ( #5778 )
* add: script to train lcm lora for sdxl with 🤗 datasets
* suit up the args.
* remove comments.
* fix num_update_steps
* fix batch unmarshalling
* fix num_update_steps_per_epoch
* fix: dataloading.
* fix microconditions.
* unconditional predictions debug
* fix batch size.
* no need to use use_auth_token
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com >
* make vae encoding batch size an arg
* final serialization in kohya
* style
* state dict rejigging
* feat: no separate teacher unet.
* debug
* fix state dict serialization
* debug
* debug
* debug
* remove prints.
* remove kohya utility and make style
* fix serialization
* fix
* add test
* add peft dependency.
* add: peft
* remove peft
* autocast device determination from accelerator
* autocast
* reduce lora rank.
* remove unneeded space
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com >
* style
* remove prompt dropout.
* also save in native diffusers ckpt format.
* debug
* debug
* debug
* better formation of the null embeddings.
* remove space.
* autocast fixes.
* autocast fix.
* hacky
* remove lora_sayak
* Apply suggestions from code review
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com >
* style
* make log validation leaner.
* move back enabled in.
* fix: log_validation call.
* add: checkpointing tests
* taking my chances to see if disabling autocasting has any effect?
* start debugging
* name
* name
* name
* more debug
* more debug
* index
* remove index.
* print length
* print length
* print length
* move unet.train() after add_adapter()
* disable some prints.
* enable_adapters() manually.
* remove prints.
* some changes.
* fix params_to_optimize
* more fixes
* debug
* debug
* remove print
* disable grad for certain contexts.
* Add support for IPAdapterFull (#5911 )
* Add support for IPAdapterFull
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Fix a bug in `add_noise` function (#6085 )
* fix
* copies
---------
Co-authored-by: yiyixuxu <yixu310@gmail,com>
* [Advanced Diffusion Script] Add Widget default text (#6100 )
add widget
* [Advanced Training Script] Fix pipe example (#6106 )
* IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901 )
* adapter for StableDiffusionControlNetImg2ImgPipeline
* fix-copies
* fix-copies
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* IP adapter support for most pipelines (#5900 )
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
* update tests
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
* revert changes to sd_attend_and_excite and sd_upscale
* make style
* fix broken tests
* update ip-adapter implementation to latest
* apply suggestions from review
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* fix: lora_alpha
* make vae casting conditional.
* param upcasting
* propagate comments from https://github.com/huggingface/diffusers/pull/6145
Co-authored-by: dg845 <dgu8957@gmail.com >
* [Peft] fix saving / loading when unet is not "unet" (#6046 )
* [Peft] fix saving / loading when unet is not "unet"
* Update src/diffusers/loaders/lora.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* undo stablediffusion-xl changes
* use unet_name to get unet for lora helpers
* use unet_name
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* [Wuerstchen] fix fp16 training and correct lora args (#6245 )
fix fp16 training
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* [docs] fix: animatediff docs (#6339 )
fix: animatediff docs
* add: note about the new script in readme_sdxl.
* Revert "[Peft] fix saving / loading when unet is not "unet" (#6046 )"
This reverts commit 4c7e983bb5 .
* Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245 )"
This reverts commit 0bb9cf0216 .
* Revert "[docs] fix: animatediff docs (#6339 )"
This reverts commit 11659a6f74 .
* remove tokenize_prompt().
* assistive comments around enable_adapters() and disable_adapters().
---------
Co-authored-by: Suraj Patil <surajp815@gmail.com >
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com >
Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com >
Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com >
Co-authored-by: Aryan V S <contact.aryanvs@gmail.com >
Co-authored-by: dg845 <dgu8957@gmail.com >
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com >
2023-12-26 21:22:05 +05:30
Kashif Rasul
35b81fffae
[Wuerstchen] fix fp16 training and correct lora args ( #6245 )
fix fp16 training
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-26 11:40:04 +01:00
dg845
a3d31e3a3e
Change LCM-LoRA README Script Example Learning Rates to 1e-4 ( #6304 )
Change README LCM-LoRA example learning rates to 1e-4.
2023-12-25 21:29:20 +05:30
Jianqi Pan
84c403aedb
fix: cannot set guidance_scale ( #6326 )
fix: set guidance_scale
2023-12-25 21:16:57 +05:30
Sayak Paul
f4b0b26f7e
[Tests] Speed up example tests ( #6319 )
* remove validation args from textual inversion tests
* reduce number of train steps in textual inversion tests
* fix: directories.
* debug
* fix: directories.
* remove validation tests from textual inversion
* try reducing the time of test_text_to_image_checkpointing_use_ema
* fix: directories
* speed up test_text_to_image_checkpointing
* speed up test_text_to_image_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* fix
* speed up test_instruct_pix2pix_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* set checkpoints_total_limit to 2.
* test_text_to_image_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints speed up
* speed up test_unconditional_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* debug
* fix: directories.
* speed up test_instruct_pix2pix_checkpointing_checkpoints_total_limit
* speed up: test_controlnet_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* speed up test_controlnet_sdxl
* speed up dreambooth tests
* speed up test_dreambooth_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* speed up test_custom_diffusion_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* speed up test_text_to_image_lora_sdxl_text_encoder_checkpointing_checkpoints_total_limit
* speed up # checkpoint-2 should have been deleted
* speed up examples/text_to_image/test_text_to_image.py::TextToImage::test_text_to_image_checkpointing_checkpoints_total_limit
* additional speed ups
* style
2023-12-25 19:50:48 +05:30
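Many of the tests sped up here exercise `--checkpoints_total_limit`. The pruning behavior those `*_checkpoints_total_limit` tests assert can be sketched as below; this is a sketch of the pattern under the assumption (matching the test names) that the scripts delete the oldest `checkpoint-<step>` directories before saving a new one.

```python
import re

def checkpoints_to_delete(existing_dirs, total_limit):
    """Sketch: given directory names in an output folder, return the
    oldest checkpoint-<step> dirs to remove so that, after the next
    save, at most `total_limit` checkpoints remain."""
    ckpts = sorted(
        (d for d in existing_dirs if re.fullmatch(r"checkpoint-\d+", d)),
        key=lambda d: int(d.rsplit("-", 1)[1]),
    )
    excess = len(ckpts) - total_limit + 1  # +1 leaves room for the new save
    return ckpts[:excess] if excess > 0 else []
```

Setting a small `total_limit` (the commit uses 2) keeps disk usage, and therefore test time, bounded regardless of how many checkpointing steps the test runs.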
mwkldeveloper
2d43094ffc
fix RuntimeError: Input type (float) and bias type (c10::Half) should be the same in train_text_to_image_lora.py ( #6259 )
* fix RuntimeError: Input type (float) and bias type (c10::Half) should be the same
* format source code
* format code
* remove the autocast blocks within the pipeline
* add autocast blocks to pipeline caller in train_text_to_image_lora.py
2023-12-24 14:34:35 +05:30
Sayak Paul
90b9479903
[LoRA PEFT] fix LoRA loading so that correct alphas are parsed ( #6225 )
* initialize alpha too.
* add: test
* remove config parsing
* store rank
* debug
* remove faulty test
2023-12-24 09:59:41 +05:30
apolinário
df76a39e1b
Fix Prodigy optimizer in SDXL Dreambooth script ( #6290 )
* Fix ProdigyOPT in SDXL Dreambooth script
* style
* style
2023-12-22 06:42:04 -06:00
Bingxin Ke
3369bc810a
[Community Pipeline] Add Marigold Monocular Depth Estimation ( #6249 )
* [Community Pipeline] Add Marigold Monocular Depth Estimation
- add single-file pipeline
- update README
* fix format - add one blank line
* format script with ruff
* use direct image link in example code
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-22 15:41:46 +05:30
Will Berman
4039815276
open muse ( #5437 )
amused
rename
Update docs/source/en/api/pipelines/amused.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
AdaLayerNormContinuous default values
custom micro conditioning
micro conditioning docs
put lookup from codebook in constructor
fix conversion script
remove manual fused flash attn kernel
add training script
temp remove training script
add dummy gradient checkpointing func
clarify temperatures is an instance variable by setting it
remove additional SkipFF block args
hardcode norm args
rename tests folder
fix paths and samples
fix tests
add training script
training readme
lora saving and loading
non-lora saving/loading
some readme fixes
guards
Update docs/source/en/api/pipelines/amused.md
Co-authored-by: Suraj Patil <surajp815@gmail.com >
Update examples/amused/README.md
Co-authored-by: Suraj Patil <surajp815@gmail.com >
Update examples/amused/train_amused.py
Co-authored-by: Suraj Patil <surajp815@gmail.com >
vae upcasting
add fp16 integration tests
use tuple for micro cond
copyrights
remove casts
delegate to torch.nn.LayerNorm
move temperature to pipeline call
upsampling/downsampling changes
2023-12-21 11:40:55 -08:00
YShow
35a969d297
[Training] remove deprecated method from lora scripts again ( #6266 )
* remove deprecated method from lora scripts
* check code quality
2023-12-21 14:17:52 +05:30
lvzi
6ca9c4af05
fix: unscale fp16 gradient problem & potential error ( #6086 ) ( #6231 )
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-21 09:09:26 +05:30
dependabot[bot]
0532cece97
Bump transformers from 4.34.0 to 4.36.0 in /examples/research_projects/realfill ( #6255 )
Bump transformers in /examples/research_projects/realfill
Bumps [transformers](https://github.com/huggingface/transformers ) from 4.34.0 to 4.36.0.
- [Release notes](https://github.com/huggingface/transformers/releases )
- [Commits](https://github.com/huggingface/transformers/compare/v4.34.0...v4.36.0 )
---
updated-dependencies:
- dependency-name: transformers
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com >
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-21 09:03:17 +05:30
hako-mikan
ff43dba7ea
[Fix] Fix Regional Prompting Pipeline ( #6188 )
* Update regional_prompting_stable_diffusion.py
* reformat
* reformat
* reformat
* reformat
* reformat
* reformat
* reformat
* reformat
* reformat
* reformat
* reformat
* reformat
* Update regional_prompting_stable_diffusion.py
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-20 10:37:19 +05:30
Sayak Paul
288ceebea5
[T2I LoRA training] fix: unscale fp16 gradient problem ( #6119 )
* fix: unscale fp16 gradient problem
* fix for dreambooth lora sdxl
* make the type-casting conditional.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-12-19 09:54:17 +05:30
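The gradient-unscaling error this commit addresses comes from `GradScaler.unscale_` refusing fp16 gradients. The pattern behind the fix, keeping frozen base weights in fp16 but holding only the trainable LoRA parameters in fp32, can be sketched as follows; the helper name is ours, not the one in the scripts.

```python
import torch

def upcast_trainable_params(model, dtype=torch.float32):
    # Sketch of the fix's pattern (helper name illustrative): frozen
    # fp16 weights stay as-is, but trainable params are upcast so the
    # GradScaler never has to unscale fp16 gradients.
    for param in model.parameters():
        if param.requires_grad:
            param.data = param.data.to(dtype)

# usage sketch: a base layer loaded in fp16 with one trainable param
model = torch.nn.Linear(4, 4).half()
model.weight.requires_grad_(False)  # pretend the base weight is frozen
upcast_trainable_params(model)      # only the trainable bias upcasts
```

The commit also makes the cast conditional, so nothing is touched when training already runs in full precision.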
Haofan Wang
7d0a47f387
Update train_text_to_image_lora.py ( #6144 )
* Update train_text_to_image_lora.py
* Fix typo?
---------
Co-authored-by: M. Tolga Cangöz <46008593+standardAI@users.noreply.github.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-18 19:33:05 +01:00
Aryan V S
67b3d3267e
Support img2img and inpaint in lpw-xl ( #6114 )
* add img2img and inpaint support to lpw-xl
* update community README
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-18 19:19:11 +01:00
TilmannR
4e77056885
Update README.md ( #6191 )
Typo: The script for LoRA training is `train_text_to_image_lora_prior.py` not `train_text_to_image_prior_lora.py`.
Alternatively you could rename the file and keep the README.md unchanged.
2023-12-18 19:08:29 +01:00
Sayak Paul
b98b314b7a
[Training] remove deprecated method from lora scripts. ( #6207 )
remove deprecated method from lora scripts.
2023-12-18 15:52:43 +05:30
Yudong Jin
49644babd3
Fix the test script in examples/text_to_image/README.md ( #6209 )
* Update examples/text_to_image/README.md
* Update examples/text_to_image/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-18 15:36:00 +05:30
dg845
49db233b35
Clean Up Comments in LCM(-LoRA) Distillation Scripts. ( #6145 )
* Clean up comments in LCM(-LoRA) distillation scripts.
* Calculate predicted source noise noise_pred correctly for all prediction_types.
* make style
* apply suggestions from review
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-15 18:18:16 +05:30
Linoy Tsaban
29dfe22a8e
[advanced dreambooth lora sdxl training script] load pipeline for inference only if validation prompt is used ( #6171 )
* load pipeline for inference only if validation prompt is used
* move things outside
* load pipeline for inference only if validation prompt is used
* fix readme when validation prompt is used
---------
Co-authored-by: linoytsaban <linoy@huggingface.co >
Co-authored-by: apolinário <joaopaulo.passos@gmail.com >
2023-12-14 11:45:33 -06:00
Monohydroxides
c46711e895
[Community] Add SDE Drag pipeline ( #6105 )
* Add community pipeline: sde_drag.py
* Update README.md
* Update README.md
Update example code and visual example
* Update sde_drag.py
Update code example.
2023-12-14 20:47:20 +05:30
M. Tolga Cangöz
0a401b95b7
[Docs] Fix typos ( #6122 )
Fix typos and trim trailing whitespaces
2023-12-11 10:55:28 -08:00
apolinário
2a111bc9fe
[Advanced Training Script] Fix pipe example ( #6106 )
2023-12-08 15:56:35 +01:00
apolinário
16e6997f0d
[Advanced Diffusion Script] Add Widget default text ( #6100 )
add widget
2023-12-08 12:45:27 +01:00
Aryan V S
978dec9014
[Community] AnimateDiff + Controlnet Pipeline ( #5928 )
* begin work on animatediff + controlnet pipeline
* complete todos, uncomment multicontrolnet, input checks
Co-Authored-By: EdoardoBotta <botta.edoardo@gmail.com >
* update
Co-Authored-By: EdoardoBotta <botta.edoardo@gmail.com >
* add example
* update community README
* Update examples/community/README.md
---------
Co-authored-by: EdoardoBotta <botta.edoardo@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-12-06 21:01:41 -10:00
Younes Belkada
c2717317f0
[PEFT] Adapt example scripts to use PEFT ( #5388 )
* adapt example scripts to use PEFT
* Update examples/text_to_image/train_text_to_image_lora.py
* fix
* add for SDXL
* oops
* make sure to install peft
* fix
* fix
* fix dreambooth and lora
* more fixes
* add peft to requirements.txt
* fix
* final fix
* add peft version in requirements
* remove comment
* change variable names
* add few lines in readme
* add to reqs
* style
* fix issues
* fix lora dreambooth xl tests
* init_lora_weights to gaussian and add out proj where missing
* amend requirements.
* amend requirements.txt
* add correct peft versions
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-07 09:39:29 +05:30
Lucain
75ada25048
Harmonize HF environment variables + deprecate use_auth_token ( #6066 )
* Harmonize HF environment variables + deprecate use_auth_token
* fix import
* fix
2023-12-06 22:22:31 +01:00
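The harmonization means diffusers now reads the same environment variables as the wider Hugging Face stack (one cache root, one token), while call sites take `token=` instead of the deprecated `use_auth_token=`. A minimal sketch of the harmonized lookup, assuming the shared `huggingface_hub` variable names and the usual resolution order:

```python
import os

# Sketch of the shared lookup (names per huggingface_hub conventions;
# the exact fallback chain here is an assumption): one HF_HOME-rooted
# cache and one HF_TOKEN replace library-specific variables.
def hf_cache_home():
    xdg = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
    return os.environ.get("HF_HOME", os.path.join(xdg, "huggingface"))

def hf_token():
    # call sites now pass token=..., not use_auth_token=...
    return os.environ.get("HF_TOKEN")
```

With the variables centralized this way, one `HF_TOKEN` export authenticates downloads across diffusers, transformers, and huggingface_hub alike.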
apolinário
466d32c442
[Advanced Diffusion Training] Cache latents to avoid VAE passes for every training step ( #6076 )
* add cache latents
* style
2023-12-06 14:46:53 +01:00
Pedro Cuenca
ab6672fecd
Use CC12M for LCM WDS training example ( #5908 )
* Fix SD scripts - there are only 2 items per batch
* Adjustments to make the SDXL scripts work with other datasets
* Use public webdataset dataset for examples
* make style
* Minor tweaks to the readmes.
* Stress that the database is illustrative.
2023-12-06 10:35:36 +01:00
apolinário
6e221334cd
[advanced_dreambooth_lora_sdxl_training_script] save embeddings locally fix ( #6058 )
* Update train_dreambooth_lora_sdxl_advanced.py
* remove global function args from dreamboothdataset class
* style
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-05 13:52:34 +01:00
Radamés Ajna
eacf5e34eb
Fix demofusion ( #6049 )
* Update pipeline_demofusion_sdxl.py
* Update README.md
2023-12-05 18:10:46 +05:30
Linoy Tsaban
880c0fdd36
[advanced dreambooth lora training script][bug_fix] change token_abstraction type to str ( #6040 )
* improve help tags
* style fix
* changes token_abstraction type to string.
support multiple concepts for pivotal using a comma separated string.
* style fixup
* changed logger to warning (not yet available)
* moved the token_abstraction parsing to be in the same block as where we create the mapping of identifier to token
---------
Co-authored-by: Linoy <linoy@huggingface.co >
2023-12-04 18:38:44 +01:00
RuoyiDu
c36f1c3160
[Community Pipeline] DemoFusion: Democratising High-Resolution Image Generation With No $$$ ( #6022 )
* Add files via upload
* Update README.md
* Update pipeline_demofusion_sdxl.py
* Update pipeline_demofusion_sdxl.py
* Update examples/community/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-04 19:44:57 +05:30
Levi McCallum
e185084a5d
Add variant argument to dreambooth lora sdxl advanced ( #6021 )
...
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-04 12:04:15 +01:00
gujing
bf92e746c0
fix StableDiffusionTensorRT super args error ( #6009 )
2023-12-04 10:06:23 +05:30
Linoy Tsaban
b785a155d6
[advanced dreambooth lora sdxl training script] improve help tags ( #6035 )
* improve help tags
* style fix
---------
Co-authored-by: Linoy <linoy@huggingface.co >
2023-12-04 09:41:25 +05:30
Long(Tony) Lian
618260409f
LLMGroundedDiffusionPipeline: inherit from DiffusionPipeline and fix peft ( #6023 )
* LLMGroundedDiffusionPipeline: inherit from DiffusionPipeline and fix peft
* Use main in the revision in the examples
* Add "Copied from" statements in comments
* Fix formatting with ruff
2023-12-01 09:58:25 -10:00
Patrick von Platen
dadd55fb36
Post Release: v0.24.0 ( #5985 )
* Post Release: v0.24.0
* post pone deprecation
* post pone deprecation
* Add model_index.json
2023-12-01 18:43:44 +01:00
Patrick von Platen
0f55c17e17
fix style
2023-12-01 15:59:34 +00:00
hako-mikan
46c751e970
[Community Pipeline] Regional Prompting Pipeline ( #6015 )
* Update README.md
* Update README.md
* Add files via upload
* Update README.md
* Update examples/community/README.md
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-12-01 16:22:59 +01:00
Linoy Tsaban
c1e4529541
[advanced_dreambooth_lora_sdxl_training_script] readme fix ( #6019 )
readme
2023-12-01 15:14:57 +01:00
Linoy Tsaban
d29d97b616
[examples/advanced_diffusion_training] bug fixes and improvements for LoRA Dreambooth SDXL advanced training script ( #5935 )
* imports and readme bug fixes
* bug fix - ensures text_encoder params are dtype==float32 (when using pivotal tuning) even if the rest of the model is loaded in fp16
* added pivotal tuning to readme
* mapping token identifier to new inserted token in validation prompt (if used)
* correct default value of --train_text_encoder_frac
* change default value of --adam_weight_decay_text_encoder
* validation prompt generations when using pivotal tuning bug fix
* style fix
* textual inversion embeddings name change
* style fix
* bug fix - stopping text encoder optimization halfway
* readme - will include token abstraction and new inserted tokens when using pivotal tuning
- added type to --num_new_tokens_per_abstraction
* style fix
---------
Co-authored-by: Linoy Tsaban <linoy@huggingface.co >
2023-12-01 14:18:43 +01:00
Kristian Mischke
141cd52d56
Fix LLMGroundedDiffusionPipeline super class arguments ( #5993 )
* make `requires_safety_checker` a kwarg instead of a positional argument as it's more future-proof
* apply `make style` formatting edits
* add image_encoder to arguments and pass to super constructor
2023-11-30 10:15:14 -10:00
Kashif Rasul
01782c220e
[Wuerstchen] Adapt lora training example scripts to use PEFT ( #5959 )
* Adapt lora example scripts to use PEFT
* add to_out.0
2023-11-29 16:18:20 +01:00