Sayak Paul
63de23e3db
disable running peft non-peft lora test in the peft env. ( #6437 )
...
* disable running peft non-peft lora test in the peft env.
* Empty-Commit
2024-01-04 10:18:46 +05:30
Chi
2993257f2a
Better way to write the binarize() function. ( #6394 )
...
* I added a new docstring to the class. This makes it easier for other developers to understand what the class does and where it is used.
* Update src/diffusers/models/unet_2d_blocks.py
This change was suggested by the maintainer.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update src/diffusers/models/unet_2d_blocks.py
Add suggested text
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update unet_2d_blocks.py
Changed the "Parameter" heading to "Args".
* Update unet_2d_blocks.py
Set proper indentation in this file.
* Update unet_2d_blocks.py
A small change to the act_fun argument line.
* I ran the black command to reformat the code style
* Update unet_2d_blocks.py
Add a docstring similar to the one in the original diffusion repository.
* Better way to write the binarize function
* Solve check_code_quality error
* My mistake: I opened the pull request without reformatting the file
* Update image_processor.py
* remove extra variable and space
* Update image_processor.py
* Ran the ruff library to reformat my file
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-01-04 09:32:08 +05:30
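A minimal sketch of the threshold-based binarize() idea from #6394; this is an illustration of the technique, not the exact image_processor.py code:

```python
import numpy as np
import PIL.Image

def binarize(image: PIL.Image.Image) -> PIL.Image.Image:
    """Snap a grayscale mask to pure {0, 1} values with a 0.5 threshold."""
    arr = np.asarray(image.convert("L"), dtype=np.float32) / 255.0
    arr = np.where(arr < 0.5, 0.0, 1.0)  # hard threshold at 0.5
    return PIL.Image.fromarray((arr * 255).astype(np.uint8))
```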
Sayak Paul
aad18faa3e
Update README_sdxl.md to update the LR ( #6432 )
...
Update README_sdxl.md
2024-01-03 20:55:51 +05:30
Sayak Paul
d700140076
[LoRA deprecation] handle the rest of the deprecated LoRA functionality. ( #6426 )
...
* handle the rest of the deprecated LoRA functionality.
* fix: copies
* don't modify the UNet in-place.
* fix: temporal autoencoder.
* manually remove lora layers.
* don't copy unet.
* alright
* remove lora attn processors from unet3d
* fix: unet3d.
* style
* Empty-Commit
2024-01-03 20:54:09 +05:30
Sayak Paul
2e4dc3e25d
[LoRA] add: test to check if peft loras are loadable in non-peft envs. ( #6400 )
...
* add: test to check if peft loras are loadable in non-peft envs.
* add torch_device appropriately.
* fix: get_dummy_inputs().
* test logits.
* rename
* debug
* debug
* fix: generator
* new assertion values after fixing the seed.
* shape
* remove print statements and settle this.
* to update values.
* change values when lora config is initialized under a fixed seed.
* update colab link
* update notebook link
* sanity restored by getting the exact same values without peft.
2024-01-03 09:57:49 +05:30
YiYi Xu
3e2961f0b4
[doc] update inpaint doc to use apply_overlay ( #6364 )
...
add doc
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2024-01-02 11:16:36 -10:00
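A hedged sketch of the apply_overlay flow the updated doc describes; the argument order shown here is my reading of the VaeImageProcessor API, so verify it against the doc itself:

```python
import PIL.Image
from diffusers.image_processor import VaeImageProcessor

init_image = PIL.Image.new("RGB", (512, 512), "gray")   # original input image
mask_image = PIL.Image.new("L", (512, 512), 255)        # inpainting mask
generated = PIL.Image.new("RGB", (512, 512), "white")   # pipeline output

processor = VaeImageProcessor()
# Paste the generated content back onto the original image through the mask;
# crop_coords is the (x1, y1, x2, y2) region that was actually inpainted.
final = processor.apply_overlay(mask_image, init_image, generated, crop_coords=(0, 0, 512, 512))
```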
Vinh H. Pham
79c380bc80
Correct how apply_overlay reads crop_coords ( #6417 )
...
correct how the variables are read
2024-01-02 19:31:12 +01:00
Aryan V S
e30b661437
Update lpw_xl pipeline to latest diffusers ( #6411 )
...
* add clip_skip, freeu, qkv
* fix
* add ip-adapter support
* callback on step end
* update
* fix NoneType bug
* fix
* add guidance scale embedding
* add textual inversion
2024-01-02 16:28:45 +01:00
Linoy Tsaban
b4077af212
[bug fix] using snr gamma and prior preservation loss in the dreambooth lora sdxl training scripts ( #6356 )
...
* change timesteps used to calculate snr when --with_prior_preservation is enabled
* change timesteps used to calculate snr when --with_prior_preservation is enabled (canonical script)
* style
* revert canonical script to before snr gamma change
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-02 09:21:39 -06:00
Daniel Socek
9f2bff502e
[svd] fix noise_aug_strength type in svd pipe ( #6389 )
2024-01-02 14:45:07 +01:00
CyrusVorwald
0cb92717f9
add StableDiffusionXLControlNetInpaintPipeline to auto pipeline ( #6302 )
...
* add StableDiffusionXLControlNetInpaintPipeline to auto pipeline
* fixed style
2024-01-02 14:44:48 +01:00
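With #6302, the SDXL ControlNet inpaint pipeline becomes reachable through the auto-pipeline API; a minimal sketch (model IDs are illustrative):

```python
import torch
from diffusers import AutoPipelineForInpainting, ControlNetModel

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)
# Passing a controlnet now routes to StableDiffusionXLControlNetInpaintPipeline.
pipe = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
```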
Fabio Rigano
86714b72d0
Add unload_ip_adapter method ( #6192 )
...
* Add unload_ip_adapter method
* Update attn_processors with original layers
* Add test
* Use set_default_attn_processor
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-02 14:40:46 +01:00
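A short usage sketch of the new unload_ip_adapter(); the checkpoint names are the commonly used IP-Adapter weights, included here as an assumption:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
# ... run image-prompted generation ...
pipe.unload_ip_adapter()  # restores original attention processors, drops the image encoder
```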
Sayak Paul
61f6c5472a
[LoRA] Remove the use of deprecated LoRA functionalities such as LoRAAttnProcessor ( #6369 )
...
* start deprecating loraattn.
* fix
* wrap into unet_lora_state_dict
* utilize text_encoder_lora_params
* utilize text_encoder_attn_modules
* debug
* debug
* remove print
* don't use text encoder for test_stable_diffusion_lora
* load the procs.
* set_default_attn_processor
* fix: set_default_attn_processor call.
* fix: lora_components[unet_lora_params]
* checking for 3d.
* 3d.
* more fixes.
* debug
* debug
* debug
* debug
* more debug
* more debug
* more debug
* more debug
* more debug
* more debug
* hack.
* remove comments and prep for a PR.
* appropriate set_lora_weights()
* fix
* fix: test_unload_lora_sd
* fix: test_unload_lora_sd
* use default attention processors.
* debug
* debug nan
* debug nan
* debug nan
* use NaN instead of inf
* remove comments.
* fix: test_text_encoder_lora_state_dict_unchanged
* attention processor default
* default attention processors.
* default
* style
2024-01-02 18:14:04 +05:30
lookas
17546020fc
Fix #6409 ( #6410 )
...
* Update value_guided_sampling.py
Fix #6409
* Comply code style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-02 11:16:21 +05:30
2510
8a366b835c
Fix gradient-checkpointing option being ignored in SDXL+LoRA training. ( #6388 ) ( #6402 )
...
* Fix gradient-checkpointing option being ignored in SDXL+LoRA training. (#6388 )
* Fix gradient-checkpointing option being ignored in SD+LoRA training.
* Fix gradient checkpointing not being applied to the text encoders. (SDXL+LoRA)
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-01-01 08:51:04 +05:30
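For context, honoring the --gradient_checkpointing flag in these scripts amounts to calls like the following (a sketch, not the exact script code):

```python
from diffusers import UNet2DConditionModel
from transformers import CLIPTextModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
text_encoder = CLIPTextModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="text_encoder"
)

unet.enable_gradient_checkpointing()          # diffusers ModelMixin API
text_encoder.gradient_checkpointing_enable()  # transformers API
```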
Sayak Paul
61d223c884
add: CUDA graph details. ( #6408 )
2023-12-31 13:43:26 +05:30
apolinário
bf725e044e
Add new WebUI conversion state_dict_utils to __init__ utils ( #6404 )
...
* Add new state_dict_utils to __init__ utils
* style
---------
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2023-12-30 09:31:39 -06:00
apolinário
1622265e13
Add WebUI format support to Advanced Training Script ( #6403 )
...
* Add WebUI format support to Advanced Training Script
* style
---------
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2023-12-30 08:45:49 -06:00
apolinário
0b63ad5ad5
Create convert_diffusers_sdxl_lora_to_webui.py ( #6395 )
...
* Create convert_diffusers_sdxl_lora_to_webui.py
* Move some conversion logic to utils
* fix logging import
* Add usage example
---------
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2023-12-30 08:15:11 -06:00
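The conversion leans on the state-dict utilities exported in the commits above; a hedged sketch of turning a diffusers-format SDXL LoRA into the WebUI/Kohya layout (the utility names are my recollection of these PRs, and the file names are illustrative):

```python
from safetensors.torch import load_file, save_file
from diffusers.utils import convert_all_state_dict_to_peft, convert_state_dict_to_kohya

# Diffusers-format SDXL LoRA -> PEFT-style keys -> WebUI/Kohya key layout.
diffusers_sd = load_file("pytorch_lora_weights.safetensors")  # illustrative file name
peft_sd = convert_all_state_dict_to_peft(diffusers_sd)
kohya_sd = convert_state_dict_to_kohya(peft_sd)
save_file(kohya_sd, "sdxl_lora_webui.safetensors")
```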
Sayak Paul
6a376ceea2
[LoRA] remove unnecessary components from lora peft test suite ( #6401 )
...
remove unnecessary components from lora peft test suite
2023-12-30 18:25:40 +05:30
gzguevara
9f283b01d2
changed w&b report link ( #6387 )
2023-12-29 19:49:11 +05:30
Sayak Paul
203724e9d9
[Docs] add note on fp16 in fast diffusion ( #6380 )
...
add note on fp16
2023-12-29 09:38:50 +05:30
gzguevara
e7044a4221
multi-subject-dreambooth-inpainting with 🤗 datasets ( #6378 )
...
* files added
* fixing code quality
* fixing code quality
* fixing code quality
* fixing code quality
* sorted import block
* separated import wandb
* ruff on script
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-29 09:33:49 +05:30
Sayak Paul
034b39b8cb
[docs] add details concerning diffusers-specific bits. ( #6375 )
...
add details concerning diffusers-specific bits.
2023-12-28 23:12:49 +05:30
Sayak Paul
2db73f4a50
remove the delete-documentation trigger workflows. ( #6373 )
2023-12-28 18:26:14 +05:30
Adrian Punga
84d7faebe4
Fix support for MPS in KDPM2AncestralDiscreteScheduler ( #6365 )
...
Fix support for MPS
MPS doesn't support float64
2023-12-28 10:22:02 +01:00
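The underlying pattern, sketched here (the scheduler's actual code may differ in detail): keep timestep math in float32 on MPS, since the backend has no float64 kernels.

```python
import numpy as np
import torch

timesteps = np.linspace(0, 999, 50)[::-1].copy()
device = torch.device("mps") if torch.backends.mps.is_available() else torch.device("cpu")

# MPS does not support float64, so fall back to float32 there.
if device.type == "mps":
    ts = torch.from_numpy(timesteps).to(device, dtype=torch.float32)
else:
    ts = torch.from_numpy(timesteps).to(device)  # stays float64
```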
YiYi Xu
4c483deb90
[refactor embeddings] gligen + ip-adapter ( #6244 )
...
* refactor ip-adapter-imageproj, gligen
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2023-12-27 18:48:42 -10:00
Sayak Paul
1ac07d8a8d
[Training examples] Follow up of #6306 ( #6346 )
...
* add to dreambooth lora.
* add: t2i lora.
* add: sdxl t2i lora.
* style
* lcm lora sdxl.
* unwrap
* fix: enable_adapters().
2023-12-28 07:37:50 +05:30
apolinário
1fff527702
Fix keys for lora format on advanced training scripts ( #6361 )
...
fix keys for lora format on advanced training scripts
2023-12-27 11:38:03 -06:00
apolinário
645a62bf3b
Add PEFT to advanced training script ( #6294 )
...
* Fix ProdigyOPT in SDXL Dreambooth script
* style
* style
* Add PEFT to Advanced Training Script
* style
* style
* ✨ style ✨
* change order for logic operation
* add lora alpha
* style
* Align PEFT to new format
* Update train_dreambooth_lora_sdxl_advanced.py
Apply #6355 fix
---------
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com>
2023-12-27 10:00:32 -03:00
Dhruv Nair
6414d4e4f9
Fix chunking in SVD ( #6350 )
...
fix
2023-12-27 13:07:41 +01:00
Andy W
43672b4a22
Fix "push_to_hub only create repo in consistency model lora SDXL training script" ( #6102 )
...
* fix
* style fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-27 15:25:19 +05:30
dg845
9df3d84382
Fix LCM distillation bug when creating the guidance scale embeddings using multiple GPUs. ( #6279 )
...
Fix bug when creating the guidance embeddings using multiple GPUs.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-27 14:25:21 +05:30
Jianqi Pan
c751449011
fix: use retrieve_latents ( #6337 )
2023-12-27 10:44:26 +05:30
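For reference, retrieve_latents is the small pipeline helper that normalizes VAE encoder outputs; a sketch of the idea, mirroring the helper as it appears in the diffusers pipelines (treat details as assumptions):

```python
def retrieve_latents(encoder_output, generator=None, sample_mode="sample"):
    # AutoencoderKL.encode returns an object with a .latent_dist; sample it,
    # take its mode, or pass through pre-computed latents.
    if hasattr(encoder_output, "latent_dist") and sample_mode == "sample":
        return encoder_output.latent_dist.sample(generator)
    elif hasattr(encoder_output, "latent_dist") and sample_mode == "argmax":
        return encoder_output.latent_dist.mode()
    elif hasattr(encoder_output, "latents"):
        return encoder_output.latents
    raise AttributeError("Could not access latents of provided encoder_output")
```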
Dhruv Nair
c1e8bdf1d4
Move ControlNetXS into Community Folder ( #6316 )
...
* update
* update
* update
* update
* update
* make style
* remove docs
* update
* move to research folder.
* fix-copies
* remove _toctree entry.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-27 08:15:23 +05:30
Sayak Paul
78b87dc25a
[LoRA] make LoRAs trained with peft loadable when peft isn't installed ( #6306 )
...
* spit out the diffusers-native format from the get-go.
* rejig the peft_to_diffusers mapping.
2023-12-27 08:01:10 +05:30
Will Berman
0af12f1f8a
amused update links to new repo ( #6344 )
...
* amused update links to new repo
* lint
2023-12-26 22:46:28 +01:00
Justin Ruan
6e123688dc
Remove unused parameters and fixed FutureWarning ( #6317 )
...
* Remove unused parameters and fixed `FutureWarning`
* Fixed wrong config instance
* update unittest for `DDIMInverseScheduler`
2023-12-26 22:09:10 +01:00
YiYi Xu
f0a588b8e2
adding auto1111 features to inpainting pipeline ( #6072 )
...
* add inpaint_full_res
* fix
* update
* move get_crop_region to image processor
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* move apply_overlay to image processor
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-12-26 10:20:29 -10:00
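The auto1111-style "inpaint full res" behavior is exposed through the pipeline call; a minimal sketch (the padding_mask_crop argument is how I understand this landed, so verify against the current docs):

```python
import torch
from diffusers import StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = load_image("https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/input_bench_image.png")
mask_image = load_image("https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_inpaint/input_bench_mask.png")

# padding_mask_crop crops to the masked region (plus padding), inpaints at full
# resolution, then apply_overlay pastes the result back onto the original image.
image = pipe(
    prompt="a red bench",
    image=init_image,
    mask_image=mask_image,
    padding_mask_crop=32,
).images[0]
```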
priprapre
fa31704420
[SDXL-IP2P] Update README_sdxl, Replace the link for wandb log with the correct run ( #6270 )
...
Replace the link for wandb log with the correct run
2023-12-26 21:13:11 +01:00
Sayak Paul
9d79991da0
[Docs] fix: video rendering on svd. ( #6330 )
...
fix: video rendering on svd.
2023-12-26 21:05:22 +01:00
Will Berman
7d865ac9c6
amused other pipelines docs ( #6343 )
...
other pipelines
2023-12-26 20:20:32 +01:00
Dhruv Nair
fb02316db8
Add AnimateDiff conversion scripts ( #6340 )
...
* add scripts
* update
2023-12-26 22:40:00 +05:30
Dhruv Nair
98a2b3d2d8
Update Animatediff docs ( #6341 )
...
* update
* update
* update
2023-12-26 22:39:46 +05:30
Dhruv Nair
2026ec0a02
Interruptible Pipelines ( #5867 )
...
* add interruptible pipelines
* add tests
* updatemsmq
* add interrupt property
* make fix copies
* Revert "make fix copies"
This reverts commit 914b35332b .
* add docs
* add tutorial
* Update docs/source/en/tutorials/interrupting_diffusion_process.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tutorials/interrupting_diffusion_process.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* update
* fix quality issues
* fix
* update
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-26 22:39:26 +05:30
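The interruption hook works through the step-end callback; a minimal sketch of stopping a run early, following the pattern in the tutorial this PR adds:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def interrupt_callback(pipe, step, timestep, callback_kwargs):
    # Flip the pipeline's interrupt flag after 10 steps; remaining steps are skipped.
    if step == 10:
        pipe._interrupt = True
    return callback_kwargs

image = pipe(
    "a photo of an astronaut",
    num_inference_steps=50,
    callback_on_step_end=interrupt_callback,
).images[0]
```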
dg845
3706aa3305
Add rescale_betas_zero_snr Argument to DDPMScheduler ( #6305 )
...
* Add rescale_betas_zero_snr argument to DDPMScheduler.
* Propagate rescale_betas_zero_snr changes to DDPMParallelScheduler.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2023-12-26 17:54:30 +01:00
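Usage is a single scheduler flag; a quick sketch (the model ID is illustrative):

```python
from diffusers import DDPMScheduler

# Enforce zero terminal SNR on the beta schedule (per "Common Diffusion Noise
# Schedules and Sample Steps are Flawed").
scheduler = DDPMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="scheduler",
    rescale_betas_zero_snr=True,
)
```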
Sayak Paul
d4f10ea362
[Diffusion fast] add doc for diffusion fast ( #6311 )
...
* add doc for diffusion fast
* add entry to _toctree
* Apply suggestions from code review
* fix title
* fix: title entry
* add note about fuse_qkv_projections
2023-12-26 22:19:55 +05:30
Younes Belkada
3aba99af8f
[Peft / Lora] Add adapter_names in fuse_lora ( #5823 )
...
* add adapter_name in fuse
* add test
* up
* fix CI
* adapt from suggestion
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* change to `require_peft_version_greater`
* change variable names in test
* Update src/diffusers/loaders/lora.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* break into 2 lines
* final comments
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-12-26 16:54:47 +01:00
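A short sketch of the new argument; adapter names are whatever you passed to load_lora_weights, and the LoRA repos here are illustrative:

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")

# Fuse only the selected adapters into the base weights.
pipe.fuse_lora(adapter_names=["pixel"], lora_scale=0.8)
```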
Sayak Paul
6683f97959
[Training] Add datasets version of LCM LoRA SDXL ( #5778 )
...
* add: script to train lcm lora for sdxl with 🤗 datasets
* suit up the args.
* remove comments.
* fix num_update_steps
* fix batch unmarshalling
* fix num_update_steps_per_epoch
* fix: dataloading.
* fix microconditions.
* unconditional predictions debug
* fix batch size.
* no need to use use_auth_token
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* make vae encoding batch size an arg
* final serialization in kohya
* style
* state dict rejigging
* feat: no separate teacher unet.
* debug
* fix state dict serialization
* debug
* debug
* debug
* remove prints.
* remove kohya utility and make style
* fix serialization
* fix
* add test
* add peft dependency.
* add: peft
* remove peft
* autocast device determination from accelerator
* autocast
* reduce lora rank.
* remove unneeded space
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* style
* remove prompt dropout.
* also save in native diffusers ckpt format.
* debug
* debug
* debug
* better formulation of the null embeddings.
* remove space.
* autocast fixes.
* autocast fix.
* hacky
* remove lora_sayak
* Apply suggestions from code review
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* style
* make log validation leaner.
* move back enabled in.
* fix: log_validation call.
* add: checkpointing tests
* taking my chances to see if disabling autocasting has any effect?
* start debugging
* name
* name
* name
* more debug
* more debug
* index
* remove index.
* print length
* print length
* print length
* move unet.train() after add_adapter()
* disable some prints.
* enable_adapters() manually.
* remove prints.
* some changes.
* fix params_to_optimize
* more fixes
* debug
* debug
* remove print
* disable grad for certain contexts.
* Add support for IPAdapterFull (#5911 )
* Add support for IPAdapterFull
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix a bug in `add_noise` function (#6085 )
* fix
* copies
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
* [Advanced Diffusion Script] Add Widget default text (#6100 )
add widget
* [Advanced Training Script] Fix pipe example (#6106 )
* IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901 )
* adapter for StableDiffusionControlNetImg2ImgPipeline
* fix-copies
* fix-copies
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* IP adapter support for most pipelines (#5900 )
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
* update tests
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
* revert changes to sd_attend_and_excite and sd_upscale
* make style
* fix broken tests
* update ip-adapter implementation to latest
* apply suggestions from review
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix: lora_alpha
* make vae casting conditional.
* param upcasting
* propagate comments from https://github.com/huggingface/diffusers/pull/6145
Co-authored-by: dg845 <dgu8957@gmail.com>
* [Peft] fix saving / loading when unet is not "unet" (#6046 )
* [Peft] fix saving / loading when unet is not "unet"
* Update src/diffusers/loaders/lora.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* undo stablediffusion-xl changes
* use unet_name to get unet for lora helpers
* use unet_name
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [Wuerstchen] fix fp16 training and correct lora args (#6245 )
fix fp16 training
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* [docs] fix: animatediff docs (#6339 )
fix: animatediff docs
* add: note about the new script in readme_sdxl.
* Revert "[Peft] fix saving / loading when unet is not "unet" (#6046 )"
This reverts commit 4c7e983bb5 .
* Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245 )"
This reverts commit 0bb9cf0216 .
* Revert "[docs] fix: animatediff docs (#6339 )"
This reverts commit 11659a6f74 .
* remove tokenize_prompt().
* assistive comments around enable_adapters() and disable_adapters().
---------
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com>
Co-authored-by: Aryan V S <contact.aryanvs@gmail.com>
Co-authored-by: dg845 <dgu8957@gmail.com>
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
2023-12-26 21:22:05 +05:30
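The artifact such a training run produces is a standard LCM-LoRA; loading one for few-step inference looks like this (the repo ID points at the published LCM-LoRA for SDXL, used here as an illustration of the output format):

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# Load an LCM-LoRA like the one this script trains, then sample in 4 steps.
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")
image = pipe("a close-up of a fox", num_inference_steps=4, guidance_scale=1.0).images[0]
```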
Sayak Paul
4e7b0cb396
[docs] fix: animatediff docs ( #6339 )
...
fix: animatediff docs
2023-12-26 19:13:49 +05:30