Sayak Paul
203724e9d9
[Docs] add note on fp16 in fast diffusion ( #6380 )
...
add note on fp16
2023-12-29 09:38:50 +05:30
gzguevara
e7044a4221
multi-subject-dreambooth-inpainting with 🤗 datasets ( #6378 )
...
* files added
* fixing code quality
* fixing code quality
* fixing code quality
* fixing code quality
* sorted import block
* separated import wandb
* ruff on script
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-29 09:33:49 +05:30
Sayak Paul
034b39b8cb
[docs] add details concerning diffusers-specific bits. ( #6375 )
...
add details concerning diffusers-specific bits.
2023-12-28 23:12:49 +05:30
Sayak Paul
2db73f4a50
remove delete documentation trigger workflows. ( #6373 )
2023-12-28 18:26:14 +05:30
Adrian Punga
84d7faebe4
Fix support for MPS in KDPM2AncestralDiscreteScheduler ( #6365 )
...
Fix support for MPS
MPS doesn't support float64
2023-12-28 10:22:02 +01:00
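The MPS limitation behind this fix is easy to sketch: Apple's MPS backend cannot create float64 tensors, so schedulers must fall back to float32 there. A minimal NumPy illustration of the dtype selection (the sigma formula below is invented for the example and is not the actual KDPM2 schedule):

```python
import numpy as np

def make_sigmas(num_steps: int, device_type: str) -> np.ndarray:
    # Schedulers often interpolate noise levels in float64 for precision,
    # but MPS does not support float64, so we downgrade to float32 there.
    dtype = np.float32 if device_type == "mps" else np.float64
    steps = np.linspace(0.0, 1.0, num_steps, dtype=dtype)
    # Log-space interpolation between a small and a large noise level
    # (illustrative only, not the real KDPM2AncestralDiscreteScheduler math).
    sigmas = np.exp(np.interp(steps, [0.0, 1.0], [np.log(0.1), np.log(10.0)]))
    return sigmas.astype(dtype)
```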
YiYi Xu
4c483deb90
[refactor embeddings] gligen + ip-adapter ( #6244 )
...
* refactor ip-adapter-imageproj, gligen
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2023-12-27 18:48:42 -10:00
Sayak Paul
1ac07d8a8d
[Training examples] Follow up of #6306 ( #6346 )
...
* add to dreambooth lora.
* add: t2i lora.
* add: sdxl t2i lora.
* style
* lcm lora sdxl.
* unwrap
* fix: enable_adapters().
2023-12-28 07:37:50 +05:30
apolinário
1fff527702
Fix keys for lora format on advanced training scripts ( #6361 )
...
fix keys for lora format on advanced training scripts
2023-12-27 11:38:03 -06:00
apolinário
645a62bf3b
Add PEFT to advanced training script ( #6294 )
...
* Fix ProdigyOPT in SDXL Dreambooth script
* style
* style
* Add PEFT to Advanced Training Script
* style
* style
* ✨ style ✨
* change order for logic operation
* add lora alpha
* style
* Align PEFT to new format
* Update train_dreambooth_lora_sdxl_advanced.py
Apply #6355 fix
---------
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com >
2023-12-27 10:00:32 -03:00
Dhruv Nair
6414d4e4f9
Fix chunking in SVD ( #6350 )
...
fix
2023-12-27 13:07:41 +01:00
Andy W
43672b4a22
Fix "push_to_hub only create repo in consistency model lora SDXL training script" ( #6102 )
...
* fix
* style fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-27 15:25:19 +05:30
dg845
9df3d84382
Fix LCM distillation bug when creating the guidance scale embeddings using multiple GPUs. ( #6279 )
...
Fix bug when creating the guidance embeddings using multiple GPUs.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-27 14:25:21 +05:30
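For context, LCM conditions the UNet on a sinusoidal embedding of the guidance scale, and the bug was building that embedding with the wrong (global rather than per-GPU) batch size. A rough NumPy sketch of such an embedding, modeled loosely on diffusers' `get_guidance_scale_embedding` helper (the 1000x scaling and the dimension are illustrative):

```python
import numpy as np

def guidance_scale_embedding(w, embedding_dim: int = 8) -> np.ndarray:
    # Sinusoidal embedding of the guidance scale, one row per element of w.
    # With multiple GPUs the batch is sharded, so w must use the *local*
    # batch size -- using the global one was the kind of bug fixed here.
    w = np.asarray(w, dtype=np.float64) * 1000.0
    half_dim = embedding_dim // 2
    freqs = np.exp(-np.log(10000.0) * np.arange(half_dim) / (half_dim - 1))
    args = w[:, None] * freqs[None, :]
    return np.concatenate([np.sin(args), np.cos(args)], axis=1)
```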
Jianqi Pan
c751449011
fix: use retrieve_latents ( #6337 )
2023-12-27 10:44:26 +05:30
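`retrieve_latents` centralizes how pipelines pull latents out of a VAE encoder output, which may carry either a latent distribution or plain latents. A toy sketch of the idea (stub classes; the real diffusers helper has a different signature, including a `generator` argument):

```python
class LatentDist:
    """Deterministic stand-in for a VAE's diagonal Gaussian posterior."""
    def __init__(self, mean):
        self.mean = mean

    def sample(self):
        return self.mean  # a real VAE posterior would add noise here

def retrieve_latents(encoder_output, sample_mode: str = "sample"):
    # Sketch of the helper this commit switches to: accept either an
    # output carrying a latent distribution or one carrying raw latents,
    # so callers never poke at attributes directly.
    if hasattr(encoder_output, "latent_dist"):
        dist = encoder_output.latent_dist
        return dist.sample() if sample_mode == "sample" else dist.mean
    if hasattr(encoder_output, "latents"):
        return encoder_output.latents
    raise AttributeError("Could not access latents of provided encoder_output")
```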
Dhruv Nair
c1e8bdf1d4
Move ControlNetXS into Community Folder ( #6316 )
...
* update
* update
* update
* update
* update
* make style
* remove docs
* update
* move to research folder.
* fix-copies
* remove _toctree entry.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-27 08:15:23 +05:30
Sayak Paul
78b87dc25a
[LoRA] make LoRAs trained with peft loadable when peft isn't installed ( #6306 )
...
* spit diffusers-native format from the get go.
* rejig the peft_to_diffusers mapping.
2023-12-27 08:01:10 +05:30
Will Berman
0af12f1f8a
amused update links to new repo ( #6344 )
...
* amused update links to new repo
* lint
2023-12-26 22:46:28 +01:00
Justin Ruan
6e123688dc
Remove unused parameters and fixed FutureWarning ( #6317 )
...
* Remove unused parameters and fixed `FutureWarning`
* Fixed wrong config instance
* update unittest for `DDIMInverseScheduler`
2023-12-26 22:09:10 +01:00
YiYi Xu
f0a588b8e2
adding auto1111 features to inpainting pipeline ( #6072 )
...
* add inpaint_full_res
* fix
* update
* move get_crop_region to image processor
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* move apply_overlay to image processor
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-12-26 10:20:29 -10:00
priprapre
fa31704420
[SDXL-IP2P] Update README_sdxl, Replace the link for wandb log with the correct run ( #6270 )
...
Replace the link for wandb log with the correct run
2023-12-26 21:13:11 +01:00
Sayak Paul
9d79991da0
[Docs] fix: video rendering on svd. ( #6330 )
...
fix: video rendering on svd.
2023-12-26 21:05:22 +01:00
Will Berman
7d865ac9c6
amused other pipelines docs ( #6343 )
...
other pipelines
2023-12-26 20:20:32 +01:00
Dhruv Nair
fb02316db8
Add AnimateDiff conversion scripts ( #6340 )
...
* add scripts
* update
2023-12-26 22:40:00 +05:30
Dhruv Nair
98a2b3d2d8
Update Animatediff docs ( #6341 )
...
* update
* update
* update
2023-12-26 22:39:46 +05:30
Dhruv Nair
2026ec0a02
Interruptable Pipelines ( #5867 )
...
* add interruptable pipelines
* add tests
* updatemsmq
* add interrupt property
* make fix copies
* Revert "make fix copies"
This reverts commit 914b35332b .
* add docs
* add tutorial
* Update docs/source/en/tutorials/interrupting_diffusion_process.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/tutorials/interrupting_diffusion_process.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update
* fix quality issues
* fix
* update
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-26 22:39:26 +05:30
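The mechanism is a flag the pipeline checks once per denoising step: a step callback can set it, and the loop exits early with whatever state it has. A toy, framework-free sketch (the names mirror the feature, but this is not the real diffusers API):

```python
class TinyPipeline:
    """Toy sketch of an interruptible denoising loop."""

    def __init__(self):
        self._interrupt = False

    @property
    def interrupt(self):
        return self._interrupt

    def __call__(self, num_steps, callback=None):
        completed = 0
        for step in range(num_steps):
            if self.interrupt:
                break  # stop denoising; intermediate state becomes the output
            completed += 1  # stand-in for one actual denoising step
            if callback is not None:
                callback(self, step)
        return completed

pipe = TinyPipeline()

def stop_after_three(pipe, step):
    if step >= 2:  # steps are 0-indexed, so this interrupts after 3 steps
        pipe._interrupt = True

result = pipe(10, callback=stop_after_three)
```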
dg845
3706aa3305
Add rescale_betas_zero_snr Argument to DDPMScheduler ( #6305 )
...
* Add rescale_betas_zero_snr argument to DDPMScheduler.
* Propagate rescale_betas_zero_snr changes to DDPMParallelScheduler.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-26 17:54:30 +01:00
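`rescale_betas_zero_snr` applies the zero-terminal-SNR rescaling from "Common Diffusion Noise Schedules and Sample Steps are Flawed" (Lin et al.): shift the square-root cumulative alphas so the last timestep has zero SNR, rescale so the first timestep is unchanged, then recover betas. A NumPy transcription of that recipe (the scheduler itself operates on torch tensors):

```python
import numpy as np

def rescale_zero_terminal_snr(betas: np.ndarray) -> np.ndarray:
    alphas = 1.0 - betas
    alphas_cumprod = np.cumprod(alphas)
    abar_sqrt = np.sqrt(alphas_cumprod)

    abar_sqrt_0 = abar_sqrt[0]
    abar_sqrt_T = abar_sqrt[-1]

    # Shift so the final timestep has zero SNR, then rescale so the
    # first timestep keeps its original value.
    abar_sqrt = abar_sqrt - abar_sqrt_T
    abar_sqrt = abar_sqrt * abar_sqrt_0 / (abar_sqrt_0 - abar_sqrt_T)

    # Convert the rescaled cumulative product back into per-step betas.
    abar = abar_sqrt ** 2
    alphas = abar[1:] / abar[:-1]
    alphas = np.concatenate([abar[:1], alphas])
    return 1.0 - alphas

betas = np.linspace(1e-4, 0.02, 1000)
new_betas = rescale_zero_terminal_snr(betas)
```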
Sayak Paul
d4f10ea362
[Diffusion fast] add doc for diffusion fast ( #6311 )
...
* add doc for diffusion fast
* add entry to _toctree
* Apply suggestions from code review
* fix title
* fix: title entry
* add note about fuse_qkv_projections
2023-12-26 22:19:55 +05:30
Younes Belkada
3aba99af8f
[Peft / Lora] Add adapter_names in fuse_lora ( #5823 )
...
* add adapter_name in fuse
* add test
* up
* fix CI
* adapt from suggestion
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* change to `require_peft_version_greater`
* change variable names in test
* Update src/diffusers/loaders/lora.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* break into 2 lines
* final comments
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2023-12-26 16:54:47 +01:00
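Fusing a LoRA means folding the low-rank update `scale * B @ A` into the base weight; `adapter_names` restricts which adapters get folded, leaving the rest switchable. A hypothetical NumPy sketch of that selection logic (the real `fuse_lora` operates on peft-wrapped layers, not raw matrices):

```python
import numpy as np

def fuse_lora_weights(base, adapters, adapter_names=None, lora_scale=1.0):
    """Fold only the named adapters' updates into a copy of the base weight.

    adapters maps name -> (down, up), i.e. the LoRA A and B matrices.
    """
    fused = base.copy()
    for name, (down, up) in adapters.items():
        if adapter_names is not None and name not in adapter_names:
            continue  # this adapter stays un-fused
        fused += lora_scale * (up @ down)
    return fused

rng = np.random.default_rng(0)
base = rng.standard_normal((4, 4))
adapters = {
    "style": (rng.standard_normal((2, 4)), rng.standard_normal((4, 2))),
    "subject": (rng.standard_normal((2, 4)), rng.standard_normal((4, 2))),
}
only_style = fuse_lora_weights(base, adapters, adapter_names=["style"])
```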
Sayak Paul
6683f97959
[Training] Add datasets version of LCM LoRA SDXL ( #5778 )
...
* add: script to train lcm lora for sdxl with 🤗 datasets
* suit up the args.
* remove comments.
* fix num_update_steps
* fix batch unmarshalling
* fix num_update_steps_per_epoch
* fix: dataloading.
* fix microconditions.
* unconditional predictions debug
* fix batch size.
* no need to use use_auth_token
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com >
* make vae encoding batch size an arg
* final serialization in kohya
* style
* state dict rejigging
* feat: no separate teacher unet.
* debug
* fix state dict serialization
* debug
* debug
* debug
* remove prints.
* remove kohya utility and make style
* fix serialization
* fix
* add test
* add peft dependency.
* add: peft
* remove peft
* autocast device determination from accelerator
* autocast
* reduce lora rank.
* remove unneeded space
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com >
* style
* remove prompt dropout.
* also save in native diffusers ckpt format.
* debug
* debug
* debug
* better formation of the null embeddings.
* remove space.
* autocast fixes.
* autocast fix.
* hacky
* remove lora_sayak
* Apply suggestions from code review
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com >
* style
* make log validation leaner.
* move back enabled in.
* fix: log_validation call.
* add: checkpointing tests
* taking my chances to see if disabling autocasting has any effect?
* start debugging
* name
* name
* name
* more debug
* more debug
* index
* remove index.
* print length
* print length
* print length
* move unet.train() after add_adapter()
* disable some prints.
* enable_adapters() manually.
* remove prints.
* some changes.
* fix params_to_optimize
* more fixes
* debug
* debug
* remove print
* disable grad for certain contexts.
* Add support for IPAdapterFull (#5911 )
* Add support for IPAdapterFull
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Fix a bug in `add_noise` function (#6085 )
* fix
* copies
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
* [Advanced Diffusion Script] Add Widget default text (#6100 )
add widget
* [Advanced Training Script] Fix pipe example (#6106 )
* IP-Adapter for StableDiffusionControlNetImg2ImgPipeline (#5901 )
* adapter for StableDiffusionControlNetImg2ImgPipeline
* fix-copies
* fix-copies
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* IP adapter support for most pipelines (#5900 )
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py
* update tests
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_text2img.py
* support ip-adapter in src/diffusers/pipelines/latent_consistency_models/pipeline_latent_consistency_img2img.py
* support ip-adapter in src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_ldm3d.py
* revert changes to sd_attend_and_excite and sd_upscale
* make style
* fix broken tests
* update ip-adapter implementation to latest
* apply suggestions from review
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* fix: lora_alpha
* make vae casting conditional.
* param upcasting
* propagate comments from https://github.com/huggingface/diffusers/pull/6145
Co-authored-by: dg845 <dgu8957@gmail.com >
* [Peft] fix saving / loading when unet is not "unet" (#6046 )
* [Peft] fix saving / loading when unet is not "unet"
* Update src/diffusers/loaders/lora.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* undo stablediffusion-xl changes
* use unet_name to get unet for lora helpers
* use unet_name
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* [Wuerstchen] fix fp16 training and correct lora args (#6245 )
fix fp16 training
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* [docs] fix: animatediff docs (#6339 )
fix: animatediff docs
* add: note about the new script in readme_sdxl.
* Revert "[Peft] fix saving / loading when unet is not "unet" (#6046 )"
This reverts commit 4c7e983bb5 .
* Revert "[Wuerstchen] fix fp16 training and correct lora args (#6245 )"
This reverts commit 0bb9cf0216 .
* Revert "[docs] fix: animatediff docs (#6339 )"
This reverts commit 11659a6f74 .
* remove tokenize_prompt().
* assistive comments around enable_adapters() and disable_adapters().
---------
Co-authored-by: Suraj Patil <surajp815@gmail.com >
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com >
Co-authored-by: Fabio Rigano <57982783+fabiorigano@users.noreply.github.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com >
Co-authored-by: Charchit Sharma <charchitsharma11@gmail.com >
Co-authored-by: Aryan V S <contact.aryanvs@gmail.com >
Co-authored-by: dg845 <dgu8957@gmail.com >
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com >
2023-12-26 21:22:05 +05:30
Sayak Paul
4e7b0cb396
[docs] fix: animatediff docs ( #6339 )
...
fix: animatediff docs
2023-12-26 19:13:49 +05:30
Kashif Rasul
35b81fffae
[Wuerstchen] fix fp16 training and correct lora args ( #6245 )
...
fix fp16 training
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-26 11:40:04 +01:00
Kashif Rasul
e0d8c910e9
[Peft] fix saving / loading when unet is not "unet" ( #6046 )
...
* [Peft] fix saving / loading when unet is not "unet"
* Update src/diffusers/loaders/lora.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* undo stablediffusion-xl changes
* use unet_name to get unet for lora helpers
* use unet_name
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-26 11:39:28 +01:00
dg845
a3d31e3a3e
Change LCM-LoRA README Script Example Learning Rates to 1e-4 ( #6304 )
...
Change README LCM-LoRA example learning rates to 1e-4.
2023-12-25 21:29:20 +05:30
Jianqi Pan
84c403aedb
fix: cannot set guidance_scale ( #6326 )
...
fix: set guidance_scale
2023-12-25 21:16:57 +05:30
Sayak Paul
f4b0b26f7e
[Tests] Speed up example tests ( #6319 )
...
* remove validation args from textual inversion tests
* reduce number of train steps in textual inversion tests
* fix: directories.
* debug
* fix: directories.
* remove validation tests from textual inversion
* try reducing the time of test_text_to_image_checkpointing_use_ema
* fix: directories
* speed up test_text_to_image_checkpointing
* speed up test_text_to_image_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* fix
* speed up test_instruct_pix2pix_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* set checkpoints_total_limit to 2.
* test_text_to_image_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints speed up
* speed up test_unconditional_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* debug
* fix: directories.
* speed up test_instruct_pix2pix_checkpointing_checkpoints_total_limit
* speed up: test_controlnet_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* speed up test_controlnet_sdxl
* speed up dreambooth tests
* speed up test_dreambooth_lora_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* speed up test_custom_diffusion_checkpointing_checkpoints_total_limit_removes_multiple_checkpoints
* speed up test_text_to_image_lora_sdxl_text_encoder_checkpointing_checkpoints_total_limit
* speed up # checkpoint-2 should have been deleted
* speed up examples/text_to_image/test_text_to_image.py::TextToImage::test_text_to_image_checkpointing_checkpoints_total_limit
* additional speed ups
* style
2023-12-25 19:50:48 +05:30
Sayak Paul
89459a5d56
fix: lora peft dummy components ( #6308 )
...
* fix: lora peft dummy components
* fix: dummy components
2023-12-25 11:26:45 +05:30
Sayak Paul
008d9818a2
fix: t2i adapter paper link ( #6314 )
2023-12-25 10:45:14 +05:30
mwkldeveloper
2d43094ffc
fix RuntimeError: Input type (float) and bias type (c10::Half) should be the same in train_text_to_image_lora.py ( #6259 )
...
* fix RuntimeError: Input type (float) and bias type (c10::Half) should be the same
* format source code
* format code
* remove the autocast blocks within the pipeline
* add autocast blocks to pipeline caller in train_text_to_image_lora.py
2023-12-24 14:34:35 +05:30
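The RuntimeError here comes from feeding fp32 activations to layers whose parameters were cast to fp16; the cure is to make the dtypes agree, via autocast or an explicit cast. A minimal NumPy analogue of the mismatch and the fix:

```python
import numpy as np

def linear(x, weight, bias):
    # Stand-in for a layer whose parameters were cast to fp16 for
    # mixed-precision training. Feeding it fp32 inputs is the analogue
    # of "Input type (float) and bias type (c10::Half) should be the same".
    x = x.astype(weight.dtype)  # the fix: match the parameter dtype
    return x @ weight.T + bias

weight = np.ones((3, 2), dtype=np.float16)
bias = np.zeros(3, dtype=np.float16)
x = np.ones((1, 2), dtype=np.float32)  # e.g. activations produced in fp32
out = linear(x, weight, bias)
```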
Celestial Phineas
7c05b975b7
Fix typos in the ValueError for a nested image list as StableDiffusionControlNetPipeline input. ( #6286 )
...
Fixed typos in the `ValueError` for a nested image list as input.
2023-12-24 14:32:24 +05:30
Dhruv Nair
fe574c8b29
LoRA Unfusion test fix ( #6291 )
...
update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-24 14:31:48 +05:30
Sayak Paul
90b9479903
[LoRA PEFT] fix LoRA loading so that correct alphas are parsed ( #6225 )
...
* initialize alpha too.
* add: test
* remove config parsing
* store rank
* debug
* remove faulty test
2023-12-24 09:59:41 +05:30
apolinário
df76a39e1b
Fix Prodigy optimizer in SDXL Dreambooth script ( #6290 )
...
* Fix ProdigyOPT in SDXL Dreambooth script
* style
* style
2023-12-22 06:42:04 -06:00
Bingxin Ke
3369bc810a
[Community Pipeline] Add Marigold Monocular Depth Estimation ( #6249 )
...
* [Community Pipeline] Add Marigold Monocular Depth Estimation
- add single-file pipeline
- update README
* fix format - add one blank line
* format script with ruff
* use direct image link in example code
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-22 15:41:46 +05:30
Pedro Cuenca
7fe47596af
Allow diffusers to load with Flax, w/o PyTorch ( #6272 )
2023-12-22 09:37:30 +01:00
Dhruv Nair
59d1caa238
Remove peft tests from old lora backend tests ( #6273 )
...
update
2023-12-22 13:35:52 +05:30
Dhruv Nair
c022e52923
Remove ONNX inpaint legacy ( #6269 )
...
update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-22 13:35:21 +05:30
Will Berman
4039815276
open muse ( #5437 )
...
amused
rename
Update docs/source/en/api/pipelines/amused.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
AdaLayerNormContinuous default values
custom micro conditioning
micro conditioning docs
put lookup from codebook in constructor
fix conversion script
remove manual fused flash attn kernel
add training script
temp remove training script
add dummy gradient checkpointing func
clarify temperatures is an instance variable by setting it
remove additional SkipFF block args
hardcode norm args
rename tests folder
fix paths and samples
fix tests
add training script
training readme
lora saving and loading
non-lora saving/loading
some readme fixes
guards
Update docs/source/en/api/pipelines/amused.md
Co-authored-by: Suraj Patil <surajp815@gmail.com >
Update examples/amused/README.md
Co-authored-by: Suraj Patil <surajp815@gmail.com >
Update examples/amused/train_amused.py
Co-authored-by: Suraj Patil <surajp815@gmail.com >
vae upcasting
add fp16 integration tests
use tuple for micro cond
copyrights
remove casts
delegate to torch.nn.LayerNorm
move temperature to pipeline call
upsampling/downsampling changes
2023-12-21 11:40:55 -08:00
Sayak Paul
5b186b7128
[Refactor] move ldm3d out of stable_diffusion. ( #6263 )
...
ldm3d.
2023-12-21 18:59:55 +05:30
Sayak Paul
ab0459f2b7
[Deprecated pipelines] remove pix2pix zero from init ( #6268 )
...
remove pix2pix zero from init
2023-12-21 18:17:28 +05:30
Sayak Paul
9c7cc36011
[Refactor] move panorama out of stable_diffusion ( #6262 )
...
* move panorama out.
* fix: diffedit
* fix: import.
* fix: import
2023-12-21 18:17:05 +05:30
Sayak Paul
325f6c53ed
[Refactor] move attend and excite out of stable_diffusion. ( #6261 )
...
* move attend and excite out.
* fix: import
* fix diffedit
2023-12-21 16:49:32 +05:30