dg845
648d968cfc
Enable Gradient Checkpointing for UNet2DModel (New) ( #7201 )
...
* Port UNet2DModel gradient checkpointing code from #6718 .
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Vincent Neemie <92559302+VincentNeemie@users.noreply.github.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
Co-authored-by: hlky <hlky@hlky.ac >
2024-12-19 14:45:45 -10:00
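A minimal sketch of using the gradient checkpointing enabled for UNet2DModel in #7201 above; `enable_gradient_checkpointing()` is the standard diffusers ModelMixin API, and the default config and input shapes here are illustrative.

```python
import torch
from diffusers import UNet2DModel

model = UNet2DModel()                    # default config; any UNet2DModel works the same way
model.enable_gradient_checkpointing()    # trade extra compute for lower activation memory
model.train()

sample = torch.randn(1, 3, 64, 64, requires_grad=True)
out = model(sample, timestep=10).sample
out.mean().backward()                    # checkpointed blocks recompute activations here
```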
djm
b756ec6e80
Allow unet's sample_size attribute to accept a tuple (h, w) in StableDiffusionPipeline ( #10181 )
2024-12-19 22:24:18 +00:00
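Rough sketch of the idea behind #10181 above: the default resolution can now be derived from a non-square `sample_size`. Illustrative helper code, not the pipeline's exact implementation.

```python
def default_resolution(sample_size, vae_scale_factor=8):
    # sample_size may now be an int (square) or a (height, width) tuple
    if isinstance(sample_size, (list, tuple)):
        height, width = (s * vae_scale_factor for s in sample_size)
    else:
        height = width = sample_size * vae_scale_factor
    return height, width

print(default_resolution(64))        # (512, 512)
print(default_resolution((60, 80)))  # (480, 640)
```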
Aryan
d8825e7697
Fix failing lora tests after HunyuanVideo lora ( #10307 )
...
fix
2024-12-20 02:35:41 +05:30
hlky
074798b299
Fix local_files_only for checkpoints with shards ( #10294 )
2024-12-19 07:04:57 -10:00
Dhruv Nair
3ee966950b
Allow Mochi Transformer to be split across multiple GPUs ( #10300 )
...
update
2024-12-19 22:34:44 +05:30
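A hedged sketch of what sharding the Mochi transformer ( #10300 above) might look like; the `device_map` value and dtype are assumptions, not verified against the final API.

```python
import torch
from diffusers import MochiTransformer3DModel

# Assumption: an accelerate-style device_map spreads layers across available GPUs.
transformer = MochiTransformer3DModel.from_pretrained(
    "genmo/mochi-1-preview",
    subfolder="transformer",
    device_map="auto",              # exact supported values may differ
    torch_dtype=torch.bfloat16,
)
print(transformer.hf_device_map)    # which layers landed on which device
```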
Dhruv Nair
9764f229d4
[Single File] Add single file support for Mochi Transformer ( #10268 )
...
update
2024-12-19 22:20:40 +05:30
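For the Mochi single-file support above ( #10268 ), a short sketch; the checkpoint path is a placeholder.

```python
import torch
from diffusers import MochiTransformer3DModel

# Placeholder path; substitute a real original-format Mochi checkpoint.
ckpt_path = "path/to/mochi_original_checkpoint.safetensors"
transformer = MochiTransformer3DModel.from_single_file(ckpt_path, torch_dtype=torch.bfloat16)
```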
Shenghai Yuan
1826a1e7d3
[LoRA] Support HunyuanVideo ( #10254 )
...
* 1217
* 1217
* 1217
* update
* reverse
* add test
* update test
* make style
* update
* make style
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2024-12-19 16:22:20 +05:30
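A hedged sketch of loading a HunyuanVideo LoRA with the standard loader API ( #10254 above); the model id and the LoRA repo/file names are placeholders.

```python
import torch
from diffusers import HunyuanVideoPipeline

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", torch_dtype=torch.bfloat16  # model id is an assumption
)
pipe.load_lora_weights(
    "some-user/hunyuanvideo-lora",                    # placeholder LoRA repo
    weight_name="pytorch_lora_weights.safetensors",   # placeholder weight file
    adapter_name="my_lora",
)
pipe.set_adapters(["my_lora"], adapter_weights=[0.9])  # optional: scale the adapter
```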
hlky
0ed09a17bb
Check correct model type is passed to from_pretrained ( #10189 )
...
* Check correct model type is passed to `from_pretrained`
* Flax, skip scheduler
* test_wrong_model
* Fix for scheduler
* Update tests/pipelines/test_pipelines.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* EnumMeta
* Flax
* scheduler in expected types
* make
* type object 'CLIPTokenizer' has no attribute '_PipelineFastTests__name'
* support union
* fix typing in kandinsky
* make
* add LCMScheduler
* 'LCMScheduler' object has no attribute 'sigmas'
* tests for wrong scheduler
* make
* update
* warning
* tests
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* import FlaxSchedulerMixin
* skip scheduler
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2024-12-19 09:24:52 +00:00
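The practical effect of #10189 above: passing a component of the wrong class to `from_pretrained` should now fail loudly instead of producing a silently broken pipeline. Illustrative only; the exact error type and message may differ.

```python
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)

# Passing the UNet where a text encoder is expected should now be rejected
# (or at least warned about) rather than silently accepted.
pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    text_encoder=unet,  # wrong type on purpose
)
```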
赵三石
2f7a417d1f
Update lora_conversion_utils.py ( #9980 )
...
Support loading x-flux single-blocks LoRAs
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-12-18 23:07:50 -10:00
hlky
4450d26b63
Add Flux Control to AutoPipeline ( #10292 )
2024-12-18 22:28:56 -10:00
Aryan
f781b8c30c
Hunyuan VAE tiling fixes and transformer docs ( #10295 )
...
* update
* update
* fix test
2024-12-19 10:28:10 +05:30
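For the HunyuanVideo VAE tiling fixes above ( #10295 ), a sketch of enabling tiled decoding; the model id is an assumption.

```python
import torch
from diffusers import AutoencoderKLHunyuanVideo

vae = AutoencoderKLHunyuanVideo.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo", subfolder="vae", torch_dtype=torch.float16  # id is an assumption
)
vae.enable_tiling()   # decode the latent video in tiles to bound peak memory
```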
Sayak Paul
9c0e20de61
[chore] Update README_sana.md to update the default model ( #10285 )
...
Update README_sana.md to update the default model
2024-12-19 10:24:57 +05:30
Aryan
f35a38725b
[tests] remove nullop import checks from lora tests ( #10273 )
...
remove nullop imports
2024-12-19 01:19:08 +05:30
Aryan
f66bd3261c
Rename Mochi integration test correctly ( #10220 )
...
rename integration test
2024-12-18 22:41:23 +05:30
Aryan
c4c99c3907
[tests] Fix broken cuda, nightly and lora tests on main for CogVideoX ( #10270 )
...
fix joint pos embedding device
2024-12-18 22:36:08 +05:30
Dhruv Nair
862a7d5038
[Single File] Add single file support for Flux Canny, Depth and Fill ( #10288 )
...
update
2024-12-18 19:19:47 +05:30
Dhruv Nair
8304adce2a
Make zeroing prompt embeds for Mochi Pipeline configurable ( #10284 )
...
update
2024-12-18 18:32:53 +05:30
Dhruv Nair
b389f339ec
Fix Doc links in GGUF and Quantization overview docs ( #10279 )
...
* update
* Update docs/source/en/quantization/gguf.md
Co-authored-by: Aryan <aryan@huggingface.co >
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2024-12-18 18:32:36 +05:30
hlky
e222246b4e
Fix sigma_last with use_flow_sigmas ( #10267 )
2024-12-18 12:22:10 +00:00
Andrés Romero
83709d5a06
Flux Control(Depth/Canny) + Inpaint ( #10192 )
...
* flux_control_inpaint - failing test_flux_different_prompts
* removing test_flux_different_prompts?
* fix style
* fix from PR comments
* fix style
* reducing guidance_scale in demo
* Update src/diffusers/pipelines/flux/pipeline_flux_control_inpaint.py
Co-authored-by: hlky <hlky@hlky.ac >
* make
* prepare_latents is not copied from
* update docs
* typos
---------
Co-authored-by: affromero <ubuntu@ip-172-31-17-146.ec2.internal >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: hlky <hlky@hlky.ac >
2024-12-18 09:14:16 +00:00
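A sketch of the new pipeline from #10192 above. The class name follows the file added in the PR (pipeline_flux_control_inpaint.py); the model id, image URLs, and call arguments are assumptions.

```python
import torch
from diffusers import FluxControlInpaintPipeline
from diffusers.utils import load_image

pipe = FluxControlInpaintPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Depth-dev", torch_dtype=torch.bfloat16  # model id is an assumption
).to("cuda")

image = load_image("https://example.com/input.png")      # placeholder URLs
mask_image = load_image("https://example.com/mask.png")
control_image = load_image("https://example.com/depth.png")

result = pipe(
    prompt="a shiny robot standing in a garden",
    image=image,
    mask_image=mask_image,
    control_image=control_image,
    num_inference_steps=30,
).images[0]
```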
Qin Zhou
8eb73c872a
Support passing kwargs to SD3 custom attention processor ( #9818 )
...
* Support passing kwargs to SD3 custom attention processor
---------
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-12-17 21:58:33 -10:00
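What #9818 above enables, sketched: extra entries in `joint_attention_kwargs` can now reach a custom SD3 attention processor installed via `set_attn_processor`. The `my_scale` kwarg is an illustrative name.

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")

# With a custom attention processor set via pipe.transformer.set_attn_processor(...),
# extra entries in joint_attention_kwargs are forwarded to its __call__.
image = pipe(
    "a photo of a red panda",
    joint_attention_kwargs={"my_scale": 0.5},   # "my_scale" is an illustrative kwarg name
).images[0]
```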
Xinyuan Zhao
88b015dc9f
Make time_embed_dim of UNet2DModel changeable ( #10262 )
2024-12-17 21:55:18 -10:00
Sayak Paul
63cdf9c0ba
[chore] fix: reamde -> readme ( #10276 )
...
fix: reamde -> readme
2024-12-18 10:56:08 +05:30
hlky
0ac52d6f09
Use torch in get_2d_rotary_pos_embed ( #10155 )
...
* Use `torch` in `get_2d_rotary_pos_embed`
* Add deprecation
2024-12-17 18:26:52 -10:00
Sayak Paul
ba6fd6eb30
[chore] fix: licensing headers in mochi and ltx ( #10275 )
...
fix: licensing header.
2024-12-18 08:43:57 +05:30
Sayak Paul
9408aa2dfc
[LoRA] feat: lora support for SANA. ( #10234 )
...
* feat: lora support for SANA.
* make fix-copies
* rename test class.
* attention_kwargs -> cross_attention_kwargs.
* Revert "attention_kwargs -> cross_attention_kwargs."
This reverts commit 23433bf9bc .
* exhaust 119 max line limit
* sana lora fine-tuning script.
* readme
* add a note about the supported models.
* Apply suggestions from code review
Co-authored-by: Aryan <aryan@huggingface.co >
* style
* docs for attention_kwargs.
* remove lora_scale from pag pipeline.
* copy fix
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2024-12-18 08:22:31 +05:30
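A hedged sketch of the SANA LoRA support above ( #10234 ); the model and LoRA repo ids are assumptions, and the LoRA scale is passed through `attention_kwargs` as described in the PR.

```python
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers", torch_dtype=torch.bfloat16  # model id is an assumption
).to("cuda")
pipe.load_lora_weights("some-user/sana-lora")   # placeholder LoRA repo

image = pipe(
    "a cyberpunk cat",
    attention_kwargs={"scale": 0.8},   # LoRA scale travels via attention_kwargs
).images[0]
```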
hlky
ec1c7a793f
Add set_shift to FlowMatchEulerDiscreteScheduler ( #10269 )
2024-12-17 21:40:09 +00:00
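For #10269 above, a minimal sketch of the new setter.

```python
from diffusers import FlowMatchEulerDiscreteScheduler

scheduler = FlowMatchEulerDiscreteScheduler(shift=1.0)
scheduler.set_shift(3.0)   # update the flow shift without rebuilding the scheduler
scheduler.set_timesteps(num_inference_steps=28)
```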
cjkangme
9c68c945e9
[Community Pipeline] Fix typo that causes an error in the regional prompting pipeline ( #10251 )
...
fix: fix typo that causes an error
2024-12-17 21:09:50 +00:00
Steven Liu
2739241ad1
[docs] delete_adapters() ( #10245 )
...
delete_adapters
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-17 09:26:45 -08:00
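A short sketch of `delete_adapters()` ( #10245 above); the base model and LoRA repo are illustrative.

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_lora_weights("some-user/style-lora", adapter_name="style")  # placeholder LoRA repo
pipe.delete_adapters("style")   # accepts a single adapter name or a list of names
```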
Aryan
1524781b88
[tests] Remove/rename unsupported quantization torchao type ( #10263 )
...
update
2024-12-17 21:43:15 +05:30
Dhruv Nair
128b96f369
Fix Mochi Quality Issues ( #10033 )
...
* update (×46)
* Update src/diffusers/models/transformers/transformer_mochi.py
Co-authored-by: Aryan <aryan@huggingface.co >
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Aryan <aryan@huggingface.co >
2024-12-17 19:40:00 +05:30
Dhruv Nair
e24941b2a7
[Single File] Add GGUF support ( #9964 )
...
* update (×27)
* Update src/diffusers/quantizers/gguf/utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* update (×10)
* Update docs/source/en/quantization/gguf.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update
* update
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2024-12-17 16:09:37 +05:30
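A sketch of the GGUF loading path added in #9964 above, following the pattern used in the GGUF docs; the quantized checkpoint URL may need to be adjusted.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

ckpt_path = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"

transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", transformer=transformer, torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()
image = pipe("A cat holding a sign that says hello world").images[0]
```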
Aryan
f9d5a9324d
[docs] Clarify dtypes for Sana ( #10248 )
...
update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-17 13:43:24 +05:30
Aryan
ac86393487
[LoRA] Support LTX Video ( #10228 )
...
* add lora support for ltx
* add tests
* fix copied from comments
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-17 12:05:05 +05:30
Aryan
0d96a894a7
Fix copied from comment in Mochi lora loader ( #10255 )
...
update
2024-12-17 11:09:57 +05:30
Sayak Paul
6fb94d51cb
[chore] add contribution note for lawrence. ( #10253 )
...
add contribution note for lawrence.
2024-12-17 09:17:40 +05:30
Steven Liu
7667cfcb41
[docs] Add missing AttnProcessors ( #10246 )
...
* attnprocessors
* lora
* make style
* fix
* fix
* sana
* typo
2024-12-16 15:36:26 -08:00
Aryan
9f00c617a0
[core] TorchAO Quantizer ( #10009 )
...
* torchao quantizer
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2024-12-16 13:35:40 -10:00
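A sketch of the TorchAO quantizer from #10009 above, applied to the Flux transformer; "int8wo" (int8 weight-only) is one of the supported quantization type strings.

```python
import torch
from diffusers import FluxTransformer2DModel, TorchAoConfig

# int8 weight-only quantization of the Flux transformer via torchao.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=TorchAoConfig("int8wo"),
    torch_dtype=torch.bfloat16,
)
# The quantized transformer can then be passed to FluxPipeline.from_pretrained(...).
```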
Kaiwen Sheng
aafed3f8dd
fix downsample bug in MidResTemporalBlock1D ( #10250 )
2024-12-17 04:55:16 +05:30
hlky
5ed761a6f2
Add ControlNetUnion to AutoPipeline from_pretrained ( #10219 )
2024-12-16 10:25:08 -10:00
hlky
2f023d7b84
Fix RePaint Scheduler ( #10185 )
...
Fix repaint scheduler
2024-12-16 09:38:13 -10:00
hlky
e9a3911b67
Fix checkpoint in CogView3PlusPipeline example ( #10211 )
2024-12-16 09:31:22 -10:00
hlky
7186bb45f0
Add enable_vae_tiling to AllegroPipeline, fix example ( #10212 )
2024-12-16 09:31:02 -10:00
hlky
438bd60549
Use non-human subject in StableDiffusion3ControlNetPipeline example ( #10214 )
...
* Use non-human subject in StableDiffusion3ControlNetPipeline example
* make style
2024-12-16 09:30:26 -10:00
hlky
87e8157437
Fix ControlNetUnion _callback_tensor_inputs ( #10218 )
2024-12-16 09:29:12 -10:00
hlky
3f421fe09f
Fix use_flow_sigmas ( #10242 )
...
use_flow_sigmas copy
2024-12-16 09:27:22 -10:00
hlky
a7d50524dd
Add dynamic_shifting to SD3 ( #10236 )
...
* Add `dynamic_shifting` to SD3
* calculate_shift
* FlowMatchHeunDiscreteScheduler doesn't support mu
* Inpaint/img2img
2024-12-16 09:25:21 -10:00
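A hedged sketch of dynamic shifting with SD3 ( #10236 above): re-create the scheduler with `use_dynamic_shifting=True` so the pipeline computes a resolution-dependent shift via `calculate_shift`. The model id is an assumption.

```python
import torch
from diffusers import StableDiffusion3Pipeline, FlowMatchEulerDiscreteScheduler

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16  # model id is an assumption
).to("cuda")

# Enable dynamic shifting on the flow-match scheduler; the pipeline then derives mu
# from the target resolution, similar to Flux.
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    pipe.scheduler.config, use_dynamic_shifting=True
)
image = pipe("a mountain lake at dawn", height=1024, width=1024).images[0]
```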
hlky
672bd49573
Use t instead of timestep in _apply_perturbed_attention_guidance ( #10243 )
2024-12-16 09:24:16 -10:00
Sayak Paul
ea893a9ae7
[Docs] add rest of the lora loader mixins to the docs. ( #10230 )
...
add rest of the lora loader mixins to the docs.
2024-12-16 08:50:27 -08:00
fancy45daddy
5fb3a98517
Update pipeline_controlnet.py add support for pytorch_xla ( #10222 )
...
* Update pipeline_controlnet.py
* make style
---------
Co-authored-by: hlky <hlky@hlky.ac >
2024-12-16 09:05:50 +00:00