mirror of https://github.com/huggingface/diffusers.git synced 2026-01-27 17:22:53 +03:00
Commit Graph

6211 Commits

Author SHA1 Message Date
sayakpaul
463367d31d up 2026-01-13 11:52:04 +05:30
sayakpaul
987412b252 up 2026-01-13 10:19:08 +05:30
sayakpaul
b30be7d90f fix ip adapter type checking. 2026-01-13 09:52:22 +05:30
sayakpaul
4cbe1aad54 up 2026-01-13 09:26:32 +05:30
Sayak Paul
aca3b7845b Merge branch 'main' into remove-explicit-typing 2026-01-13 09:23:41 +05:30
Bissmella Bahaduri
9d68742214 Add Unified Sequence Parallel attention (#12693)
* initial scheme of unified-sp

* initial all_to_all_double

* bug fixes, added cmnts

* unified attention prototype done

* remove raising value error in contextParallelConfig to enable unified attention

* bug fix

* feat: Adds Test for Unified SP Attention and Fixes a bug in Template Ring Attention

* bug fix, lse calculation, testing

bug fixes, lse calculation

switched to the _all_to_all_single helper in _all_to_all_dim_exchange due to contiguity issues

bug fixes

* addressing comments

* sequence parallelism bug fixes

* code format fixes

* Apply style fixes

* code formatting fix

* added unified attention docs and removed test file

* Apply style fixes

* tip for unified attention in docs at distributed_inference.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update distributed_inference.md, adding benchmarks

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update docs/source/en/training/distributed_inference.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* function name fix

* fixed benchmark in docs

---------

Co-authored-by: KarthikSundar2002 <karthiksundar30092002@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-01-13 09:16:51 +05:30
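The lse (log-sum-exp) bookkeeping mentioned repeatedly in this PR is the core trick behind ring/unified sequence-parallel attention: each rank attends over its own KV shard, and the partial outputs are then combined into the exact full-attention result using per-query lse values. A minimal single-process NumPy sketch of that combine step (function names here are illustrative, not the diffusers API):

```python
import numpy as np

def partial_attention(q, k, v):
    """Softmax attention over one KV shard; returns output and per-query lse."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (Lq, Lk)
    m = scores.max(axis=-1, keepdims=True)             # max for numerical stability
    p = np.exp(scores - m)
    s = p.sum(axis=-1, keepdims=True)
    lse = m + np.log(s)                                # log-sum-exp of scores, (Lq, 1)
    return p @ v / s, lse

def merge_partials(out_a, lse_a, out_b, lse_b):
    """Combine two shard results as if attention ran over the concatenated KV."""
    lse = np.logaddexp(lse_a, lse_b)                   # total normalizer in log space
    w_a = np.exp(lse_a - lse)                          # each shard's share of the softmax mass
    w_b = np.exp(lse_b - lse)
    return w_a * out_a + w_b * out_b, lse

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((10, 8))
v = rng.standard_normal((10, 8))

out_a, lse_a = partial_attention(q, k[:5], v[:5])      # "rank 0" shard
out_b, lse_b = partial_attention(q, k[5:], v[5:])      # "rank 1" shard
merged, _ = partial_attention(q, k, v)                 # reference: full attention
combined, _ = merge_partials(out_a, lse_a, out_b, lse_b)
assert np.allclose(combined, merged)
```

In the real implementation the shard outputs and lse values travel between ranks via collectives such as all_to_all; the math of the merge is the same.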
sayakpaul
1426c33aa5 up 2026-01-13 09:13:46 +05:30
Sayak Paul
34388bdaa8 Merge branch 'main' into remove-explicit-typing 2026-01-13 09:00:49 +05:30
sayakpaul
5ee4e19c58 handle modern types. 2026-01-13 09:00:34 +05:30
dg845
f1a93c765f Add Flag to PeftLoraLoaderMixinTests to Enable/Disable Text Encoder LoRA Tests (#12962)
* Improve incorrect LoRA format error message

* Add flag in PeftLoraLoaderMixinTests to disable text encoder LoRA tests

* Apply changes to LTX2LoraTests

* Further improve incorrect LoRA format error msg following review

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-01-12 16:01:58 -08:00
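The flag described in this PR — a mixin attribute that test subclasses flip to opt out of text-encoder LoRA tests — follows a common unittest pattern. A minimal sketch (class and attribute names are illustrative, not the actual diffusers test API):

```python
import unittest

class LoraTestMixin:
    # Subclasses set this to False when the pipeline has no text encoder LoRA.
    test_text_encoder_lora = True

    def test_text_encoder_lora_roundtrip(self):
        if not self.test_text_encoder_lora:
            self.skipTest("text encoder LoRA not supported for this pipeline")
        # ... actual load/save/fuse checks would go here ...
        self.assertTrue(True)

class SomePipelineLoraTests(LoraTestMixin, unittest.TestCase):
    # e.g. a pipeline like LTX-2 that supports transformer LoRA only
    test_text_encoder_lora = False

case = SomePipelineLoraTests("test_text_encoder_lora_roundtrip")
result = case.run()
```

The benefit over deleting or overriding the test in each subclass is that the skip is recorded explicitly in the test report rather than the test silently disappearing.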
Sayak Paul
337ac577af Merge branch 'main' into remove-explicit-typing 2026-01-12 21:36:43 +05:30
sayakpaul
beede725d1 up 2026-01-12 20:15:18 +05:30
Leo Jiang
29a930a142 Bugfix for flux2 img2img2 prediction (#12855)
* Bugfix for dreambooth flux2 img2img2

* Bugfix for dreambooth flux2 img2img2

Co-authored-by: tcaimm <93749364+tcaimm@users.noreply.github.com>

---------

Co-authored-by: tcaimm <93749364+tcaimm@users.noreply.github.com>
2026-01-12 20:07:02 +05:30
sayakpaul
4b020c5213 up 2026-01-12 20:05:24 +05:30
sayakpaul
3a0efa38f5 up 2026-01-12 20:01:13 +05:30
sayakpaul
78233be7b4 up 2026-01-12 19:49:53 +05:30
sayakpaul
53a943dca6 up 2026-01-12 19:35:36 +05:30
sayakpaul
e60485498b up 2026-01-12 19:30:19 +05:30
sayakpaul
19558cbb15 fix 2026-01-12 19:23:17 +05:30
sayakpaul
0d5218856c up 2026-01-12 19:19:48 +05:30
sayakpaul
4e1ce3d417 up 2026-01-12 19:09:09 +05:30
sayakpaul
f2ced21349 up 2026-01-12 16:42:55 +05:30
sayakpaul
7192d4b5ff up 2026-01-12 16:35:13 +05:30
sayakpaul
a72f61a7ab up 2026-01-12 16:25:26 +05:30
sayakpaul
f364948359 up 2026-01-12 15:58:39 +05:30
sayakpaul
2b1f19d56b fix typing utils. 2026-01-12 15:47:23 +05:30
sayakpaul
8063353819 up 2026-01-12 15:39:12 +05:30
sayakpaul
d77d61bae4 final 2026-01-12 15:36:20 +05:30
sayakpaul
a9af091700 up 2026-01-12 14:03:28 +05:30
sayakpaul
db627652b1 up 2026-01-12 13:57:55 +05:30
sayakpaul
f9f6758533 up 2026-01-12 13:50:04 +05:30
sayakpaul
6983485eed up 2026-01-12 13:47:32 +05:30
Kashif Rasul
dad5cb55e6 Fix QwenImage txt_seq_lens handling (#12702)
* Fix QwenImage txt_seq_lens handling

* formatting

* formatting

* remove txt_seq_lens and use bool mask

* use compute_text_seq_len_from_mask

* add seq_lens to dispatch_attention_fn

* use joint_seq_lens

* remove unused index_block

* WIP: Remove seq_lens parameter and use mask-based approach

- Remove seq_lens parameter from dispatch_attention_fn
- Update varlen backends to extract seqlens from masks
- Update QwenImage to pass 2D joint_attention_mask
- Fix native backend to handle 2D boolean masks
- Fix sage_varlen seqlens_q to match seqlens_k for self-attention

Note: sage_varlen still producing black images, needs further investigation

* fix formatting

* undo sage changes

* xformers support

* hub fix

* fix torch compile issues

* fix tests

* use _prepare_attn_mask_native

* proper deprecation notice

* add deprecate to txt_seq_lens

* Update src/diffusers/models/transformers/transformer_qwenimage.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/transformers/transformer_qwenimage.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Only create the mask if there's actual padding

* fix order of docstrings

* Adds performance benchmarks and optimization details for QwenImage

Enhances documentation with comprehensive performance insights for the QwenImage pipeline.

* rope_text_seq_len = text_seq_len

* rename to max_txt_seq_len

* removed deprecated args

* undo unrelated change

* Updates QwenImage performance documentation

Removes detailed attention backend benchmarks and simplifies torch.compile performance description

Focuses on key performance improvement with torch.compile, highlighting the specific speedup from 4.70s to 1.93s on an A100 GPU

Streamlines the documentation to provide more concise and actionable performance insights

* Updates deprecation warnings for txt_seq_lens parameter

Extends deprecation timeline for txt_seq_lens from version 0.37.0 to 0.39.0 across multiple Qwen image-related models

Adds a new unit test to verify the deprecation warning behavior for the txt_seq_lens parameter

* fix compile

* formatting

* fix compile tests

* rename helper

* remove duplicate

* smaller values

* removed

* use torch.cond for torch compile

* Construct joint attention mask once

* test different backends

* construct joint attention mask once to avoid reconstructing in every block

* Update src/diffusers/models/attention_dispatch.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* formatting

* raising an error from the EditPlus pipeline when batch_size > 1

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: cdutr <dutra_carlos@hotmail.com>
2026-01-12 13:45:09 +05:30
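The mask-based approach this PR converges on replaces the explicit txt_seq_lens argument with per-sample lengths derived from a 2D boolean padding mask. The helper name below mirrors the compute_text_seq_len_from_mask mentioned in the commits, but the body is an illustrative sketch, not the actual implementation:

```python
import numpy as np

def compute_text_seq_len_from_mask(attention_mask):
    """attention_mask: (batch, seq_len) bool, True = real token, False = padding.

    Returns per-sample token counts and the max length (e.g. for sizing RoPE),
    assuming padding follows the real tokens.
    """
    seq_lens = attention_mask.sum(axis=-1)             # real tokens per sample
    return seq_lens, int(seq_lens.max())

mask = np.array([[True, True, True, False],
                 [True, True, False, False]])
lens, max_len = compute_text_seq_len_from_mask(mask)
assert lens.tolist() == [3, 2] and max_len == 3
```

This also enables the "only create the mask if there's actual padding" optimization from the PR: when every entry of the mask is True, the mask carries no information and attention can run without one.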
sayakpaul
c13b264589 up 2026-01-12 13:43:11 +05:30
sayakpaul
ac3bd4baee ifx 2026-01-12 13:36:41 +05:30
sayakpaul
e47af9bada python 3.10. 2026-01-12 13:00:50 +05:30
sayakpaul
7407adabdc up. 2026-01-12 12:59:19 +05:30
sayakpaul
8390581b5d getting pro at resolving conflicts. 2026-01-12 12:40:56 +05:30
Sayak Paul
a2a6abc0d6 Update setup.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2026-01-12 10:29:43 +05:30
Francisco Kurucz
b86bd99eac Fix link to diffedit implementation reference (#12708) 2026-01-10 11:13:23 -08:00
omahs
5b202111bf Fix typos (#12705) 2026-01-10 11:11:15 -08:00
Sayak Paul
4ac2b4a521 [docs] polish caching docs. (#12684)
* polish caching docs.

* Update docs/source/en/optimization/cache.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/optimization/cache.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* up

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2026-01-10 10:09:05 -08:00
YiYi Xu
418313bbf6 [Modular] better docstring (#12932)
add output to auto blocks + core denoising block for better doc string
2026-01-09 23:53:56 -10:00
Rafael Tvelov
2120c3096f Fix: typo in autoencoder_dc.py (#12687)
Fix typo in autoencoder_dc.py

Fixing typo in `get_block` function's parameter name:
"qkv_mutliscales" -> "qkv_multiscales"

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2026-01-09 22:01:54 -10:00
Sayak Paul
ed6e5ecf67 [LoRA] add LoRA support to LTX-2 (#12933)
* up

* fixes

* tests

* docs.

* fix

* change loading info.

* up

* up
2026-01-10 11:27:22 +05:30
Sayak Paul
d44b5f86e6 fix how is_fsdp is determined (#12960)
up
2026-01-10 10:34:25 +05:30
Jay Wu
02c7adc356 [ChronoEdit] support multiple loras (#12679)
Co-authored-by: wjay <wjay@nvidia.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2026-01-09 15:50:16 -10:00
Sayak Paul
a3cc0e7a52 [modular] error early in enable_auto_cpu_offload (#12578)
error early in auto_cpu_offload
2026-01-09 15:30:52 -10:00
Daniel Socek
2a6cdc0b3e Fix ftfy name error in Wan pipeline (#12314)
Signed-off-by: Daniel Socek <daniel.socek@intel.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2026-01-09 14:02:40 -10:00
SahilCarterr
1791306739 [Fix] syntax in QwenImageEditPlusPipeline (#12371)
* Fixes syntax for consistency among pipelines

* Update test_qwenimage_edit_plus.py
2026-01-09 13:55:42 -10:00