mirror of https://github.com/huggingface/diffusers.git synced 2026-01-27 17:22:53 +03:00
Commit Graph

6219 Commits

Author SHA1 Message Date
sayakpaul
765eb50ff1 up 2026-01-15 08:50:35 +05:30
sayakpaul
7ad97d492d resolve conflicts. 2026-01-15 08:32:22 +05:30
hlky
1ecfbfe12b disable_mmap in pipeline from_pretrained (#12854)
* update

* `disable_mmap` in `from_pretrained`

---------

Co-authored-by: DN6 <dhruv.nair@gmail.com>
2026-01-14 21:29:36 +05:30
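A minimal sketch of what this change enables, per the commit title: `disable_mmap=True` can now be passed at the pipeline level so safetensors checkpoints are read into memory instead of being memory-mapped. The model ID below is illustrative.

```python
import torch
from diffusers import DiffusionPipeline

# Per this PR, `disable_mmap` is threaded through
# `DiffusionPipeline.from_pretrained` rather than only lower-level loaders.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # illustrative model ID
    torch_dtype=torch.float16,
    disable_mmap=True,
)
```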
Marc Sun
d7fa445453 Remove 8bit device restriction (#12972)
* allow to

* update version

* fix version again

* again

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* style

* xfail

* add pr

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-01-14 20:33:30 +05:30
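A hedged sketch of what lifting the restriction allows, assuming the bitsandbytes backend (the PR's "update version" commits suggest a minimum bitsandbytes version is also bumped): an 8-bit-quantized pipeline can now be moved with `.to(...)`, which `pipeline_utils.py` previously blocked. Model ID and component list are illustrative.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.quantizers import PipelineQuantizationConfig

quant_config = PipelineQuantizationConfig(
    quant_backend="bitsandbytes_8bit",
    quant_kwargs={"load_in_8bit": True},
    components_to_quantize=["transformer"],  # illustrative component list
)
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # placeholder model ID
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # previously raised for 8-bit models; allowed after this PR
```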
Sayak Paul
7feb4fc791 [chore] make transformers version check stricter for glm image. (#12974)
* make transformers version check stricter for glm image.

* public checkpoint.
2026-01-14 10:29:48 +05:30
Sayak Paul
3c70440d26 Update distributed_inference.md to reposition sections (#12971) 2026-01-13 11:07:39 -08:00
Sayak Paul
7299121413 Add GLM-Image pipeline (building on CogView4) (#12973)
* init

* add

* add 1

* Update __init__.py

* rename

* 2

* update

* init with encoder

* merge2pipeline

* Update pipeline_glm_image.py

* remove sop

* remove useless func

* Update pipeline_glm_image.py

* up

(cherry picked from commit cfe19a31b9)

* review for work only

* change place

* Update pipeline_glm_image.py

* update

* Update transformer_glm_image.py

* 1

* no negative_prompt for GLM-Image

* remove CogView4LoraLoaderMixin

* refactor attention processor.

* update

* fix

* use staticmethod

* update

* up

* up

* update

* Update glm_image.md

* 1

* Update pipeline_glm_image.py

* Update transformer_glm_image.py

* using new transformers impl

* support

* resolution change

* fix-copies

* Update src/diffusers/pipelines/glm_image/pipeline_glm_image.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update pipeline_glm_image.py

* use cogview4

* Update pipeline_glm_image.py

* Update pipeline_glm_image.py

* revert

* update

* batch support

* update

* version guard glm image pipeline

* validate prompt_embeds and prior_token_ids

* try docs.

* 4

* up

* up

* skip properly

* fix tests

* up

* up

---------

Co-authored-by: zRzRzRzRzRzRzR <2448370773@qq.com>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2026-01-13 06:39:22 -10:00
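A hypothetical usage sketch of the resulting pipeline. The class name is inferred from `pipeline_glm_image.py` and the checkpoint ID is a placeholder; per the commit history above, the pipeline takes no `negative_prompt`.

```python
import torch
from diffusers import GlmImagePipeline  # class name inferred, not confirmed

pipe = GlmImagePipeline.from_pretrained(
    "zai-org/GLM-Image",  # placeholder checkpoint ID
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    prompt="a watercolor painting of a lighthouse at dusk",
    num_inference_steps=50,
).images[0]
image.save("glm_image.png")
```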
Álvaro Somoza
3114f6a796 [Modular] Changes for using WAN I2V (#12959)
* initial

* add layers
2026-01-13 05:25:54 -10:00
sayakpaul
463367d31d up 2026-01-13 11:52:04 +05:30
sayakpaul
987412b252 up 2026-01-13 10:19:08 +05:30
sayakpaul
b30be7d90f fix ip adapter type checking. 2026-01-13 09:52:22 +05:30
sayakpaul
4cbe1aad54 up 2026-01-13 09:26:32 +05:30
Sayak Paul
aca3b7845b Merge branch 'main' into remove-explicit-typing 2026-01-13 09:23:41 +05:30
Bissmella Bahaduri
9d68742214 Add Unified Sequence Parallel attention (#12693)
* initial scheme of unified-sp

* initial all_to_all_double

* bug fixes, added cmnts

* unified attention prototype done

* remove raising ValueError in ContextParallelConfig to enable unified attention

* bug fix

* feat: Adds Test for Unified SP Attention and Fixes a bug in Template Ring Attention

* bug fix, lse calculation, testing

bug fixes, lse calculation

switched to _all_to_all_single helper in _all_to_all_dim_exchange due to contiguity issues

bug fixes

* addressing comments

* sequence parallelism bug fixes

* code format fixes

* Apply style fixes

* code formatting fix

* added unified attention docs and removed test file

* Apply style fixes

* tip for unified attention in docs at distributed_inference.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update distributed_inference.md, adding benchmarks

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* Update docs/source/en/training/distributed_inference.md

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* function name fix

* fixed benchmark in docs

---------

Co-authored-by: KarthikSundar2002 <karthiksundar30092002@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-01-13 09:16:51 +05:30
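A sketch of how the unified path would be selected, assuming the documented context-parallel API: per this PR, requesting both a ring and an Ulysses degree in `ContextParallelConfig` (a combination that previously raised a ValueError) routes attention through unified SP. Launch under `torchrun`; the model ID and degrees are illustrative.

```python
import torch
import torch.distributed as dist
from diffusers import ContextParallelConfig, DiffusionPipeline

dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank())

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # illustrative model ID
    torch_dtype=torch.bfloat16,
).to("cuda")

# Assumes ring_degree * ulysses_degree equals the world size (4 GPUs here).
pipe.transformer.enable_parallelism(
    config=ContextParallelConfig(ring_degree=2, ulysses_degree=2)
)
```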
sayakpaul
1426c33aa5 up 2026-01-13 09:13:46 +05:30
Sayak Paul
34388bdaa8 Merge branch 'main' into remove-explicit-typing 2026-01-13 09:00:49 +05:30
sayakpaul
5ee4e19c58 handle modern types. 2026-01-13 09:00:34 +05:30
dg845
f1a93c765f Add Flag to PeftLoraLoaderMixinTests to Enable/Disable Text Encoder LoRA Tests (#12962)
* Improve incorrect LoRA format error message

* Add flag in PeftLoraLoaderMixinTests to disable text encoder LoRA tests

* Apply changes to LTX2LoraTests

* Further improve incorrect LoRA format error msg following review

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2026-01-12 16:01:58 -08:00
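A sketch of the opt-out pattern this PR introduces; the attribute name and import path are assumptions, since only the mixin and the LTX2 test class are named above.

```python
import unittest

from tests.lora.utils import PeftLoraLoaderMixinTests  # path assumed

class LTX2LoraTests(PeftLoraLoaderMixinTests, unittest.TestCase):
    # Assumed flag name: a class-level switch like this lets pipelines whose
    # LoRAs never target the text encoder skip those tests wholesale.
    text_encoder_lora_supported = False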
Sayak Paul
337ac577af Merge branch 'main' into remove-explicit-typing 2026-01-12 21:36:43 +05:30
sayakpaul
beede725d1 up 2026-01-12 20:15:18 +05:30
Leo Jiang
29a930a142 Bugfix for flux2 img2img prediction (#12855)
* Bugfix for dreambooth flux2 img2img

---------

Co-authored-by: tcaimm <93749364+tcaimm@users.noreply.github.com>
2026-01-12 20:07:02 +05:30
sayakpaul
4b020c5213 up 2026-01-12 20:05:24 +05:30
sayakpaul
3a0efa38f5 up 2026-01-12 20:01:13 +05:30
sayakpaul
78233be7b4 up 2026-01-12 19:49:53 +05:30
sayakpaul
53a943dca6 up 2026-01-12 19:35:36 +05:30
sayakpaul
e60485498b up 2026-01-12 19:30:19 +05:30
sayakpaul
19558cbb15 fix 2026-01-12 19:23:17 +05:30
sayakpaul
0d5218856c up 2026-01-12 19:19:48 +05:30
sayakpaul
4e1ce3d417 up 2026-01-12 19:09:09 +05:30
sayakpaul
f2ced21349 up 2026-01-12 16:42:55 +05:30
sayakpaul
7192d4b5ff up 2026-01-12 16:35:13 +05:30
sayakpaul
a72f61a7ab up 2026-01-12 16:25:26 +05:30
sayakpaul
f364948359 up 2026-01-12 15:58:39 +05:30
sayakpaul
2b1f19d56b fix typing utils. 2026-01-12 15:47:23 +05:30
sayakpaul
8063353819 up 2026-01-12 15:39:12 +05:30
sayakpaul
d77d61bae4 final 2026-01-12 15:36:20 +05:30
sayakpaul
a9af091700 up 2026-01-12 14:03:28 +05:30
sayakpaul
db627652b1 up 2026-01-12 13:57:55 +05:30
sayakpaul
f9f6758533 up 2026-01-12 13:50:04 +05:30
sayakpaul
6983485eed up 2026-01-12 13:47:32 +05:30
Kashif Rasul
dad5cb55e6 Fix QwenImage txt_seq_lens handling (#12702)
* Fix QwenImage txt_seq_lens handling

* formatting

* formatting

* remove txt_seq_lens and use bool mask

* use compute_text_seq_len_from_mask

* add seq_lens to dispatch_attention_fn

* use joint_seq_lens

* remove unused index_block

* WIP: Remove seq_lens parameter and use mask-based approach

- Remove seq_lens parameter from dispatch_attention_fn
- Update varlen backends to extract seqlens from masks
- Update QwenImage to pass 2D joint_attention_mask
- Fix native backend to handle 2D boolean masks
- Fix sage_varlen seqlens_q to match seqlens_k for self-attention

Note: sage_varlen is still producing black images and needs further investigation

* fix formatting

* undo sage changes

* xformers support

* hub fix

* fix torch compile issues

* fix tests

* use _prepare_attn_mask_native

* proper deprecation notice

* add deprecate to txt_seq_lens

* Update src/diffusers/models/transformers/transformer_qwenimage.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/transformers/transformer_qwenimage.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Only create the mask if there's actual padding

* fix order of docstrings

* Adds performance benchmarks and optimization details for QwenImage

Enhances the documentation with comprehensive performance insights for the QwenImage pipeline.

* rope_text_seq_len = text_seq_len

* rename to max_txt_seq_len

* removed deprecated args

* undo unrelated change

* Updates QwenImage performance documentation

Removes detailed attention backend benchmarks and simplifies the torch.compile performance description

Focuses on the key torch.compile improvement, highlighting the speedup from 4.70s to 1.93s on an A100 GPU

Streamlines the documentation into more concise, actionable performance insights

* Updates deprecation warnings for txt_seq_lens parameter

Extends deprecation timeline for txt_seq_lens from version 0.37.0 to 0.39.0 across multiple Qwen image-related models

Adds a new unit test to verify the deprecation warning behavior for the txt_seq_lens parameter

* fix compile

* formatting

* fix compile tests

* rename helper

* remove duplicate

* smaller values

* removed

* use torch.cond for torch compile

* Construct joint attention mask once

* test different backends

* construct joint attention mask once to avoid reconstructing in every block

* Update src/diffusers/models/attention_dispatch.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* formatting

* raising an error from the EditPlus pipeline when batch_size > 1

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: cdutr <dutra_carlos@hotmail.com>
2026-01-12 13:45:09 +05:30
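The thrust of this PR is replacing the explicit `txt_seq_lens` argument (deprecated, with the timeline extended to 0.39.0) with lengths derived from a 2D boolean attention mask. A sketch of that derivation, with the helper name taken from the commit messages but the body assumed:

```python
import torch

def compute_text_seq_len_from_mask(attention_mask: torch.Tensor) -> torch.Tensor:
    # attention_mask: (batch, text_seq_len) bool, True where tokens are real.
    return attention_mask.sum(dim=-1)

mask = torch.tensor([[True, True, True, False],
                     [True, True, False, False]])
print(compute_text_seq_len_from_mask(mask))  # tensor([3, 2])
```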
sayakpaul
c13b264589 up 2026-01-12 13:43:11 +05:30
sayakpaul
ac3bd4baee fix 2026-01-12 13:36:41 +05:30
sayakpaul
e47af9bada python 3.10. 2026-01-12 13:00:50 +05:30
sayakpaul
7407adabdc up. 2026-01-12 12:59:19 +05:30
sayakpaul
8390581b5d getting pro at resolving conflicts. 2026-01-12 12:40:56 +05:30
Sayak Paul
a2a6abc0d6 Update setup.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2026-01-12 10:29:43 +05:30
Francisco Kurucz
b86bd99eac Fix link to diffedit implementation reference (#12708) 2026-01-10 11:13:23 -08:00
omahs
5b202111bf Fix typos (#12705) 2026-01-10 11:11:15 -08:00
Sayak Paul
4ac2b4a521 [docs] polish caching docs. (#12684)
* polish caching docs.

* Update docs/source/en/optimization/cache.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/optimization/cache.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* up

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2026-01-10 10:09:05 -08:00
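For reference, one of the cache APIs the polished page documents, sketched under the assumption of the pyramid-attention-broadcast backend (the pipeline choice and thresholds are illustrative):

```python
import torch
from diffusers import CogVideoXPipeline, PyramidAttentionBroadcastConfig

pipe = CogVideoXPipeline.from_pretrained(
    "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
).to("cuda")

# Skip every other spatial attention block within the given timestep window,
# reusing cached attention outputs instead of recomputing them.
config = PyramidAttentionBroadcastConfig(
    spatial_attention_block_skip_range=2,
    spatial_attention_timestep_skip_range=(100, 800),
    current_timestep_callback=lambda: pipe.current_timestep,
)
pipe.transformer.enable_cache(config)
```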