mirror of https://github.com/huggingface/diffusers.git synced 2026-01-27 17:22:53 +03:00
Commit Graph

5922 Commits

Author SHA1 Message Date
DN6
39115294b3 update 2025-10-21 11:51:18 +05:30
Steven Liu
5b5fa49a89 [docs] Organize toctree by modality (#12514)
* reorganize

* fix

---------

Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2025-10-21 10:18:54 +05:30
Fei Xie
decfa3c9e1 Fix: incorrect temporary variable key used when replacing adapter name… (#12502)
Fix: incorrect temporary variable key used when replacing the adapter name in the state dict within the load_lora_adapter function

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-10-20 15:45:37 -10:00
Dhruv Nair
48305755bf Raise warning instead of error when imports are missing for custom code (#12513)
update
2025-10-20 07:02:23 -10:00
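A minimal sketch of the warn-instead-of-raise behavior this commit describes; the helper name and log message are illustrative assumptions, not the actual diffusers implementation:

```python
import importlib.util
import logging

logger = logging.getLogger(__name__)

def warn_on_missing_imports(module_names):
    # Hypothetical helper: report missing optional dependencies for custom
    # pipeline code as a warning instead of raising immediately.
    missing = [m for m in module_names if importlib.util.find_spec(m) is None]
    if missing:
        logger.warning(
            "Custom code requires packages that are not installed: %s. "
            "Loading continues, but execution may fail later.",
            ", ".join(missing),
        )
```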
dg845
7853bfbed7 Remove Qwen Image Redundant RoPE Cache (#12452)
Refactor QwenEmbedRope to only use the LRU cache for RoPE caching
2025-10-19 18:41:58 -07:00
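A sketch of the idea behind this refactor, leaning on functools.lru_cache alone instead of an additional hand-rolled cache; the function name and shapes are illustrative, not the actual QwenEmbedRope code:

```python
from functools import lru_cache

import torch

@lru_cache(maxsize=128)
def rope_frequencies(seq_len: int, dim: int, theta: float = 10000.0) -> torch.Tensor:
    # Compute the RoPE frequency table once per (seq_len, dim, theta) and
    # let the LRU cache serve repeated calls with the same arguments.
    inv_freq = 1.0 / (theta ** (torch.arange(0, dim, 2, dtype=torch.float32) / dim))
    positions = torch.arange(seq_len, dtype=torch.float32)
    return torch.outer(positions, inv_freq)
```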
Lev Novitskiy
23ebbb4bc8 Kandinsky 5 is finally in Diffusers! (#12478)
* add kandinsky5 transformer pipeline first version

---------

Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Charles <charles@huggingface.co>
2025-10-17 18:34:30 -10:00
Ali Imran
1b456bd5d5 docs: cleanup of runway model (#12503)
* cleanup of runway model

* quality fixes
2025-10-17 14:10:50 -07:00
Sayak Paul
af769881d3 [tests] introduce VAETesterMixin to consolidate tests for slicing and tiling (#12374)
* up
2025-10-17 12:02:29 +05:30
Sayak Paul
4715c5c769 [ci] xfail more incorrect transformer imports. (#12455)
* xfail more incorrect transformer imports.

* xfail more.

* up

* up

* up
2025-10-17 10:35:19 +05:30
Steven Liu
dbe413668d [CI] Check links (#12491)
* check links

* update

* feedback

* remove
2025-10-16 10:38:16 -07:00
Steven Liu
26475082cb [docs] Attention checks (#12486)
* checks

* feedback

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-10-16 09:19:30 -07:00
YiYi Xu
f072c64bf2 LTX 0.9.8 (without IC LoRA, autoregressive sampling) (#12493)
update

Co-authored-by: Aryan <aryan@huggingface.co>
2025-10-15 07:41:17 -10:00
Sayak Paul
aed636f5f0 [tests] fix clapconfig for text backbone in audioldm2 (#12490)
fix clapconfig for text backbone in audioldm2
2025-10-15 10:57:09 +05:30
Sayak Paul
53a10518b9 remove unneeded checkpoint imports. (#12488) 2025-10-15 09:51:18 +05:30
Steven Liu
b4e6dc3037 [docs] Fix broken links (#12487)
fix broken links
2025-10-15 06:42:10 +05:30
Steven Liu
3eb40786ca [docs] Prompting (#12312)
* init

* fix

* batch inf

* feedback

* update
2025-10-14 13:53:56 -07:00
Meatfucker
a4bc845478 Fix missing load_video documentation and load_video import in WanVideoToVideoPipeline example code (#12472)
* Update utilities.md

Update missing load_video documentation

* Update pipeline_wan_video2video.py

Fix missing load_video import in example code
2025-10-14 10:43:21 -07:00
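For reference, the utility the docs and example code now import; the video URL is a placeholder:

```python
from diffusers.utils import load_video

# load_video decodes a local path or URL into a list of PIL images.
video = load_video("https://example.com/input.mp4")
print(len(video), video[0].size)
```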
Manith Ratnayake
fa468c5d57 docs: api-pipelines-qwenimage typo fix (#12461) 2025-10-13 08:57:46 -07:00
Steven Liu
8abc7aeb71 [docs] Fix syntax (#12464)
* fix syntax

* fix

* style

* fix
2025-10-11 08:13:30 +05:30
Sayak Paul
693d8a3a52 [modular] i2i and t2i support for kontext modular (#12454)
* up

* get ready

* fix import

* up

* up
2025-10-10 18:10:17 +05:30
Sayak Paul
a9df12ab45 Update Dockerfile to include zip wget for doc-builder (#12451) 2025-10-09 15:25:03 +05:30
Sayak Paul
a519272d97 [ci] revisit the installations in CI. (#12450)
* revisit the installations in CI.

* up

* up

* up

* empty

* up

* up

* up
2025-10-08 19:21:24 +05:30
Sayak Paul
345864eb85 fix more torch.distributed imports (#12425)
* up

* unguard.
2025-10-08 10:45:39 +05:30
Sayak Paul
35e538d46a fix dockerfile definitions. (#12424)
* fix dockerfile definitions.

* python 3.10 slim.

* up

* up

* up

* up

* up

* revert pr_tests.yml changes

* up

* up

* reduce python version for torch 2.1.0
2025-10-08 09:46:18 +05:30
Sayak Paul
2dc31677e1 Align Flux modular more and more with Qwen modular (#12445)
* start

* fix

* up
2025-10-08 09:22:34 +05:30
Linoy Tsaban
1066de8c69 [Qwen LoRA training] fix bug when offloading (#12440)
* fix bug when offload and cache_latents both enabled
2025-10-07 18:27:15 +03:00
Sayak Paul
2d69bacb00 handle offload_state_dict when initing transformers models (#12438) 2025-10-07 13:51:20 +05:30
Changseop Yeom
0974b4c606 [i18n-KO] Fix typo and update translation in ethical_guidelines.md (#12435) 2025-10-06 14:24:05 -07:00
Charles
cf4b97b233 [perf] Cache version checks (#12399) 2025-10-06 17:45:34 +02:00
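A sketch of the caching idea in this commit, assuming a memoized comparison helper; the function name is hypothetical:

```python
import importlib.metadata
from functools import lru_cache

from packaging import version

@lru_cache(maxsize=None)
def is_version_at_least(package: str, minimum: str) -> bool:
    # Parsing and comparing version strings on every call adds up on hot
    # paths, so memoize the result per (package, minimum) pair.
    return version.parse(importlib.metadata.version(package)) >= version.parse(minimum)
```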
Sayak Paul
7f3e9b8695 make flux ready for mellon (#12419)
* make flux ready for mellon

* up

* Apply suggestions from code review

Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>

---------

Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2025-10-06 13:15:54 +05:30
SahilCarterr
ce90f9b2db [FIX] Text to image training peft version (#12434)
Fix peft error
2025-10-06 08:24:54 +05:30
Sayak Paul
c3675d4c9b [core] support QwenImage Edit Plus in modular (#12416)
* up

* up

* up

* up

* up

* up

* remove saves

* move things around a bit.

* get ready.
2025-10-05 21:57:13 +05:30
Vladimir Mandic
2b7deffe36 fix scale_shift_factor being on cpu for wan and ltx (#12347)
* wan fix scale_shift_factor being on cpu

* apply device cast to ltx transformer

* Apply style fixes

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-10-05 09:23:38 +05:30
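A sketch of the device-cast fix, with a hypothetical function and table layout rather than the actual Wan/LTX code:

```python
import torch

def apply_scale_shift(hidden_states: torch.Tensor, scale_shift_table: torch.Tensor) -> torch.Tensor:
    # If the table was left on CPU (e.g. after offloading), move it to the
    # activations' device before doing arithmetic with them.
    scale_shift_table = scale_shift_table.to(hidden_states.device)
    shift, scale = scale_shift_table.chunk(2, dim=0)  # assumes a (2, dim) table
    return hidden_states * (1 + scale) + shift
```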
Linoy Tsaban
941ac9c3d9 [training-scripts] Make more examples UV-compatible (follow up on #12000) (#12407)
* make qwen and kontext uv compatible

* add torchvision

* add torchvision

* add datasets, bitsandbytes, prodigyopt

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-10-03 17:46:47 +03:00
Benjamin Bossan
7242b5ff62 FIX Test to ignore warning for enable_lora_hotswap (#12421)
I noticed that the test was meant to exercise the option
check_compiled="ignore" but was passing check_compiled="warn". This has
been fixed; the correct argument is now passed.

However, the fact that the test passed anyway means it was incorrect to
begin with. The way logs are collected does not capture the
logger.warning call here (not sure why). To amend this, the test now
uses assertNoLogs. With this change, it correctly fails when the wrong
argument is passed.
2025-10-02 20:57:11 +02:00
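A sketch of the assertNoLogs pattern the commit adopts; the test class and logger name are illustrative:

```python
import logging
import unittest

class HotswapWarningTest(unittest.TestCase):
    def test_ignore_option_emits_no_warning(self):
        # assertNoLogs (Python 3.10+) fails if any record at or above the
        # given level is emitted inside the block, so the test can no longer
        # pass silently when a warning goes uncollected.
        logger = logging.getLogger("diffusers")
        with self.assertNoLogs(logger, level="WARNING"):
            pass  # call enable_lora_hotswap(check_compiled="ignore") here
```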
Sayak Paul
b4297967a0 [core] conditionally import torch distributed stuff. (#12420)
conditionally import torch distributed stuff.
2025-10-02 20:38:02 +05:30
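The guard pattern this commit applies, sketched under the assumption that callers tolerate dist being None:

```python
import torch

# torch.distributed is not compiled into every torch build, so check
# availability instead of importing it unconditionally.
if torch.distributed.is_available():
    import torch.distributed as dist
else:
    dist = None
```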
Sayak Paul
9ae5b6299d [ci] xfail failing tests in CI. (#12418)
xfail failing tests in CI.
2025-10-02 17:46:15 +05:30
Sayak Paul
814d710e56 [tests] cache non lora pipeline outputs. (#12298)
* cache non lora pipeline outputs.

* up

* up

* up

* up

* Revert "up"

This reverts commit 772c32e433.

* up

* Revert "up"

This reverts commit cca03df7fc.

* up

* up

* add .

* up

* up

* up

* up

* up

* up
2025-10-01 09:02:55 +05:30
Steven Liu
cc5b31ffc9 [docs] Migrate syntax (#12390)
* change syntax

* make style
2025-09-30 10:11:19 -07:00
Steven Liu
d7a1a0363f [docs] CP (#12331)
* init

* feedback

* feedback

* feedback

* feedback

* feedback

* feedback
2025-09-30 09:33:41 -07:00
Lucain
b59654544b Install latest prerelease from huggingface_hub when installing transformers from main (#12395)
* Allow prerelease when installing transformers from main

* maybe better

* maybe better

* and now?

* just bored

* should be better

* works now
2025-09-30 17:02:33 +05:30
Yao Matrix
0e12ba7454 fix 3 xpu ut failures w/ latest pytorch (#12408)
fix xpu ut failures w/ latest pytorch

Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
2025-09-30 14:07:48 +05:30
Dhruv Nair
20fd00b14b [Tests] Add single file tester mixin for Models and remove unittest dependency (#12352)
* update

* update

* update

* update

* update
2025-09-30 13:28:34 +05:30
YiYi Xu
76d4e416bc [modular] some small fixes (#12307)
* fix

* add mellon node registry

* style

* update docstring to include more info!

* support custom node mellon

* HTTPError -> HfHubHTTPError

* up

* Update src/diffusers/modular_pipelines/qwenimage/node_utils.py
2025-09-29 11:42:34 -10:00
Steven Liu
c07fcf780a [docs] Model formats (#12256)
* init

* config

* lora metadata

* feedback

* fix

* cache allocator warmup for from_single_file

* feedback

* feedback
2025-09-29 11:36:14 -07:00
Steven Liu
ccedeca96e [docs] Distributed inference (#12285)
* init

* feedback

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-09-29 11:24:26 -07:00
Sayak Paul
64a5187d96 [quantization] feat: support AOBaseConfig classes in TorchAoConfig (#12275)
* feat: support aobaseconfig classes.

* [docs] AOBaseConfig (#12302)

init

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* up

* replace with is_torchao_version

* up

* up

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-09-29 18:04:18 +05:30
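A sketch of what the feature enables; Int8WeightOnlyConfig is assumed to exist in the installed torchao release:

```python
from diffusers import TorchAoConfig
from torchao.quantization import Int8WeightOnlyConfig  # assumed torchao API

# An AOBaseConfig instance can now be passed directly, instead of only a
# string identifier such as "int8wo".
quant_config = TorchAoConfig(Int8WeightOnlyConfig())
```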
Akshay Babbar
0a151115bb Fix #12116: preserve boolean dtype for attention masks in ChromaPipeline (#12263)
* fix: preserve boolean dtype for attention masks in ChromaPipeline

- Convert attention masks to bool and prevent dtype corruption
- Fix both positive and negative mask handling in _get_t5_prompt_embeds
- Remove float conversion in _prepare_attention_mask method

Fixes #12116

* test: add ChromaPipeline attention mask dtype tests

* test: add slow ChromaPipeline attention mask tests

* chore: removed comments

* refactor: removing redundant type conversion

* Remove dedicated dtype tests as per feedback

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2025-09-29 14:20:05 +05:30
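A toy illustration of the dtype rule the fix enforces (the token ids are made up):

```python
import torch

# Build the padding mask as bool and keep it bool; round-tripping through
# float can silently change masking semantics downstream.
token_ids = torch.tensor([[5, 9, 2, 0, 0]])
attention_mask = token_ids.ne(0)
assert attention_mask.dtype == torch.bool
```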
Sayak Paul
19085ac8f4 Don't skip Qwen model tests for group offloading with disk (#12382)
up
2025-09-29 13:08:05 +05:30
Sayak Paul
041501aea9 [docs] remove docstrings from repeated methods in lora_pipeline.py (#12393)
* start unbloating docstrings (save_lora_weights).

* load_lora_weights()

* lora_state_dict

* fuse_lora

* unfuse_lora

* load_lora_into_transformer
2025-09-26 22:38:43 +05:30