Sayak Paul
23c173ea58
Merge branch 'main' into sage-kernels
2025-10-13 10:47:20 +05:30
Steven Liu
8abc7aeb71
[docs] Fix syntax ( #12464 )
...
* fix syntax
* fix
* style
* fix
2025-10-11 08:13:30 +05:30
Sayak Paul
693d8a3a52
[modular] i2i and t2i support for kontext modular ( #12454 )
...
* up
* get ready
* fix import
* up
* up
2025-10-10 18:10:17 +05:30
Sayak Paul
a9df12ab45
Update Dockerfile to include zip wget for doc-builder ( #12451 )
2025-10-09 15:25:03 +05:30
Sayak Paul
a519272d97
[ci] revisit the installations in CI. ( #12450 )
...
* revisit the installations in CI.
* up
* up
* up
* empty
* up
* up
* up
2025-10-08 19:21:24 +05:30
Sayak Paul
345864eb85
fix more torch.distributed imports ( #12425 )
...
* up
* unguard.
2025-10-08 10:45:39 +05:30
Sayak Paul
35e538d46a
fix dockerfile definitions. ( #12424 )
...
* fix dockerfile definitions.
* python 3.10 slim.
* up
* up
* up
* up
* up
* revert pr_tests.yml changes
* up
* up
* reduce python version for torch 2.1.0
2025-10-08 09:46:18 +05:30
Sayak Paul
3688c9d443
Merge branch 'main' into sage-kernels
2025-10-08 09:35:09 +05:30
Sayak Paul
2dc31677e1
Align Flux modular more and more with Qwen modular ( #12445 )
...
* start
* fix
* up
2025-10-08 09:22:34 +05:30
Linoy Tsaban
1066de8c69
[Qwen LoRA training] fix bug when offloading ( #12440 )
...
* fix bug when offload and cache_latents both enabled
* fix bug when offload and cache_latents both enabled
* fix bug when offload and cache_latents both enabled
* fix bug when offload and cache_latents both enabled
* fix bug when offload and cache_latents both enabled
* fix bug when offload and cache_latents both enabled
* fix bug when offload and cache_latents both enabled
* fix bug when offload and cache_latents both enabled
* fix bug when offload and cache_latents both enabled
2025-10-07 18:27:15 +03:00
sayakpaul
d3441340b9
support automatic dispatch.
2025-10-07 18:40:12 +05:30
Sayak Paul
18c3e8ee0c
Merge branch 'main' into sage-kernels
2025-10-07 14:59:01 +05:30
Sayak Paul
2d69bacb00
handle offload_state_dict when initializing transformers models ( #12438 )
2025-10-07 13:51:20 +05:30
Changseop Yeom
0974b4c606
[i18n-KO] Fix typo and update translation in ethical_guidelines.md ( #12435 )
2025-10-06 14:24:05 -07:00
Charles
cf4b97b233
[perf] Cache version checks ( #12399 )
2025-10-06 17:45:34 +02:00
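The idea behind this perf commit can be sketched with stdlib memoization: package-version lookups hit the filesystem but never change within a process, so cache them. A minimal sketch with illustrative helper names, not the actual diffusers code:

```python
from functools import lru_cache
from importlib.metadata import PackageNotFoundError, version
from typing import Optional

@lru_cache(maxsize=None)
def get_package_version(package: str) -> Optional[str]:
    # importlib.metadata.version reads installed package metadata from
    # disk; caching turns repeated checks into a dict lookup.
    try:
        return version(package)
    except PackageNotFoundError:
        return None

@lru_cache(maxsize=None)
def is_version_at_least(package: str, minimum: str) -> bool:
    v = get_package_version(package)
    if v is None:
        return False
    # Naive numeric-tuple comparison for the sketch; real code would use
    # packaging.version to handle pre-releases correctly.
    def key(s: str):
        return tuple(int(p) for p in s.split(".") if p.isdigit())
    return key(v) >= key(minimum)
```

Because both helpers are `lru_cache`d, a version check on a hot path (e.g. inside a per-call guard) pays the parsing cost only once per process.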
Sayak Paul
f630dab8a2
Merge branch 'main' into sage-kernels
2025-10-06 19:15:00 +05:30
Sayak Paul
7f3e9b8695
make flux ready for mellon ( #12419 )
...
* make flux ready for mellon
* up
* Apply suggestions from code review
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
---------
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2025-10-06 13:15:54 +05:30
sayakpaul
e9ea1c5b2c
up
2025-10-06 10:47:12 +05:30
SahilCarterr
ce90f9b2db
[FIX] Text to image training peft version ( #12434 )
...
Fix peft error
2025-10-06 08:24:54 +05:30
Sayak Paul
c3675d4c9b
[core] support QwenImage Edit Plus in modular ( #12416 )
...
* up
* up
* up
* up
* up
* up
* remove saves
* move things around a bit.
* get ready.
2025-10-05 21:57:13 +05:30
Vladimir Mandic
2b7deffe36
fix scale_shift_factor being on cpu for wan and ltx ( #12347 )
...
* wan fix scale_shift_factor being on cpu
* apply device cast to ltx transformer
* Apply style fixes
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-10-05 09:23:38 +05:30
Linoy Tsaban
941ac9c3d9
[training-scripts] Make more examples UV-compatible (follow up on #12000 ) ( #12407 )
...
* make qwen and kontext uv compatible
* add torchvision
* add torchvision
* add datasets, bitsandbytes, prodigyopt
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-10-03 17:46:47 +03:00
Benjamin Bossan
7242b5ff62
FIX Test to ignore warning for enable_lora_hotswap ( #12421 )
...
I noticed that the test should exercise the option check_compiled="ignore"
but was using check_compiled="warn". This has been fixed; the correct
argument is now passed.
However, the fact that the test passed means it was incorrect to begin
with: the way logs are collected does not capture the logger.warning call
here (not sure why). To amend this, I'm now using assertNoLogs. With this
change, the test correctly fails when the wrong argument is passed.
2025-10-02 20:57:11 +02:00
Sayak Paul
b4297967a0
[core] conditionally import torch distributed stuff. ( #12420 )
...
conditionally import torch distributed stuff.
2025-10-02 20:38:02 +05:30
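The pattern named in this commit title can be sketched as a guarded import, so the library stays importable on builds where torch or its distributed support is unavailable. Helper and variable names below are illustrative, not the actual diffusers API:

```python
import importlib.util

def is_torch_distributed_available() -> bool:
    # Probe for torch without importing it eagerly.
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    # torch.distributed.is_available() is False on builds compiled
    # without distributed support (e.g. some macOS/Windows wheels).
    return torch.distributed.is_available()

if is_torch_distributed_available():
    import torch.distributed as dist
else:
    dist = None  # call sites must check availability before use
```

Call sites then guard on the flag (or on `dist is None`) instead of importing `torch.distributed` at module top level.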
Sayak Paul
9ae5b6299d
[ci] xfail failing tests in CI. ( #12418 )
...
xfail failing tests in CI.
2025-10-02 17:46:15 +05:30
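A hedged sketch (not the actual diffusers change) of the xfail pattern this commit title refers to: mark a known-failing test as expected to fail so CI stays green while the underlying issue is tracked. The test name and reason string are illustrative:

```python
import pytest

@pytest.mark.xfail(reason="known failure tracked separately", strict=False)
def test_known_failing_behavior():
    # Deliberately failing assertion; pytest reports this as XFAIL
    # instead of a hard failure.
    assert 1 == 2
```

With `strict=False`, an unexpected pass is reported as XPASS rather than failing the run, which is convenient while the fix lands upstream.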
Sayak Paul
814d710e56
[tests] cache non lora pipeline outputs. ( #12298 )
...
* cache non lora pipeline outputs.
* up
* up
* up
* up
* Revert "up"
This reverts commit 772c32e433.
* up
* Revert "up"
This reverts commit cca03df7fc.
* up
* up
* add .
* up
* up
* up
* up
* up
* up
2025-10-01 09:02:55 +05:30
Steven Liu
cc5b31ffc9
[docs] Migrate syntax ( #12390 )
...
* change syntax
* make style
2025-09-30 10:11:19 -07:00
Steven Liu
d7a1a0363f
[docs] CP ( #12331 )
...
* init
* feedback
* feedback
* feedback
* feedback
* feedback
* feedback
2025-09-30 09:33:41 -07:00
Lucain
b59654544b
Install latest prerelease from huggingface_hub when installing transformers from main ( #12395 )
...
* Allow prerelease when installing transformers from main
* maybe better
* maybe better
* and now?
* just bored
* should be better
* works now
2025-09-30 17:02:33 +05:30
Yao Matrix
0e12ba7454
fix 3 xpu ut failures w/ latest pytorch ( #12408 )
...
fix xpu ut failures w/ latest pytorch
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
2025-09-30 14:07:48 +05:30
Dhruv Nair
20fd00b14b
[Tests] Add single file tester mixin for Models and remove unittest dependency ( #12352 )
...
* update
* update
* update
* update
* update
2025-09-30 13:28:34 +05:30
YiYi Xu
76d4e416bc
[modular]some small fix ( #12307 )
...
* fix
* add mellon node registry
* style
* update docstring to include more info!
* support custom node mellon
* HTTPError -> HfHubHTTPError
* up
* Update src/diffusers/modular_pipelines/qwenimage/node_utils.py
2025-09-29 11:42:34 -10:00
Steven Liu
c07fcf780a
[docs] Model formats ( #12256 )
...
* init
* config
* lora metadata
* feedback
* fix
* cache allocator warmup for from_single_file
* feedback
* feedback
2025-09-29 11:36:14 -07:00
Steven Liu
ccedeca96e
[docs] Distributed inference ( #12285 )
...
* init
* feedback
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-09-29 11:24:26 -07:00
Sayak Paul
64a5187d96
[quantization] feat: support aobaseconfig classes in TorchAOConfig ( #12275 )
...
* feat: support aobaseconfig classes.
* [docs] AOBaseConfig (#12302 )
init
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* up
* replace with is_torchao_version
* up
* up
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-09-29 18:04:18 +05:30
Akshay Babbar
0a151115bb
Fix #12116: preserve boolean dtype for attention masks in ChromaPipeline ( #12263 )
...
* fix: preserve boolean dtype for attention masks in ChromaPipeline
- Convert attention masks to bool and prevent dtype corruption
- Fix both positive and negative mask handling in _get_t5_prompt_embeds
- Remove float conversion in _prepare_attention_mask method
Fixes #12116
* test: add ChromaPipeline attention mask dtype tests
* test: add slow ChromaPipeline attention mask tests
* chore: removed comments
* refactor: removing redundant type conversion
* Remove dedicated dtype tests as per feedback
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2025-09-29 14:20:05 +05:30
Sayak Paul
19085ac8f4
Don't skip Qwen model tests for group offloading with disk ( #12382 )
...
up
2025-09-29 13:08:05 +05:30
Sayak Paul
041501aea9
[docs] remove docstrings from repeated methods in lora_pipeline.py ( #12393 )
...
* start unbloating docstrings (save_lora_weights).
* load_lora_weights()
* lora_state_dict
* fuse_lora
* unfuse_lora
* load_lora_into_transformer
2025-09-26 22:38:43 +05:30
Sayak Paul
9c0944581a
[docs] slight edits to the attention backends docs. ( #12394 )
...
* slight edits to the attention backends docs.
* Update docs/source/en/optimization/attention_backends.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-09-26 21:50:16 +05:30
Sayak Paul
4588bbeb42
[CI] disable installing transformers from main in ci for now. ( #12397 )
...
* disable installing transformers from main in ci for now.
* up
* up
2025-09-26 18:41:17 +05:30
Lucain
ec5449f3a1
Support both huggingface_hub v0.x and v1.x ( #12389 )
...
* Support huggingface_hub 0.x and 1.x
* httpx
2025-09-25 18:28:54 +02:00
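Supporting both major versions typically means branching once on the installed version and hiding the difference behind a shim. A hedged sketch with illustrative names, not the actual diffusers code:

```python
from importlib.metadata import PackageNotFoundError, version

def hub_major_version(default: int = 0) -> int:
    # Parse only the major component; fall back to a default when the
    # package is missing or the version string is unusual.
    try:
        return int(version("huggingface_hub").split(".")[0])
    except (PackageNotFoundError, ValueError):
        return default

# Call sites branch once on the major version, e.g. to pick which HTTP
# error type to catch (v1.x moves to an httpx-based stack, per the PR's
# "httpx" note), keeping the rest of the codebase version-agnostic.
IS_HUB_V1 = hub_major_version() >= 1
```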
DefTruth
310fdaf556
Introduce cache-dit to community optimization ( #12366 )
...
* docs: introduce cache-dit to diffusers
* docs: introduce cache-dit to diffusers
* docs: introduce cache-dit to diffusers
* docs: introduce cache-dit to diffusers
* docs: introduce cache-dit to diffusers
* docs: introduce cache-dit to diffusers
* docs: introduce cache-dit to diffusers
* misc: update examples link
* misc: update examples link
* docs: introduce cache-dit to diffusers
* docs: introduce cache-dit to diffusers
* docs: introduce cache-dit to diffusers
* docs: introduce cache-dit to diffusers
* docs: introduce cache-dit to diffusers
* Refine documentation for CacheDiT features
Updated the wording for clarity and consistency in the documentation. Adjusted sections on cache acceleration, automatic block adapter, patch functor, and hybrid cache configuration.
2025-09-24 10:50:57 -07:00
Aryan
dcb6dd9b7a
Context Parallel w/ Ring & Ulysses & Unified Attention ( #11941 )
...
* update
* update
* add coauthor
Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com>
* improve test
* handle ip adapter params correctly
* fix chroma qkv fusion test
* fix fastercache implementation
* fix more tests
* fight more tests
* add back set_attention_backend
* update
* update
* make style
* make fix-copies
* make ip adapter processor compatible with attention dispatcher
* refactor chroma as well
* remove rmsnorm assert
* minify and deprecate npu/xla processors
* update
* refactor
* refactor; support flash attention 2 with cp
* fix
* support sage attention with cp
* make torch compile compatible
* update
* refactor
* update
* refactor
* refactor
* add ulysses backward
* try to make dreambooth script work; accelerator backward not playing well
* Revert "try to make dreambooth script work; accelerator backward not playing well"
This reverts commit 768d0ea6fa.
* workaround compilation problems with triton when doing all-to-all
* support wan
* handle backward correctly
* support qwen
* support ltx
* make fix-copies
* Update src/diffusers/models/modeling_utils.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* apply review suggestions
* update docs
* add explanation
* make fix-copies
* add docstrings
* support passing parallel_config to from_pretrained
* apply review suggestions
* make style
* update
* Update docs/source/en/api/parallel.md
Co-authored-by: Aryan <aryan@huggingface.co>
* up
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
2025-09-24 19:03:25 +05:30
Alberto Chimenti
043ab2520f
Fix WanVACEPipeline to allow prompt to be None and skip encoding step ( #12251 )
...
Fixed WanVACEPipeline to allow prompt to be None and skip encoding step
2025-09-24 15:15:04 +05:30
Yao Matrix
08c29020dd
fix marigold ut case fail on xpu ( #12350 )
...
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
2025-09-24 09:32:06 +05:30
Yao Matrix
7a58734994
xpu enabling for 4 cases ( #12345 )
...
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
2025-09-24 09:31:45 +05:30
Sayak Paul
9ef118509e
[tests] disable xformers tests for pipelines where it isn't popular. ( #12277 )
...
disable xformers tests for pipelines where it isn't popular.
2025-09-24 09:02:25 +05:30
Dhruv Nair
7c54a7b38a
Fix Custom Code loading ( #12378 )
...
* update
* update
* update
2025-09-24 08:53:41 +05:30
Sayak Paul
09e777a3e1
[tests] Single scheduler in lora tests ( #12315 )
...
* single scheduler please.
* up
* up
* up
2025-09-24 08:36:50 +05:30
Steven Liu
a72bc0c4bb
[docs] Attention backends ( #12320 )
...
* init
* feedback
* update
* feedback
* fixes
2025-09-23 10:59:46 -07:00