DN6
80702d222d
update
2025-07-17 13:05:43 +05:30
DN6
625cc8ede8
update
2025-07-17 07:14:35 +05:30
yiyixuxu
a2a9e4eadb
Merge branch 'modular-test' of github.com:huggingface/diffusers into modular-test
2025-07-16 12:03:09 +02:00
yiyixuxu
0998bd75ad
up
2025-07-16 12:02:58 +02:00
yiyixuxu
5f560d05a2
up
2025-07-16 11:58:23 +02:00
yiyixuxu
4b7a9e9fa9
prepare_latents_inpaint always returns noise and image_latents
2025-07-16 11:57:29 +02:00
yiyixuxu
d8fa2de36f
remove more unused func
2025-07-16 04:29:27 +02:00
YiYi Xu
4df2739a5e
Merge branch 'main' into modular-test
2025-07-15 16:27:33 -10:00
yiyixuxu
d92855ddf0
style
2025-07-16 04:26:27 +02:00
yiyixuxu
0a5c90ed47
add names property to pipeline blocks
2025-07-16 04:25:26 +02:00
Álvaro Somoza
aa14f090f8
[ControlnetUnion] Propagate #11888 to img2img (#11929)
...
img2img fixes
2025-07-15 21:41:35 -04:00
Guoqing Zhu
c5d6e0b537
Fixed bug: Uncontrolled recursive calls that caused an infinite loop when loading certain pipelines containing Transformer2DModel (#11923)
...
* fix a bug about loop call
* fix a bug about loop call
* ruff format
---------
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2025-07-15 14:58:37 -10:00
lostdisc
39831599f1
Remove forced float64 from ONNX stable diffusion pipelines (#11054)
...
* Update pipeline_onnx_stable_diffusion.py to remove float64
init_noise_sigma was being set as float64 before multiplying with latents, which upcast the latents to float64 as well and caused errors with onnxruntime, which expects float16.
* Update pipeline_onnx_stable_diffusion_inpaint.py to remove float64 (same issue as above)
* Update pipeline_onnx_stable_diffusion_upscale.py to remove float64 (same issue as above)
* Update pipeline_onnx_stable_diffusion.py with comment for previous commit
Added a comment on the purpose of init_noise_sigma. This comment exists in related scripts that use the same line of code, but it was missing here.
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2025-07-15 14:57:28 -10:00
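The commit above describes a dtype-promotion pitfall. A minimal NumPy sketch of it follows; the value of `init_noise_sigma` is a hypothetical stand-in (the real attribute comes from a diffusers scheduler and is not reproduced here), and the exact promotion behavior depends on the NumPy version (value-based casting in 1.x vs NEP 50 in 2.x):

```python
import numpy as np

# Hypothetical stand-in for scheduler.init_noise_sigma (a float64 scalar).
init_noise_sigma = np.float64(14.6)

latents = np.random.randn(1, 4, 8, 8).astype(np.float16)

# Depending on NumPy's promotion rules, multiplying by a float64 scalar can
# silently upcast the product to float64, which onnxruntime then rejects
# when the exported model expects float16 inputs.
maybe_upcast = latents * init_noise_sigma

# Casting the scalar to the latents' dtype first keeps the product in float16.
safe = latents * init_noise_sigma.astype(latents.dtype)
assert safe.dtype == np.float16
```

This is the shape of the fix the commit applies: keep the scalar in the latents' dtype instead of letting the scheduler's float64 value drive promotion.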
Aryan
b73c738392
Remove device synchronization when loading weights (#11927)
...
* update
* make style
2025-07-15 21:40:57 +05:30
Aryan
06fd427797
[tests] Improve Flux tests (#11919)
...
update
2025-07-15 10:47:41 +05:30
dependabot[bot]
48a551251d
Bump aiohttp from 3.10.10 to 3.12.14 in /examples/server (#11924)
...
Bumps [aiohttp](https://github.com/aio-libs/aiohttp) from 3.10.10 to 3.12.14.
- [Release notes](https://github.com/aio-libs/aiohttp/releases)
- [Changelog](https://github.com/aio-libs/aiohttp/blob/master/CHANGES.rst)
- [Commits](https://github.com/aio-libs/aiohttp/compare/v3.10.10...v3.12.14)
---
updated-dependencies:
- dependency-name: aiohttp
dependency-version: 3.12.14
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-07-15 09:15:57 +05:30
yiyixuxu
0fa58127f8
make style
2025-07-15 03:05:36 +02:00
yiyixuxu
b165cf3742
rearrange the params into groups: default params / image params / batch params / callback params
2025-07-15 03:03:29 +02:00
Hengyue-Bi
6398fbc391
Fix: Align VAE processing in ControlNet SD3 training with inference (#11909)
...
Fix: Apply vae_shift_factor in ControlNet SD3 training
2025-07-14 14:54:38 -04:00
Colle
3c8b67b371
Flux: pass joint_attention_kwargs when using gradient_checkpointing (#11814)
...
Flux: pass joint_attention_kwargs when gradient_checkpointing
2025-07-11 08:35:18 -10:00
Steven Liu
9feb946432
[docs] torch.compile blog post (#11837)
...
* add blog post
* feedback
* feedback
2025-07-11 10:29:40 -07:00
Aryan
c90352754a
Speedup model loading by 4-5x ⚡ (#11904)
...
* update
* update
* update
* pin accelerate version
* add comment explanations
* update docstring
* make style
* non_blocking does not matter for dtype cast
* _empty_cache -> clear_cache
* update
* Update src/diffusers/models/model_loading_utils.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Update src/diffusers/models/model_loading_utils.py
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
2025-07-11 21:43:53 +05:30
Sayak Paul
7a935a0bbe
[tests] Unify compilation + offloading tests in quantization (#11910)
...
* unify the quant compile + offloading tests.
* fix
* update
2025-07-11 17:02:29 +05:30
chenxiao
941b7fc084
Avoid creating tensor in CosmosAttnProcessor2_0 (#11761) (#11763)
...
* Avoid creating tensor in CosmosAttnProcessor2_0 (#11761)
* up
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2025-07-10 11:51:05 -10:00
Álvaro Somoza
76a62ac9cc
[ControlnetUnion] Multiple Fixes (#11888)
...
fixes
---------
Co-authored-by: hlky <hlky@hlky.ac>
2025-07-10 14:35:28 -04:00
Sayak Paul
1c6ab9e900
[utils] account for MPS when available in get_device(). (#11905)
...
* account for MPS when available in get_device().
* fix
2025-07-10 13:30:54 +05:30
Sayak Paul
265840a098
[LoRA] fix: disabling hooks when loading loras. (#11896)
...
fix: disabling hooks when loading loras.
2025-07-10 10:30:10 +05:30
dependabot[bot]
9f4d997d8f
Bump torch from 2.4.1 to 2.7.0 in /examples/server (#11429)
...
Bumps [torch](https://github.com/pytorch/pytorch) from 2.4.1 to 2.7.0.
- [Release notes](https://github.com/pytorch/pytorch/releases)
- [Changelog](https://github.com/pytorch/pytorch/blob/main/RELEASE.md)
- [Commits](https://github.com/pytorch/pytorch/compare/v2.4.1...v2.7.0)
---
updated-dependencies:
- dependency-name: torch
dependency-version: 2.7.0
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2025-07-10 09:24:10 +05:30
Sayak Paul
b41abb2230
[quant] QoL improvements for pipeline-level quant config (#11876)
...
* add repr for pipelinequantconfig.
* update
2025-07-10 08:53:01 +05:30
YiYi Xu
f33b89bafb
The Modular Diffusers (#9672)
...
adding modular diffusers as experimental feature
---------
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-07-09 16:00:28 -10:00
Álvaro Somoza
48a6d29550
[SD3] CFG Cutoff fix and official callback (#11890)
...
fix and official callback
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2025-07-09 14:31:11 -04:00
Sayak Paul
2d3d376bc0
Fix unique memory address when doing group-offloading with disk (#11767)
...
* fix memory address problem
* add more tests
* updates
* updates
* update
* _group_id = group_id
* update
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* update
* update
* update
* fix
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2025-07-09 21:29:34 +05:30
Sébastien Iooss
db715e2c8c
feat: add multiple input image support in Flux Kontext (#11880)
...
* feat: add multiple input image support in Flux Kontext
* move model to community
* fix linter
2025-07-09 11:09:59 -04:00
Sayak Paul
754fe85cac
[tests] add compile + offload tests for GGUF. (#11740)
...
* add compile + offload tests for GGUF.
* quality
* add init.
* prop.
* change to flux.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2025-07-09 13:42:13 +05:30
Sayak Paul
cc1f9a2ce3
[tests] mark the wanvace lora tester flaky (#11883)
...
* mark wanvace lora tests as flaky
* ability to apply is_flaky at a class-level
* update
* increase max_attempt.
* increase attempt.
2025-07-09 13:27:15 +05:30
Sayak Paul
737d7fc3b0
[tests] Remove more deprecated tests (#11895)
...
* remove k diffusion tests
* remove script
2025-07-09 13:10:44 +05:30
Sayak Paul
be23f7df00
[Docker] update doc builder dockerfile to include quant libs. (#11728)
...
update doc builder dockerfile to include quant libs.
2025-07-09 12:27:22 +05:30
Sayak Paul
86becea77f
Pin k-diffusion for CI (#11894)
...
* remove k-diffusion as we don't use it from the core.
* Revert "remove k-diffusion as we don't use it from the core."
This reverts commit 8bc86925a0.
* pin k-diffusion
2025-07-09 12:17:45 +05:30
Dhruv Nair
7e3bf4aff6
[CI] Speed up GPU PR Tests (#11887)
...
update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-07-09 11:00:23 +05:30
shm4r7
de043c6044
Update chroma.md (#11891)
...
Fix typo in Inference example code
2025-07-09 09:58:38 +05:30
Sayak Paul
4c20624cc6
[tests] annotate compilation test classes with bnb (#11715)
...
annotate compilation test classes with bnb
2025-07-09 09:24:52 +05:30
Aryan
0454fbb30b
First Block Cache (#11180)
...
* update
* modify flux single blocks to make compatible with cache techniques (without too much model-specific intrusion code)
* remove debug logs
* update
* cache context for different batches of data
* fix hs residual bug for single return outputs; support ltx
* fix controlnet flux
* support flux, ltx i2v, ltx condition
* update
* update
* Update docs/source/en/api/cache.md
* Update src/diffusers/hooks/hooks.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* address review comments pt. 1
* address review comments pt. 2
* cache context refactor; address review pt. 3
* address review comments
* metadata registration with decorators instead of centralized
* support cogvideox
* support mochi
* fix
* remove unused function
* remove central registry based on review
* update
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2025-07-09 03:27:15 +05:30
Dhruv Nair
cbc8ced20f
[CI] Fix big GPU test marker (#11786)
...
* update
* update
2025-07-08 22:09:09 +05:30
Sayak Paul
01240fecb0
[training] add Kontext i2i training (#11858)
...
* feat: enable i2i fine-tuning in Kontext script.
* readme
* more checks.
* Apply suggestions from code review
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
* fixes
* fix
* add proj_mlp to the mix
* Update README_flux.md
add note on installing from commit `05e7a854d0a5661f5b433f6dd5954c224b104f0b`
* fix
* fix
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-07-08 21:04:16 +05:30
Steven Liu
ce338d4e4a
[docs] LoRA metadata (#11848)
...
* draft
* hub image
* update
* fix
2025-07-08 08:29:38 -07:00
Sayak Paul
bc55b631fd
[tests] remove tests for deprecated pipelines. (#11879)
...
* remove tests for deprecated pipelines.
* remove folders
* test_pipelines_common
2025-07-08 07:13:16 +05:30
Sayak Paul
15d50f16f2
[docs] fix references in flux pipelines. (#11857)
...
* fix references in flux.
* Update src/diffusers/pipelines/flux/pipeline_flux_kontext.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-07-07 22:20:34 +05:30
Sayak Paul
2c30287958
[chore] deprecate blip controlnet pipeline. (#11877)
...
* deprecate blip controlnet pipeline.
* last_supported_version
2025-07-07 13:25:40 +05:30
Aryan
425a715e35
Fix Wan AccVideo/CausVid fuse_lora (#11856)
...
* fix
* actually, better fix
* empty commit; trigger tests again
* mark wanvace test as flaky
2025-07-04 21:10:35 +05:30
Benjamin Bossan
2527917528
FIX set_lora_device when target layers differ (#11844)
...
* FIX set_lora_device when target layers differ
Resolves #11833
Fixes a bug that occurs after calling set_lora_device when multiple LoRA
adapters are loaded that target different layers.
Note: Technically, the accompanying test does not require a GPU because
the bug is triggered even if the parameters are already on the
corresponding device, i.e. loading on CPU and then changing the device
to CPU is sufficient to cause the bug. However, this may be optimized
away in the future, so I decided to test with GPU.
* Update docstring to warn about device mismatch
* Extend docstring with an example
* Fix docstring
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-07-04 19:26:17 +05:30
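The commit body above describes the failure mode: multiple LoRA adapters loaded at once, each targeting a different set of layers. The following plain-Python toy (all layer and adapter names are made up; this is not the diffusers implementation) sketches why moving an adapter's device has to skip layers that adapter never touched:

```python
# Each LoRA-modified layer tracks only the adapters that actually target it.
# Hypothetical names for illustration only.
lora_layers = {
    "down_blocks.0.attn": {"adapter_a": "cpu"},                    # adapter_a only
    "up_blocks.1.attn": {"adapter_a": "cpu", "adapter_b": "cpu"},  # both adapters
}

def set_lora_device(adapter_names, device):
    """Move the listed adapters to `device`, skipping layers they don't target.
    Without the `in` guard below, a naive loop that assumes every adapter
    exists on every LoRA layer fails (e.g. with a KeyError) as soon as the
    loaded adapters target different layer subsets."""
    for layer_name, adapters in lora_layers.items():
        for name in adapter_names:
            if name in adapters:  # the per-layer guard the fix introduces
                adapters[name] = device

set_lora_device(["adapter_b"], "cuda:0")
assert lora_layers["up_blocks.1.attn"]["adapter_b"] == "cuda:0"
assert "adapter_b" not in lora_layers["down_blocks.0.attn"]
assert lora_layers["down_blocks.0.attn"]["adapter_a"] == "cpu"
```

As the commit notes, the bug triggers even when source and target device are the same, since the crash comes from indexing an adapter that a layer never had, not from the actual transfer.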