leffff
7084106eaa
remove unused imports
2025-10-14 20:38:40 +00:00
Lev Novitskiy
d62dffcb21
Update src/diffusers/models/transformers/transformer_kandinsky.py
...
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2025-10-14 23:25:14 +03:00
Lev Novitskiy
0190e55641
Update src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py
...
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2025-10-14 21:54:21 +03:00
Lev Novitskiy
f52f3b45b7
Update src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py
...
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2025-10-14 21:54:10 +03:00
Lev Novitskiy
88a8eea096
Update src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py
...
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2025-10-14 21:53:47 +03:00
Lev Novitskiy
235f0d5df8
Update src/diffusers/pipelines/kandinsky5/pipeline_kandinsky.py
...
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2025-10-14 21:53:32 +03:00
Lev Novitskiy
4aa22f3abe
Merge branch 'main' into main
2025-10-14 21:49:46 +03:00
Meatfucker
a4bc845478
Fix missing load_video documentation and load_video import in WanVideoToVideoPipeline example code (#12472)
...
* Update utilities.md
Add the missing load_video documentation
* Update pipeline_wan_video2video.py
Fix missing load_video import in example code
2025-10-14 10:43:21 -07:00
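For context on the entry above: the WanVideoToVideoPipeline docstring example called load_video without importing it from diffusers.utils. A minimal sketch of the corrected usage, with an illustrative checkpoint ID, prompt, and file names rather than the exact values from the documentation:

```python
import torch
from diffusers import WanVideoToVideoPipeline
from diffusers.utils import export_to_video, load_video  # load_video was the missing import

# Illustrative checkpoint; the documented example may point at a different one.
pipe = WanVideoToVideoPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")

video = load_video("input.mp4")  # returns a list of PIL frames
frames = pipe(video=video, prompt="a cat surfing a wave at sunset").frames[0]
export_to_video(frames, "output.mp4", fps=16)
```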
leffff
04efb19b1a
add usage example
2025-10-14 12:14:37 +00:00
leffff
7af80e9ffc
add gradient checkpointing and peft support
2025-10-14 11:24:24 +00:00
leffff
e3a3e9d1b6
Merge branch 'main' of https://github.com/leffff/diffusers
2025-10-13 22:38:33 +00:00
leffff
149fd53df8
fix prompt type
2025-10-13 22:38:03 +00:00
Lev Novitskiy
f35c279439
Merge branch 'huggingface:main' into main
2025-10-13 19:53:35 +03:00
Manith Ratnayake
fa468c5d57
docs: api-pipelines-qwenimage typo fix (#12461)
2025-10-13 08:57:46 -07:00
leffff
43bd1e81d2
fix license
2025-10-13 14:41:50 +00:00
leffff
45240a7317
Wrap Transformer in Diffusers style
2025-10-13 12:27:03 +00:00
Lev Novitskiy
07e11b270f
Merge branch 'huggingface:main' into main
2025-10-13 01:00:03 +03:00
leffff
70fa62baea
add nabla attention
2025-10-12 21:59:23 +00:00
Steven Liu
8abc7aeb71
[docs] Fix syntax (#12464)
...
* fix syntax
* fix
* style
* fix
2025-10-11 08:13:30 +05:30
leffff
22e14bdac8
remove prints in pipeline
2025-10-10 17:03:09 +00:00
leffff
723d149dc1
add multiprompt support
2025-10-10 17:00:23 +00:00
Lev Novitskiy
86b6c2b686
Merge branch 'huggingface:main' into main
2025-10-10 17:41:14 +03:00
leffff
c8f3a36fba
rewrite Kandinsky5T2VPipeline to diffusers style
2025-10-10 14:39:59 +00:00
Sayak Paul
693d8a3a52
[modular] i2i and t2i support for kontext modular (#12454)
...
* up
* get ready
* fix import
* up
* up
2025-10-10 18:10:17 +05:30
Lev Novitskiy
0bd738f52b
Merge branch 'huggingface:main' into main
2025-10-09 18:12:09 +03:00
leffff
a0cf07f7e0
fix 5sec generation
2025-10-09 15:09:50 +00:00
Sayak Paul
a9df12ab45
Update Dockerfile to include zip and wget for doc-builder (#12451)
2025-10-09 15:25:03 +05:30
Sayak Paul
a519272d97
[ci] revisit the installations in CI. (#12450)
...
* revisit the installations in CI.
* up
* up
* up
* empty
* up
* up
* up
2025-10-08 19:21:24 +05:30
Sayak Paul
345864eb85
fix more torch.distributed imports (#12425)
...
* up
* unguard.
2025-10-08 10:45:39 +05:30
Sayak Paul
35e538d46a
fix dockerfile definitions. (#12424)
...
* fix dockerfile definitions.
* python 3.10 slim.
* up
* up
* up
* up
* up
* revert pr_tests.yml changes
* up
* up
* reduce python version for torch 2.1.0
2025-10-08 09:46:18 +05:30
Sayak Paul
2dc31677e1
Align Flux modular more and more with Qwen modular (#12445)
...
* start
* fix
* up
2025-10-08 09:22:34 +05:30
Linoy Tsaban
1066de8c69
[Qwen LoRA training] fix bug when offloading (#12440)
...
* fix bug when offload and cache_latents both enabled
2025-10-07 18:27:15 +03:00
Sayak Paul
2d69bacb00
handle offload_state_dict when initializing transformers models (#12438)
2025-10-07 13:51:20 +05:30
Changseop Yeom
0974b4c606
[i18n-KO] Fix typo and update translation in ethical_guidelines.md (#12435)
2025-10-06 14:24:05 -07:00
Charles
cf4b97b233
[perf] Cache version checks (#12399)
2025-10-06 17:45:34 +02:00
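On the "[perf] Cache version checks" entry above: version queries parse installed-package metadata on every call, so memoizing them avoids repeated work on hot import paths. A hedged sketch of the general pattern; the helper name is illustrative, not diffusers' actual internal API:

```python
from functools import lru_cache
from importlib.metadata import PackageNotFoundError, version as pkg_version

from packaging import version


@lru_cache(maxsize=None)
def is_at_least(package: str, minimum: str) -> bool:
    """Return True if `package` is installed at `minimum` or newer (result cached)."""
    try:
        return version.parse(pkg_version(package)) >= version.parse(minimum)
    except PackageNotFoundError:
        return False


# Repeated checks like this now hit the cache instead of re-parsing metadata.
print(is_at_least("torch", "2.0.0"))
```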
leffff
7db6093c53
updates
2025-10-06 12:43:04 +00:00
Sayak Paul
7f3e9b8695
make flux ready for mellon (#12419)
...
* make flux ready for mellon
* up
* Apply suggestions from code review
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
---------
Co-authored-by: Álvaro Somoza <asomoza@users.noreply.github.com>
2025-10-06 13:15:54 +05:30
SahilCarterr
ce90f9b2db
[FIX] Text-to-image training peft version (#12434)
...
Fix peft error
2025-10-06 08:24:54 +05:30
Sayak Paul
c3675d4c9b
[core] support QwenImage Edit Plus in modular (#12416)
...
* up
* up
* up
* up
* up
* up
* remove saves
* move things around a bit.
* get ready.
2025-10-05 21:57:13 +05:30
Vladimir Mandic
2b7deffe36
fix scale_shift_factor being on cpu for wan and ltx (#12347)
...
* wan fix scale_shift_factor being on cpu
* apply device cast to ltx transformer
* Apply style fixes
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-10-05 09:23:38 +05:30
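A rough sketch of the device cast the entry above describes: a scale/shift table created on CPU at init time is moved to the activations' device and dtype before the elementwise add, rather than assuming both tensors already share a device. Shapes and names below are illustrative, not the transformers' real dimensions.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

scale_shift_table = torch.randn(1, 6, 1536)        # may still live on CPU (illustrative)
temb = torch.randn(2, 6, 1536, device=device)      # timestep embedding on the compute device

# Cast before combining, mirroring the fix described in the commit.
combined = scale_shift_table.to(device=temb.device, dtype=temb.dtype) + temb
shift_msa, scale_msa, gate_msa, shift_mlp, scale_mlp, gate_mlp = combined.chunk(6, dim=1)
```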
leffff
d53f848720
add transformer pipeline first version
2025-10-04 10:10:23 +00:00
Linoy Tsaban
941ac9c3d9
[training-scripts] Make more examples UV-compatible (follow up on #12000) (#12407)
...
* make qwen and kontext uv compatible
* add torchvision
* add torchvision
* add datasets, bitsandbytes, prodigyopt
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-10-03 17:46:47 +03:00
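"UV-compatible" in the entry above refers to inline script metadata (PEP 723), which `uv run` reads to resolve a script's dependencies before executing it; the follow-up commits add the packages named in the body to that header. A hedged sketch of what such a header looks like at the top of a training script (versions omitted, and the real scripts list more packages):

```python
# /// script
# dependencies = [
#     "torchvision",
#     "datasets",
#     "bitsandbytes",
#     "prodigyopt",
# ]
# ///

# `uv run train_dreambooth_lora.py` (script name illustrative) installs the
# dependencies above into an ephemeral environment and then runs the script.
```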
Benjamin Bossan
7242b5ff62
FIX Test to ignore warning for enable_lora_hotswap (#12421)
...
I noticed that the test should be for the option check_compiled="ignore"
but it was using check_compiled="warn". This has been fixed; the correct
argument is now passed.
However, the fact that the test passed anyway means it was incorrect to
begin with. The way logs are collected does not capture the
logger.warning call here (not sure why). To amend this, I'm now using
assertNoLogs. With this change, the test correctly fails when the wrong
argument is passed.
2025-10-02 20:57:11 +02:00
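A small sketch of the testing pattern described above: unittest's assertNoLogs (Python 3.10+) fails the test if anything is logged at the given level inside the block, so passing the wrong check_compiled value can no longer slip through unnoticed. The class, test name, and commented call are illustrative, not the actual diffusers test.

```python
import logging
import unittest

logger = logging.getLogger("diffusers")


class EnableLoraHotswapTests(unittest.TestCase):
    def test_check_compiled_ignore_emits_no_warning(self):
        # assertNoLogs fails if any WARNING-level record is emitted in the block,
        # unlike the previous log collection, which silently missed the warning.
        with self.assertNoLogs(logger, level="WARNING"):
            pass  # e.g. pipe.enable_lora_hotswap(check_compiled="ignore")


if __name__ == "__main__":
    unittest.main()
```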
Sayak Paul
b4297967a0
[core] conditionally import torch distributed stuff. (#12420)
...
conditionally import torch distributed stuff.
2025-10-02 20:38:02 +05:30
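A hedged sketch of the guarded-import pattern the title above refers to: distributed symbols are imported only when torch.distributed is actually available, so minimal or CPU-only builds do not fail at import time. The guarded name is illustrative.

```python
import torch

if torch.distributed.is_available():
    from torch.distributed.device_mesh import DeviceMesh
else:
    DeviceMesh = None  # callers check for None before touching distributed features
```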
Sayak Paul
9ae5b6299d
[ci] xfail failing tests in CI. (#12418)
...
xfail failing tests in CI.
2025-10-02 17:46:15 +05:30
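For reference on the entry above, xfail is pytest's standard marker for tests that are expected to fail: the suite stays green while the failure remains visible in the report. A minimal hedged example; the test name and reason are placeholders.

```python
import pytest


@pytest.mark.xfail(reason="known failure on the CI runner, tracked separately", strict=False)
def test_known_ci_failure():
    raise AssertionError("expected to fail without breaking the suite")
```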
Sayak Paul
814d710e56
[tests] cache non lora pipeline outputs. (#12298)
...
* cache non lora pipeline outputs.
* up
* up
* up
* up
* Revert "up"
This reverts commit 772c32e433.
* up
* Revert "up"
This reverts commit cca03df7fc.
* up
* up
* add .
* up
* up
* up
* up
* up
* up
2025-10-01 09:02:55 +05:30
Steven Liu
cc5b31ffc9
[docs] Migrate syntax (#12390)
...
* change syntax
* make style
2025-09-30 10:11:19 -07:00
Steven Liu
d7a1a0363f
[docs] CP (#12331)
...
* init
* feedback
* feedback
* feedback
* feedback
* feedback
* feedback
2025-09-30 09:33:41 -07:00
Lucain
b59654544b
Install latest prerelease from huggingface_hub when installing transformers from main (#12395)
...
* Allow prerelease when installing transformers from main
* maybe better
* maybe better
* and now?
* just bored
* should be better
* works now
2025-09-30 17:02:33 +05:30
Yao Matrix
0e12ba7454
fix 3 xpu UT failures w/ latest pytorch (#12408)
...
fix xpu ut failures w/ latest pytorch
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
2025-09-30 14:07:48 +05:30