kijai
8002d8a2f9
This is still needed for some reason too
2025-11-04 09:58:40 +02:00
kijai
9a588a42ec
Fix some precision issues with unmerged lora
2025-11-04 09:47:38 +02:00
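A minimal sketch of the kind of precision handling such a fix typically involves: accumulating the unmerged LoRA delta in fp32 before casting back to the layer's compute dtype. The helper and tensor names here are hypothetical, not the actual custom_linear.py code.

```python
import torch
import torch.nn.functional as F

def lora_delta(x, lora_down, lora_up, alpha, out_dtype):
    # Hypothetical helper: compute the low-rank product in fp32 to avoid
    # rounding error when the LoRA weights are stored in fp16/bf16.
    h = F.linear(x.float(), lora_down.float())   # (..., rank)
    d = F.linear(h, lora_up.float()) * alpha     # (..., out_features)
    return d.to(out_dtype)

x = torch.randn(2, 64)
down = torch.randn(8, 64, dtype=torch.bfloat16)  # rank x in_features
up = torch.randn(32, 8, dtype=torch.bfloat16)    # out_features x rank
print(lora_delta(x, down, up, alpha=1.0, out_dtype=torch.bfloat16).shape)
```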
kijai
509d6922f5
Update custom_linear.py
2025-11-04 09:44:07 +02:00
kijai
9fa4140159
Make lora torch.compile optional for unmerged lora application
This change has caused issues, especially with LoRAs that have dynamic rank, so it will now be disabled by default. To allow full graph with unmerged LoRAs, the option to enable compile is available in the Torch Compile Settings node.
2025-11-04 01:34:51 +02:00
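A sketch of what making compile optional on the LoRA path might look like, assuming a hypothetical `compile_lora` flag standing in for the option exposed in the Torch Compile Settings node. With dynamic-rank LoRAs, compiling this function can trigger a recompile per rank, so leaving it eager by default sidesteps that.

```python
import torch
import torch.nn.functional as F

def apply_lora(x, down, up, scale):
    # Eager unmerged-LoRA application: (x @ down.T) @ up.T, scaled.
    return F.linear(F.linear(x, down), up) * scale

# Hypothetical flag mirroring the node option; off by default because
# dynamic-rank LoRAs force needless recompiles of this small function.
compile_lora = False
lora_fn = torch.compile(apply_lora) if compile_lora else apply_lora

x = torch.randn(2, 64)
print(lora_fn(x, torch.randn(8, 64), torch.randn(32, 8), 1.0).shape)
```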
kijai
ce6e7b501d
Fix unmerged LoRA application for certain LoRAs
2025-10-30 23:28:38 +02:00
kijai
1cd8df5c00
Update custom_linear.py
2025-10-29 02:50:11 +02:00
kijai
083a8458c4
Register lora diffs as buffers to allow them to work with block swap
Unmerged LoRAs (non-GGUF for now) will now be moved with block swap instead of always being loaded from the CPU, to reduce device transfers and allow torch.compile full graph
2025-10-29 02:33:26 +02:00
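A sketch of the buffer-registration idea, using a hypothetical module rather than the actual custom_linear.py class: tensors registered with `register_buffer` are part of the module's state, so a block-swap `module.to(device)` carries the LoRA diffs along with the weights instead of re-transferring them from CPU on every forward, which also keeps the graph static for torch.compile full graph.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    # Hypothetical illustration, not the actual custom_linear.py module.
    def __init__(self, linear, down, up, scale):
        super().__init__()
        self.linear = linear
        self.scale = scale
        # Buffers move with .to()/.cuda(), so block swap moves them too.
        self.register_buffer("lora_down", down)
        self.register_buffer("lora_up", up)

    def forward(self, x):
        base = self.linear(x)
        delta = F.linear(F.linear(x, self.lora_down), self.lora_up)
        return base + self.scale * delta

layer = LoRALinear(nn.Linear(64, 32), torch.randn(8, 64), torch.randn(32, 8), 1.0)
layer.to("cpu")  # with block swap this would be the swap target device
print(layer(torch.randn(2, 64)).shape)
```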
kijai
2633119505
Update custom_linear.py
2025-10-28 01:55:25 +02:00
kijai
54c45500b0
Apply lora diffs with unmerged loras too
2025-10-27 21:01:45 +02:00
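Some LoRAs carry a full-weight diff alongside (or instead of) the low-rank pair. Applying these unmerged means adding their contribution at forward time rather than baking it into the weight. A hedged sketch under that assumption; the function and parameter names are made up for illustration.

```python
import torch
import torch.nn.functional as F

def forward_unmerged(x, weight, diff=None, down=None, up=None, scale=1.0):
    # Base matmul with the original, unmodified weight.
    y = F.linear(x, weight)
    # Full-weight diff applied at runtime instead of merged into `weight`.
    if diff is not None:
        y = y + scale * F.linear(x, diff)
    # Low-rank pair applied the same way.
    if down is not None and up is not None:
        y = y + scale * F.linear(F.linear(x, down), up)
    return y

x, w = torch.randn(2, 64), torch.randn(32, 64)
print(forward_unmerged(x, w, diff=torch.randn(32, 64),
                       down=torch.randn(8, 64), up=torch.randn(32, 8)).shape)
```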
kijai
f880b321c6
Allow compile in lora application
2025-10-27 01:34:32 +02:00
kijai
51fcbd6b3d
Revert "Allow compile here"
This reverts commit f583b56878.
2025-10-27 01:07:03 +02:00
kijai
f583b56878
Allow compile here
2025-10-27 01:06:39 +02:00
kijai
88a60d71ab
Fix
2025-10-23 23:25:20 +03:00
kijai
67fcf0ba52
Reduce needless torch.compile recompiles
2025-10-22 13:29:13 +03:00
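One common way to cut needless recompiles is to mark shape-varying input dimensions as dynamic so Dynamo does not specialize a new graph for each size. This is a generic illustration of that technique, not necessarily what this commit changed.

```python
import torch

@torch.compile
def scaled_matmul(x, w):
    return x @ w.t()

w = torch.randn(32, 64)
for seq in (8, 16, 24):
    x = torch.randn(seq, 64)
    # Without this hint, each new sequence length can trigger a recompile.
    torch._dynamo.mark_dynamic(x, 0)
    scaled_matmul(x, w)
```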
kijai
7d88316bf8
Fix for the case of applying unmerged lora to already compiled model
2025-10-18 14:00:18 +03:00
kijai
da43a51683
Unmerged lora application tweak
Should resolve some issues when using the same model with multiple samplers and different LoRAs
2025-10-13 20:18:43 +03:00
kijai
6075807e03
Fix for some fp8_scaled models
2025-10-07 13:30:41 +03:00
kijai
79583d4be3
Support WanAnimate
2025-09-19 10:14:08 +03:00
kijai
c538cd4c8a
Fix unmerged lora
oops
2025-09-02 18:48:25 +03:00
kijai
4cf23a5a40
Can't do this in-place with all models
2025-09-02 17:09:05 +03:00
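An in-place add mutates the weight tensor itself, which misbehaves when the tensor is shared between models or samplers, or is not writable (for example memory-mapped checkpoints). An out-of-place add allocates a new tensor and leaves the original intact. A generic illustration:

```python
import torch

weight = torch.randn(32, 64)
diff = torch.randn(32, 64)

# In-place: mutates `weight` itself, unsafe if it is shared or read-only.
# weight.add_(diff)

# Out-of-place: leaves the original untouched, works for all models.
patched = weight + diff
print(torch.allclose(patched - diff, weight))  # original weight unchanged
```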
kijai
42d7180759
Keep scale_weight on GPU to avoid needless casts
2025-09-02 01:04:16 +03:00
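Keeping the scale tensor resident on the compute device avoids a host-to-device transfer and dtype cast on every forward. A sketch assuming a hypothetical fp8_scaled-style layer with a `scale_weight` buffer; this is not the actual custom_linear.py code.

```python
import torch
import torch.nn.functional as F

class ScaledLinear(torch.nn.Module):
    # Hypothetical layer for illustration only.
    def __init__(self, weight, scale):
        super().__init__()
        self.weight = torch.nn.Parameter(weight, requires_grad=False)
        # Moved to the weight's device once at load time, instead of
        # being cast/copied from CPU on every forward call.
        self.register_buffer("scale_weight", scale.to(weight.device))

    def forward(self, x):
        return F.linear(x, self.weight * self.scale_weight)

layer = ScaledLinear(torch.randn(32, 64), torch.tensor(0.5))
print(layer(torch.randn(2, 64)).shape)
```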
kijai
09710f9ca0
Basic MTV Crafter support
https://github.com/DINGYANB/MTVCrafter
2025-08-18 22:42:11 +03:00
kijai
7fc5ec305d
Refactor model loading
2025-08-16 16:03:15 +03:00
kijai
7c93aea182
Rename fp8 optimization in code for clarity, as it does more than that with LoRAs
2025-08-11 14:32:57 +03:00