kijai
cc9bf1e4f5
Store LoRA diffs in buffers for GGUF as well
2025-10-30 16:44:03 +02:00
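A minimal sketch of the buffer approach above, with hypothetical names (not the repo's actual class). Registering the LoRA diff as a module buffer, rather than a loose Python attribute, means it follows the module through device moves and stays visible to torch.compile; persistent=False keeps it out of checkpoints since it is derived from the LoRA itself.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class LinearWithLoRADiff(nn.Module):
        # Hypothetical wrapper: the diff lives in a registered buffer.
        def __init__(self, base: nn.Linear):
            super().__init__()
            self.base = base
            # Derived data: keep it out of the state dict with persistent=False.
            self.register_buffer(
                "lora_diff", torch.zeros_like(base.weight), persistent=False
            )

        def forward(self, x):
            return F.linear(x, self.base.weight + self.lora_diff, self.base.bias)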
kijai
d3010e37ab
Fix some torch compile + fp8_fast issues
2025-10-05 12:31:45 +03:00
kijai
3167e088ed
Update fp8_optimization.py
2025-10-05 12:26:44 +03:00
kijai
e5ef9752a7
Fix fp8_fast when not using loras
2025-09-08 18:54:29 +03:00
kijai
09710f9ca0
Basic MTV Crafter support
https://github.com/DINGYANB/MTVCrafter
2025-08-18 22:42:11 +03:00
kijai
99c946ae43
Fix LoRA not always working with compile
2025-08-09 11:20:30 +03:00
kijai
f0e81d01c9
prints
2025-08-09 10:52:27 +03:00
kijai
48fa904ad8
fp8 matmul for scaled models
Fp8 matmul (fp8_fast) doesn't seem feasible with unmerged LoRAs: you'd need to upcast, apply the LoRA, then downcast back to fp8 on every pass, and that is too slow. Adding the diff directly in fp8 is also not possible, since fp8 dtypes simply don't support addition.
2025-08-09 10:17:11 +03:00
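A sketch of the cost asymmetry described above (hypothetical shapes, not the wrapper's code). With a merged LoRA, the upcast/add/downcast round trip happens once at load time; unmerged, it would have to happen on every forward pass, and fp8 tensors cannot be added directly.

    import torch

    W8 = torch.randn(4096, 4096).to(torch.float8_e4m3fn)   # fp8 base weight
    delta = torch.randn(4096, 4096, dtype=torch.float16)   # LoRA diff (B @ A)

    # Merged: pay the round trip once, keep pure fp8 matmul afterwards.
    W8_merged = (W8.to(torch.float16) + delta).to(torch.float8_e4m3fn)

    # Unmerged: the same round trip would run on every forward pass -- too slow.
    # Direct fp8 addition is unsupported; this line would raise a RuntimeError:
    # W8 + delta.to(torch.float8_e4m3fn)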
kijai
d446e97309
Experimental RAAG (Ratio Aware Adaptive Guidance) implementation
https://arxiv.org/abs/2508.03442
Available in experimental_args; alpha > 0 enables it, and the default value is 1.0.
2025-08-08 00:17:32 +03:00
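A usage illustration only: the key name below is hypothetical, and the guidance function is a generic adaptive-CFG sketch, not the formula from the paper.

    import torch

    # alpha > 0 enables RAAG; 1.0 is the default (key name assumed).
    experimental_args = {"raag_alpha": 1.0}

    def adaptive_cfg(cond, uncond, base_scale, alpha=1.0):
        # Illustrative idea of "ratio aware" guidance: modulate the CFG scale
        # by a ratio derived from the two predictions instead of keeping it fixed.
        ratio = cond.norm() / (uncond.norm() + 1e-8)
        scale = base_scale * ratio.clamp(max=1.0) ** alpha
        return uncond + scale * (cond - uncond)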
kijai
6ec463288f
Fix LoRA scheduling with multiple LoRAs
2025-08-04 23:47:02 +03:00
kijai
801f543a7a
Allow LoRA timestep scheduling
LoRA strength can now be a list of floats giving the strength at each sampling step.
2025-08-04 00:02:19 +03:00
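A minimal sketch of per-step scheduling as described; the clamping behavior for short lists is an assumption, not confirmed wrapper behavior.

    # One strength value per sampling step, e.g. fade the LoRA out over the run.
    lora_strength = [1.0, 1.0, 1.0, 0.8, 0.6, 0.4, 0.2, 0.0]

    def strength_at(step: int) -> float:
        # Clamp so a short list still covers later steps.
        return lora_strength[min(step, len(lora_strength) - 1)]

    for step in range(10):
        s = strength_at(step)
        # ... scale the LoRA delta by s for this step's forward pass ...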
kijai
bdce322e65
Remove LoRAs if SetLora is bypassed/disconnected
2025-07-23 17:51:59 +03:00
kijai
9d9b188b0b
Update fp8_optimization.py
2025-07-23 01:22:42 +03:00
kijai
296baa30ce
Add WanVideoSetLoRAs
Node to set the LoRA weights used in unmerged LoRA mode. It cannot merge LoRAs, but it allows instant LoRA switching without any loading time. The effect of unmerged LoRAs is stronger and differs from merged LoRAs.
2025-07-21 22:39:17 +03:00
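A sketch of why unmerged mode switches instantly (hypothetical module, not the node's code): the base weight is never rewritten, and the low-rank product runs as a side branch, so swapping LoRAs just swaps two small matrices. One plausible reason for the stronger effect: a merged delta gets quantized along with an fp8/GGUF base weight, while the unmerged branch runs at full precision.

    import torch
    import torch.nn as nn

    class UnmergedLoRALinear(nn.Module):
        def __init__(self, base: nn.Linear, rank: int = 16, strength: float = 1.0):
            super().__init__()
            self.base = base  # untouched; may be quantized
            self.down = nn.Linear(base.in_features, rank, bias=False)
            self.up = nn.Linear(rank, base.out_features, bias=False)
            self.strength = strength

        def forward(self, x):
            # Switching LoRAs = replacing self.down / self.up; no merge, no reload.
            return self.base(x) + self.strength * self.up(self.down(x))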
kijai
aeac12ed7a
Only convert layers that actually have scaled weights...
2025-07-21 01:57:27 +03:00
kijai
29ce253bb6
Support fp8_scaled models and allow running LoRAs unmerged on other models as well
2025-07-21 01:37:30 +03:00
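"fp8_scaled" conventionally means fp8 storage plus a per-tensor scale; a dequantization sketch under that assumed convention (names hypothetical):

    import torch

    def dequantize_scaled_fp8(w_fp8, scale, dtype=torch.bfloat16):
        # Assumed convention: stored = original / scale, so multiply back on upcast.
        return w_fp8.to(dtype) * scale.to(dtype)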
kijai
863829d083
dtype fixes and maybe allow fp8_fast to work
Unsure when this changed, but it seems that _scaled_mm now works with matching dtypes? This fixes the quality degradation of fp8_fast in initial tests.
2025-04-25 03:12:10 +03:00
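For context, a minimal torch._scaled_mm call with both operands in the same fp8 dtype. The private API's signature has changed across PyTorch releases; this follows the recent form where per-tensor scales are required, and it needs a CUDA device with fp8 support (dims must be multiples of 16).

    import torch

    a = torch.randn(16, 32, device="cuda").to(torch.float8_e4m3fn)
    # The second operand must be column-major, hence the transpose.
    b = torch.randn(64, 32, device="cuda").to(torch.float8_e4m3fn).t()
    one = torch.tensor(1.0, device="cuda")

    out = torch._scaled_mm(a, b, scale_a=one, scale_b=one, out_dtype=torch.bfloat16)
    print(out.shape)  # torch.Size([16, 64])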
kijai
2ee9543d48
init
2025-02-25 20:03:08 +02:00