kijai
64191921d4
Squashed commit of the following:
...
commit fdb23dec7d
Author: kijai <40791699+kijai@users.noreply.github.com>
Date: Mon Jan 5 22:11:04 2026 +0200
Update model.py
commit 07d7d8ca8e
Author: kijai <40791699+kijai@users.noreply.github.com>
Date: Mon Jan 5 22:10:02 2026 +0200
remove prints
commit 01869d4bf5
Merge: 55c6720 bf1d77f
Author: kijai <40791699+kijai@users.noreply.github.com>
Date: Mon Jan 5 18:47:48 2026 +0200
Merge branch 'main' into longvie2
commit 55c672028b
Merge: b551ec9 be41f67
Author: kijai <40791699+kijai@users.noreply.github.com>
Date: Mon Dec 29 15:39:43 2025 +0200
Merge branch 'main' into longvie2
commit b551ec9e31
Merge: 9f019d7 19bcee6
Author: kijai <40791699+kijai@users.noreply.github.com>
Date: Mon Dec 29 15:03:53 2025 +0200
Merge branch 'main' into longvie2
commit 9f019d7dfb
Merge: fc5322f c5d3fb4
Author: kijai <40791699+kijai@users.noreply.github.com>
Date: Tue Dec 23 23:40:25 2025 +0200
Merge branch 'main' into longvie2
commit fc5322fae4
Merge: 222fc70 e75f814
Author: kijai <40791699+kijai@users.noreply.github.com>
Date: Tue Dec 23 22:04:15 2025 +0200
Merge branch 'main' into longvie2
commit 222fc70eb7
Author: kijai <40791699+kijai@users.noreply.github.com>
Date: Tue Dec 23 17:18:55 2025 +0200
Update nodes.py
commit 8509236da1
Author: kijai <40791699+kijai@users.noreply.github.com>
Date: Tue Dec 23 14:20:18 2025 +0200
init
2026-01-05 22:11:20 +02:00
kijai
164a6bbebd
Restore LoRA step scheduling functionality
2025-12-13 00:54:22 +02:00
kijai
fa57681424
Reduce recompiles with unmerged loras
2025-12-09 14:40:52 +02:00
kijai
c4ca252fea
Fix compiled LoRA application
...
Still needs to be unfused
2025-12-01 20:38:01 +02:00
kijai
e5be3e5263
Use torch custom_ops to avoid graph breaks with torch.compile
...
Hopefully finally fixes the torch.compile VRAM issues...
2025-12-01 00:29:27 +02:00
kijai
0cba1edd4e
Better just not compile this as it's causing issues
2025-11-30 17:53:28 +02:00
kijai
8002d8a2f9
This is still needed for some reason too
2025-11-04 09:58:40 +02:00
kijai
9a588a42ec
Fix some precision issues with unmerged lora
2025-11-04 09:47:38 +02:00
kijai
509d6922f5
Update custom_linear.py
2025-11-04 09:44:07 +02:00
kijai
9fa4140159
Make lora torch.compile optional for unmerged lora application
...
This change caused issues, especially with LoRAs that have dynamic rank, so it is now disabled by default. To allow full graph with unmerged LoRAs, the option to enable compile is available in the Torch Compile Settings node
2025-11-04 01:34:51 +02:00
kijai
ce6e7b501d
Fix unmerged LoRA application for certain LoRAs
2025-10-30 23:28:38 +02:00
kijai
1cd8df5c00
Update custom_linear.py
2025-10-29 02:50:11 +02:00
kijai
083a8458c4
Register lora diffs as buffers to allow them to work with block swap
...
Unmerged LoRAs (non-GGUF for now) are now moved with block swap instead of always being loaded from CPU, reducing device transfers and allowing torch.compile full graph
2025-10-29 02:33:26 +02:00
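The buffer-registration idea can be sketched as below. This is an illustrative example under the assumption that LoRA diffs live on the linear module itself; the class and attribute names are hypothetical, not taken from `custom_linear.py`. The point is that `register_buffer` ties the diff tensors to the module's device moves, so a block-swap mechanism that calls `.to(device)` on a block carries the LoRA diffs along automatically.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: LoRA diffs stored as non-persistent buffers so they
# follow the module across devices (e.g. during block swap) instead of
# being transferred from CPU on every forward pass.
class LoraLinear(nn.Linear):
    def set_lora_diff(self, lora_a: torch.Tensor, lora_b: torch.Tensor):
        # buffers move with .to()/.cpu()/.cuda(), unlike plain attributes;
        # persistent=False keeps them out of the state dict
        self.register_buffer("lora_a", lora_a, persistent=False)
        self.register_buffer("lora_b", lora_b, persistent=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = super().forward(x)
        if getattr(self, "lora_a", None) is not None:
            # apply the low-rank diff on whatever device the module is on
            out = out + (x @ self.lora_a.t()) @ self.lora_b.t()
        return out
```

Since the buffers are ordinary module tensors, `torch.compile` can also capture them in a full graph instead of tracing a host-to-device copy each step.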
kijai
2633119505
Update custom_linear.py
2025-10-28 01:55:25 +02:00
kijai
54c45500b0
Apply lora diffs with unmerged loras too
2025-10-27 21:01:45 +02:00
kijai
f880b321c6
Allow compile in lora application
2025-10-27 01:34:32 +02:00
kijai
51fcbd6b3d
Revert "Allow compile here"
...
This reverts commit f583b56878.
2025-10-27 01:07:03 +02:00
kijai
f583b56878
Allow compile here
2025-10-27 01:06:39 +02:00
kijai
88a60d71ab
Fix
2025-10-23 23:25:20 +03:00
kijai
67fcf0ba52
Reduce needless torch.compile recompiles
2025-10-22 13:29:13 +03:00
kijai
7d88316bf8
Fix for the case of applying unmerged lora to already compiled model
2025-10-18 14:00:18 +03:00
kijai
da43a51683
Unmerged lora application tweak
...
Should resolve some issues when using the same model with multiple samplers and different LoRAs
2025-10-13 20:18:43 +03:00
kijai
6075807e03
Fix for some fp8_scaled models
2025-10-07 13:30:41 +03:00
kijai
79583d4be3
Support WanAnimate
2025-09-19 10:14:08 +03:00
kijai
c538cd4c8a
Fix unmerged lora
...
oops
2025-09-02 18:48:25 +03:00
kijai
4cf23a5a40
Can't do this inplace with all models
2025-09-02 17:09:05 +03:00
kijai
42d7180759
keep scale_weight on gpu to avoid needless casts
2025-09-02 01:04:16 +03:00
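The `scale_weight` change above can be illustrated with a minimal sketch of the fp8-scaled linear pattern. This is an assumption about the general shape of the code, not the repo's implementation: keeping the scale tensor resident on the compute device means the forward pass applies it directly, instead of casting or transferring it from CPU on every call.

```python
import torch

# Hypothetical sketch of an fp8-scaled linear: the low-precision weight is
# upcast and multiplied by a per-tensor scale that already lives on the
# compute device, avoiding a per-forward cast/transfer of the scale.
def fp8_scaled_linear(x: torch.Tensor, weight_lowp: torch.Tensor,
                      scale_weight: torch.Tensor) -> torch.Tensor:
    # upcast to the activation dtype, apply the stored scale, then matmul
    w = weight_lowp.to(x.dtype) * scale_weight
    return torch.nn.functional.linear(x, w)
```

The test below stands in float32 for the fp8 weight, since the device/dtype bookkeeping is the point rather than the storage format.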
kijai
09710f9ca0
Basic MTV Crafter support
...
https://github.com/DINGYANB/MTVCrafter
2025-08-18 22:42:11 +03:00
kijai
7fc5ec305d
Refactor model loading
2025-08-16 16:03:15 +03:00
kijai
7c93aea182
Rename fp8 optimization in code for clarity, as it does more than that with LoRAs
2025-08-11 14:32:57 +03:00