mirror of https://github.com/kijai/ComfyUI-WanVideoWrapper.git synced 2026-01-28 12:20:55 +03:00

1257 Commits

Author SHA1 Message Date
kijai
7a0da7708e Merge branch 'main' into vap 2025-11-04 23:15:20 +02:00
kijai
a51a53d5b7 Use proper self_attn output layers, and cleanup 2025-11-04 21:13:29 +02:00
kijai
475f96aede Fix accidental positional arg 2025-11-04 20:53:46 +02:00
kijai
977f4a5c3a Update model.py 2025-11-04 11:31:08 +02:00
kijai
2a45675498 Merge branch 'main' into vap 2025-11-04 10:39:35 +02:00
kijai
1d0516a2a9 Avoid graph break for LongCat 2025-11-04 10:26:37 +02:00
kijai
8002d8a2f9 This is still needed for some reason too 2025-11-04 09:58:40 +02:00
kijai
9a588a42ec Fix some precision issues with unmerged lora 2025-11-04 09:47:38 +02:00
kijai
509d6922f5 Update custom_linear.py 2025-11-04 09:44:07 +02:00
kijai
9fa4140159 Make LoRA torch.compile optional for unmerged LoRA application
This has caused issues, especially with LoRAs that have dynamic rank, so it is now disabled by default. To still allow a full compiled graph with unmerged LoRAs, an option to enable compile is available in the Torch Compile Settings node.
2025-11-04 01:34:51 +02:00
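As a rough illustration of the unmerged approach, a minimal sketch of applying a LoRA diff at forward time with compilation as an opt-in; all names here (LoRALinear, compile_loras) are hypothetical, not the wrapper's actual custom_linear.py code:

```python
# Minimal sketch: apply a LoRA diff at forward time without merging it
# into the base weight, with torch.compile as an opt-in. Hypothetical
# names; not the wrapper's actual custom_linear.py implementation.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, down: torch.Tensor, up: torch.Tensor, scale: float = 1.0):
        super().__init__()
        self.base = base
        self.down = down    # (rank, in_features); rank can differ per LoRA
        self.up = up        # (out_features, rank)
        self.scale = scale

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base projection plus the low-rank update, kept unmerged so the
        # original weights (e.g. quantized tensors) stay untouched.
        return self.base(x) + self.scale * (x @ self.down.t() @ self.up.t())

def maybe_compile(module: nn.Module, compile_loras: bool = False) -> nn.Module:
    # Dynamic-rank LoRAs change tensor shapes between models, which can
    # force recompiles or graph breaks, so compiling this path is off by
    # default and only enabled via the (hypothetical) compile_loras flag.
    return torch.compile(module) if compile_loras else module
```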
kijai
8ce6916d72 Fix for some cases of using comfy_chunked rope 2025-11-03 10:36:55 +02:00
kijai
0d0d28569a Fix cases where the text encoder isn't used (e.g. Minimax remover) 2025-11-03 10:29:10 +02:00
kijai
75109fdb79 Fix custom sigmas with euler 2025-11-03 10:12:05 +02:00
kijai
5eae7087fa Fix for S2V 2025-11-02 01:24:50 +02:00
kijai
ea414c54ac Create wanvideo_I2V_video-as-prompt_testing_WIP.json 2025-11-01 16:53:15 +02:00
kijai
0013ae0ece Init VAP 2025-11-01 16:47:37 +02:00
kijai
393fe78ec2 Update model.py 2025-10-31 23:39:55 +02:00
kijai
0e904e6035 Remove unnecessary casts 2025-10-31 17:28:12 +02:00
kijai
5f4020b12d Fix a possible issue with Ovi audio model loading 2025-10-31 17:24:24 +02:00
kijai
5da8a6b169 Fix MultiTalk on some models 2025-10-31 16:50:00 +02:00
kijai
ce6e7b501d Fix unmerged LoRA application for certain LoRAs 2025-10-30 23:28:38 +02:00
kijai
366f740d28 Update readme.md 2025-10-30 23:10:06 +02:00
Jukka Seppänen
d45fe1ee22 Add note about blocking new accounts from posting issues
Added a note regarding issue posting restrictions due to bot activity.
2025-10-30 18:19:45 +02:00
kijai
da24890d53 Update pyproject.toml 2025-10-30 17:56:20 +02:00
kijai
95391f403d Update nodes_sampler.py 2025-10-30 17:55:55 +02:00
kijai
9e0b3afe4e version checkpoint 2025-10-30 17:53:29 +02:00
kijai
ba1beba982 Create LongCat_TI2V_example_01.json 2025-10-30 17:51:52 +02:00
kijai
64c195167b Update nodes.py 2025-10-30 17:33:42 +02:00
kijai
cc9bf1e4f5 Store lora diffs in buffers for GGUF as well 2025-10-30 16:44:03 +02:00
kijai
a64f115d35 Fix to previous 2025-10-29 11:06:38 +02:00
kijai
e45f6f2fc4 Allow the WanVideoScheduler node to work with the looping samplers
This was broken for MultiTalk/WanAnimate/S2V.
2025-10-29 10:41:33 +02:00
kijai
1cd8df5c00 Update custom_linear.py 2025-10-29 02:50:11 +02:00
kijai
d2614a9a49 Merge branch 'main' into longcat 2025-10-29 02:33:37 +02:00
kijai
083a8458c4 Register LoRA diffs as buffers to allow them to work with block swap
Unmerged LoRAs (non-GGUF for now) will now be moved with block swap instead of always being loaded from CPU, which reduces device transfers and allows torch.compile full graph.
2025-10-29 02:33:26 +02:00
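A minimal sketch of the buffer trick, assuming a block-swap scheme that moves whole blocks with `Module.to()`; names are hypothetical, not the wrapper's real bookkeeping:

```python
# Sketch: registering LoRA diffs as buffers so they travel with the
# module when block swap calls .to(device). Hypothetical names.
import torch
import torch.nn as nn

class SwappableLinear(nn.Module):
    def __init__(self, base: nn.Linear):
        super().__init__()
        self.base = base
        # Buffers (unlike plain Python attributes) are moved by
        # Module.to(), so a block swapped to GPU brings its LoRA diff
        # along instead of re-copying it from CPU every step.
        self.register_buffer("lora_down", torch.zeros(0), persistent=False)
        self.register_buffer("lora_up", torch.zeros(0), persistent=False)

    def set_lora(self, down: torch.Tensor, up: torch.Tensor) -> None:
        self.lora_down = down
        self.lora_up = up

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.base(x)
        if self.lora_down.numel() > 0:
            y = y + x @ self.lora_down.t() @ self.lora_up.t()
        return y

# Block swap then only has to move the module:
#   block.to("cuda")   # base weights and LoRA buffers move together
```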
kijai
1c2f17e8d7 Add utility node to split sampler from settings
For cleaner previews
2025-10-29 02:24:24 +02:00
kijai
9d45b9f0de Use comfy core Conv3D workaround for VAE rather than the fp32 cast 2025-10-29 02:23:49 +02:00
Jukka Seppänen
833c6f50c7 Merge pull request #1581 from chengzeyi/fix-ref-conv-dtype-mismatch
Fix dtype mismatch in ref_conv forward pass
2025-10-28 14:27:43 +02:00
chengzeyi
d15cf3001f Fix dtype mismatch in ref_conv forward pass
This commit fixes a RuntimeError that occurs when using Fun-Control
reference images: "Input type (float) and bias type (c10::Half)
should be the same"

Root cause:
- Commit 1ba1a16 changed the dtype handling strategy to convert
  the main latent `x` to `base_dtype` instead of converting
  embeddings to match `x.dtype`
- This caused `fun_ref` input to be in a different dtype than
  the `ref_conv` layer's weights and bias
- Line 2324 already handles this correctly for `attn_cond` by
  converting to `self.attn_conv_in.weight.dtype`

Solution:
- Convert `fun_ref` to match `self.ref_conv.weight.dtype` before
  passing through the convolution layer
- This follows the same pattern used for `attn_cond` on line 2324

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-28 12:08:23 +00:00
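Per the description above, the fix amounts to a one-line cast before the convolution; a self-contained sketch (the function name is hypothetical, and the surrounding model code is elided):

```python
# Sketch of the fix described above: cast fun_ref to the ref_conv
# weights' dtype before the convolution, mirroring the existing
# attn_cond handling.
import torch
import torch.nn as nn

def apply_ref_conv(ref_conv: nn.Module, fun_ref: torch.Tensor) -> torch.Tensor:
    # Before the fix, fun_ref could arrive as float32 while ref_conv
    # held fp16 weights, raising: "Input type (float) and bias type
    # (c10::Half) should be the same".
    fun_ref = fun_ref.to(ref_conv.weight.dtype)
    return ref_conv(fun_ref)
```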
kijai
eebbcd5ee0 Update model.py 2025-10-28 02:04:50 +02:00
kijai
2633119505 Update custom_linear.py 2025-10-28 01:55:25 +02:00
kijai
90908df260 Update model.py 2025-10-28 01:54:42 +02:00
kijai
c80a488f70 Use fp32 norms for other models too and other fixes 2025-10-28 01:52:48 +02:00
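The fp32-norm idea referenced here is a common mixed-precision pattern; a generic sketch, not necessarily the wrapper's exact code:

```python
# Generic fp32-norm pattern: LayerNorm's mean/variance reductions are
# precision-sensitive, so compute them in float32 and hand the result
# back in the caller's dtype. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FP32LayerNorm(nn.LayerNorm):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = F.layer_norm(
            x.float(),
            self.normalized_shape,
            self.weight.float() if self.weight is not None else None,
            self.bias.float() if self.bias is not None else None,
            self.eps,
        )
        return out.to(x.dtype)  # return in the model's working dtype
```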
kijai
e69e068b57 Update gguf.py 2025-10-27 21:02:59 +02:00
kijai
54c45500b0 Apply lora diffs with unmerged loras too 2025-10-27 21:01:45 +02:00
kijai
e560366600 Update model.py 2025-10-27 18:55:30 +02:00
kijai
f880b321c6 Allow compile in lora application 2025-10-27 01:34:32 +02:00
kijai
51fcbd6b3d Revert "Allow compile here"
This reverts commit f583b56878.
2025-10-27 01:07:03 +02:00
kijai
f583b56878 Allow compile here 2025-10-27 01:06:39 +02:00
kijai
c59e52ca44 Precision adjustments 2025-10-27 00:23:32 +02:00
kijai
a0bdf20817 Some cleanup and allow full block swap 2025-10-26 23:05:17 +02:00