mirror of https://github.com/comfyanonymous/ComfyUI.git synced 2026-01-28 11:40:54 +03:00

Commit Graph

  • 2e21122aab Add a node to set the model compute dtype for debugging. comfyanonymous 2025-02-15 04:15:37 -05:00
  • 1cd6cd6080 Disable pytorch attention in VAE for AMD. comfyanonymous 2025-02-14 05:42:14 -05:00
  • d7b4bf21a2 Auto enable mem efficient attention on gfx1100 on pytorch nightly 2.7 comfyanonymous 2025-02-14 04:17:56 -05:00
  • 042a905c37 Open yaml files with utf-8 encoding for extra_model_paths.yaml (#6807) Robin Huang 2025-02-13 17:39:04 -08:00
  • 019c7029ea Add a way to set a different compute dtype for the model at runtime. comfyanonymous 2025-02-13 20:34:03 -05:00
  • 8773ccf74d Better memory estimation for ROCm that support mem efficient attention. comfyanonymous 2025-02-13 08:32:36 -05:00
  • 1d5d6586f3 Fix ruff. comfyanonymous 2025-02-12 06:49:16 -05:00
  • 35740259de mix_ascend_bf16_infer_err (#6794) zhoufan2956 2025-02-12 19:48:11 +08:00
  • ab888e1e0b Add add_weight_wrapper function to model patcher. comfyanonymous 2025-02-12 05:49:00 -05:00
  • d2504fb701 Merge branch 'master' into worksplit-multigpu Jedrzej Kosinski 2025-02-11 22:34:51 -06:00
  • d9f0fcdb0c Cleanup. comfyanonymous 2025-02-11 17:17:03 -05:00
  • b124256817 Fix for running via DirectML (#6542) HishamC 2025-02-11 14:11:32 -08:00
  • af4b7c91be Make --force-fp16 actually force the diffusion model to be fp16. comfyanonymous 2025-02-11 08:31:46 -05:00
  • e57d2282d1 Fix incorrect Content-Type for WebP images (#6752) bananasss00 2025-02-11 12:48:35 +03:00
  • 4027466c80 Make lumina model work with any latent resolution. comfyanonymous 2025-02-10 00:24:20 -05:00
  • 095d867147 Remove useless function. model-paths-helper comfyanonymous 2025-02-09 07:01:38 -05:00
  • caeb27c3a5 res_multistep: Fix cfgpp and add ancestral samplers (#6731) Pam 2025-02-09 05:39:58 +05:00
  • 3d06e1c555 Make error more clear to user. comfyanonymous 2025-02-08 18:57:24 -05:00
  • 43a74c0de1 Allow FP16 accumulation with --fast (#6453) catboxanon 2025-02-08 17:00:56 -05:00
  • af93c8d1ee Document which text encoder to use for lumina 2. comfyanonymous 2025-02-08 06:54:03 -05:00
  • 832e3f5ca3 Fix another small bug in attention_bias redux (#6737) Raphael Walker 2025-02-07 20:44:43 +01:00
  • b03763bca6 Merge branch 'multigpu_support' into worksplit-multigpu Jedrzej Kosinski 2025-02-07 13:27:49 -06:00
  • 079eccc92a Don't compress http response by default. comfyanonymous 2025-02-07 03:29:12 -05:00
  • b6951768c4 fix a bug in the attn_masked redux code when using weight=1.0 (#6721) Raphael Walker 2025-02-06 22:51:16 +01:00
  • fca304debf Update frontend to v1.8.14 (#6724) Comfy Org PR Bot 2025-02-07 00:43:10 +09:00
  • 476aa79b64 Let --cuda-device take in a string to allow multiple devices (or device order) to be chosen, print available devices on startup, potentially support MultiGPU Intel and Ascend setups Jedrzej Kosinski 2025-02-06 08:44:07 -06:00
  • 441cfd1a7a Merge branch 'master' into multigpu_support Jedrzej Kosinski 2025-02-06 08:10:48 -06:00
  • 14880e6dba Remove some useless code. comfyanonymous 2025-02-06 05:00:19 -05:00
  • f1059b0b82 Remove unused GET /files API endpoint (#6714) Chenlei Hu 2025-02-05 18:48:36 -05:00
  • debabccb84 Bump ComfyUI version to v0.3.14 v0.3.14 comfyanonymous 2025-02-05 15:47:46 -05:00
  • 37cd448529 Set the shift for Lumina back to 6. comfyanonymous 2025-02-05 14:49:52 -05:00
  • 94f21f9301 Upcasting rope to fp32 seems to make no difference in this model. comfyanonymous 2025-02-05 04:32:47 -05:00
  • 60653004e5 Use regular numbers for rope in lumina model. comfyanonymous 2025-02-05 04:16:59 -05:00
  • a57d635c5f Fix lumina 2 batches. comfyanonymous 2025-02-04 21:48:11 -05:00
  • 016b219dcc Add Lumina Image 2.0 to Readme. comfyanonymous 2025-02-04 08:08:36 -05:00
  • 8ac2dddeed Lower the default shift of lumina to reduce artifacts. comfyanonymous 2025-02-04 06:50:37 -05:00
  • 3e880ac709 Fix on python 3.9 comfyanonymous 2025-02-04 04:20:56 -05:00
  • e5ea112a90 Support Lumina 2 model. comfyanonymous 2025-02-04 03:56:00 -05:00
  • 8d88bfaff9 allow searching for new .pt2 extension, which can contain AOTI compiled modules (#6689) Raphael Walker 2025-02-03 23:07:35 +01:00
  • ed4d92b721 Model merging nodes for cosmos. comfyanonymous 2025-02-03 03:31:39 -05:00
  • 932ae8d9ca Update frontend to v1.8.13 (#6682) Comfy Org PR Bot 2025-02-03 07:54:44 +09:00
  • 44e19a28d3 Use maximum negative value instead of -inf for masks in text encoders. comfyanonymous 2025-02-02 09:45:07 -05:00
  • 0a0df5f136 better guide message for sageattention (#6634) Dr.Lt.Data 2025-02-02 23:26:47 +09:00
  • 24d6871e47 add disable-compres-response-body cli args; add compress middleware; (#6672) KarryCharon 2025-02-02 22:24:55 +08:00
  • 99a5c1068a Merge branch 'master' into multigpu_support Jedrzej Kosinski 2025-02-02 03:19:18 -06:00
  • 9e1d301129 Only use stable cascade lora format with cascade model. comfyanonymous 2025-02-01 06:35:22 -05:00
  • 768e035868 Add node for preview 3d animation (#6594) Terry Jia 2025-01-31 13:09:07 -05:00
  • 669e0497ea Update frontend to v1.8.12 (#6662) Comfy Org PR Bot 2025-02-01 03:07:37 +09:00
  • 541dc08547 Update Readme. comfyanonymous 2025-01-31 08:35:48 -05:00
  • b6b475191d Add sqlite db pythongosssss 2025-01-30 21:48:53 +00:00
  • 8d8dc9a262 Allow batch of different sigmas when noise scaling. comfyanonymous 2025-01-30 06:49:52 -05:00
  • 2f98c24360 Update Readme with link to instruction for Nvidia 50 series. comfyanonymous 2025-01-30 02:12:43 -05:00
  • ef85058e97 Bump ComfyUI version to v0.3.13 v0.3.13 comfyanonymous 2025-01-29 16:07:12 -05:00
  • f9230bd357 Update the python version in some workflows. comfyanonymous 2025-01-29 15:54:13 -05:00
  • 02747cde7d Carry over change from _calc_cond_batch into _calc_cond_batch_multigpu Jedrzej Kosinski 2025-01-29 11:10:23 -06:00
  • 537c27cbf3 Bump default cuda version in standalone package to 126. comfyanonymous 2025-01-29 08:13:33 -05:00
  • 6ff2e4d550 Remove logging call added in last commit. comfyanonymous 2025-01-29 08:08:01 -05:00
  • 222f48c0f2 Allow changing folder_paths.base_path via command line argument. (#6600) filtered 2025-01-30 00:06:28 +11:00
  • 13fd4d6e45 More friendly error messages for corrupted safetensors files. comfyanonymous 2025-01-28 09:41:09 -05:00
  • 1210d094c7 Convert latents_ubyte to 8-bit unsigned int before converting to CPU (#6300) Bradley Reynolds 2025-01-28 07:22:54 -06:00
  • 0b3233b4e2 Merge remote-tracking branch 'origin/master' into multigpu_support Jedrzej Kosinski 2025-01-28 06:11:07 -06:00
  • eda866bf51 Extracted multigpu core code into multigpu.py, added load_balance_devices to get subdivision of work based on available devices and splittable work item count, added MultiGPU Options nodes to set relative_speed of specific devices; does not change behavior yet Jedrzej Kosinski 2025-01-27 06:25:48 -06:00
  • 255edf2246 Lower minimum ratio of loaded weights on Nvidia. comfyanonymous 2025-01-27 05:26:51 -05:00
  • e3298b84de Create proper MultiGPU Initialize node, create gpu_options to create scaffolding for asymmetrical GPU support Jedrzej Kosinski 2025-01-26 09:34:20 -06:00
  • c7feef9060 Cast transformer_options for multigpu Jedrzej Kosinski 2025-01-26 05:29:27 -06:00
  • 4f011b9a00 Better CLIPTextEncode error when clip input is None. comfyanonymous 2025-01-26 06:04:57 -05:00
  • 67feb05299 Remove redundant code. comfyanonymous 2025-01-25 19:04:53 -05:00
  • 6d21740346 Print ComfyUI version. comfyanonymous 2025-01-25 15:03:57 -05:00
  • 51af7fa1b4 Fix multigpu ControlBase get_models and cleanup calls to avoid multiple calls of functions on multigpu_clones versions of controlnets Jedrzej Kosinski 2025-01-25 06:05:01 -06:00
  • 46969c380a Initial MultiGPU support for controlnets Jedrzej Kosinski 2025-01-24 05:39:38 -06:00
  • 7fbf4b72fe Update nightly pytorch ROCm command in Readme. comfyanonymous 2025-01-24 06:15:38 -05:00
  • 14ca5f5a10 Remove useless code. comfyanonymous 2025-01-24 06:15:05 -05:00
  • 5db4277449 Make sure additional_models are unloaded as well when perform Jedrzej Kosinski 2025-01-23 19:06:05 -06:00
  • ce557cfb88 Remove redundant code (#6576) filtered 2025-01-23 21:57:41 +11:00
  • 96e2a45193 Remove useless code. comfyanonymous 2025-01-23 05:56:23 -05:00
  • dfa2b6d129 Remove unused function lcm in conds.py (#6572) Chenlei Hu 2025-01-23 05:54:09 -05:00
  • 02a4d0ad7d Added unload_model_and_clones to model_management.py to allow unloading only relevant models Jedrzej Kosinski 2025-01-23 01:20:00 -06:00
  • 17b70728ec Allow override of models base path via env var base-path-env-var filtered 2025-01-23 18:09:15 +11:00
  • f3566f0894 remove some params from load 3d node (#6436) Terry Jia 2025-01-22 17:23:51 -05:00
  • ca69b41cee Add utils/ to web server developer codeowner (#6570) Chenlei Hu 2025-01-22 17:16:54 -05:00
  • a058f52090 [i18n] Add /i18n endpoint to provide all custom node translations (#6558) Chenlei Hu 2025-01-22 17:15:45 -05:00
  • d6bbe8c40f Remove support for python 3.8. comfyanonymous 2025-01-22 17:04:30 -05:00
  • a7fe0a94de Refactor and fixes for video latents. comfyanonymous 2025-01-22 06:37:46 -05:00
  • e857dd48b8 Add gradient estimation sampler (#6554) chaObserv 2025-01-22 18:29:40 +08:00
  • d303cb5341 Add missing case to CLIPLoader. comfyanonymous 2025-01-21 08:57:04 -05:00
  • fb2ad645a3 Add FluxDisableGuidance node to disable using the guidance embed. comfyanonymous 2025-01-20 14:50:24 -05:00
  • ef137ac0b6 Merge branch 'multigpu_support' of https://github.com/kosinkadink/ComfyUI into multigpu_support Jedrzej Kosinski 2025-01-20 04:34:39 -06:00
  • 328d4f16a9 Make WeightHooks compatible with MultiGPU, clean up some code Jedrzej Kosinski 2025-01-20 04:34:26 -06:00
  • d8a7a32779 Cleanup old TODO. comfyanonymous 2025-01-20 03:44:13 -05:00
  • bdbcb85b8d Merge branch 'multigpu_support' of https://github.com/Kosinkadink/ComfyUI into multigpu_support Jedrzej Kosinski 2025-01-20 00:51:42 -06:00
  • 6c9e94bae7 Merge branch 'master' into multigpu_support Jedrzej Kosinski 2025-01-20 00:51:37 -06:00
  • a00e1489d2 LatentBatch fix for video latents comfyanonymous 2025-01-19 06:01:56 -05:00
  • ebf038d4fa Use torch.special.expm1 (#6388) Sergii Dymchenko 2025-01-19 01:54:32 -08:00
  • b4de04a1c1 Update frontend to v1.7.14 (#6522) Comfy Org PR Bot 2025-01-19 11:43:37 +09:00
  • b1a02131c9 Remove comfy.samplers self-import (#6506) catboxanon 2025-01-18 17:49:51 -05:00
  • 3a3910f91d PromptServer: Return 400 for empty filename param (#6504) catboxanon 2025-01-18 17:47:33 -05:00
  • 507199d9a8 Uni pc sampler now works with audio and video models. comfyanonymous 2025-01-18 05:27:58 -05:00
  • 2f3ab40b62 Add warning when using old pytorch versions. comfyanonymous 2025-01-17 18:47:27 -05:00
  • bfce723311 Initial work on multigpu_clone function, which will account for additional_models getting cloned Jedrzej Kosinski 2025-01-17 03:31:28 -06:00
  • 7fc3ccdcc2 Add that nvidia cosmos is supported to the README. comfyanonymous 2025-01-16 21:17:18 -05:00
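
One of the entries above (e9f038d4fa, "Use torch.special.expm1") presumably swaps a hand-written `e**x - 1` for the dedicated `expm1` function. The motivation is numerical: for small `x`, `exp(x)` rounds to a double very close to `1.0`, and the subsequent subtraction cancels away most of the significant digits. A minimal sketch of the effect using Python's stdlib `math` equivalent (`torch.special.expm1` behaves the same way element-wise):

```python
import math

x = 1e-10

# Naive form: exp(x) is rounded to the nearest double near 1.0,
# and subtracting 1.0 then amplifies that rounding error.
naive = math.exp(x) - 1.0

# expm1 computes e**x - 1 directly, staying accurate for tiny x.
accurate = math.expm1(x)

print(naive)     # carries visible error from catastrophic cancellation
print(accurate)  # close to x + x**2/2, the true series value
```

For `x = 1e-10` the naive result deviates from `x` by around `1e-17`, while `expm1` matches the Taylor expansion to full double precision.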