mirror of https://github.com/vladmandic/sdnext.git synced 2026-01-29 05:02:09 +03:00

Commit Graph

  • 5e9c6bfd4e update diffusers Enes Sadık Özbek 2025-06-23 01:19:53 +00:00
  • 87e6d3f4fc SDNQ add modules_to_not_convert and don't quant _keep_in_fp32_modules layers in post mode Disty0 2025-06-20 20:41:11 +03:00
  • 4491d26a18 IPEX fix fp64 hijack Disty0 2025-06-19 22:55:28 +03:00
  • 7fc7797a1d SDNQ fix group size calc on odd shapes Disty0 2025-06-19 11:36:41 +03:00
  • ff5d7977a9 Fix Controlnet pipeline check on set atten Disty0 2025-06-18 19:35:57 +03:00
  • 2c4850cc2b Log more info with SDNQ Disty0 2025-06-18 18:03:46 +03:00
  • 86cd272b96 SDNQ fix Dora Disty0 2025-06-18 16:24:42 +03:00
  • 7025ac8c36 Merge branch 'dev' into feature/chroma-support Enes Sadık Özbek 2025-06-18 10:08:38 +03:00
  • 4b3ce06916 Initial support for Chroma Enes Sadık Özbek 2025-06-18 00:38:17 +00:00
  • e657cf790d SDNQ fix int8 matmul with qwen Disty0 2025-06-18 02:12:34 +03:00
  • 71c2714edf Cleanup Disty0 2025-06-17 20:04:15 +03:00
  • d8aaffbc27 IPEX fix DPM2++ FlowMatch Disty0 2025-06-17 19:58:20 +03:00
  • 26800a1ef9 Cleanup sdnq Disty0 2025-06-17 02:05:13 +03:00
  • 72eb013294 Merge pull request #3989 from vladmandic/dev Disty0 2025-06-16 22:42:26 +03:00
  • 37e0b57a01 Merge pull request #3988 from vladmandic/master Disty0 2025-06-16 22:41:47 +03:00
  • bddd091300 Custom VAE loading support for Lumina 2 Disty0 2025-06-16 22:34:02 +03:00
  • 319af31d25 Custom UNet loading support for Lumina 2 Disty0 2025-06-16 13:28:30 +03:00
  • 23a9c92cd2 Merge pull request #3986 from vladmandic/dev Disty0 2025-06-16 02:48:20 +03:00
  • e9604ca72e Merge pull request #3985 from vladmandic/master Disty0 2025-06-16 02:47:51 +03:00
  • c4f98a65a4 Update wiki Disty0 2025-06-16 02:46:48 +03:00
  • 892dd456f7 Fix Nunchaku Disty0 2025-06-16 02:43:49 +03:00
  • c74104bb57 Merge pull request #3984 from vladmandic/dev Disty0 2025-06-15 22:52:52 +03:00
  • 9c191e6796 Merge pull request #3983 from vladmandic/master Disty0 2025-06-15 22:52:23 +03:00
  • a00952bfae update wiki Disty0 2025-06-15 22:51:49 +03:00
  • e83cdc5811 Update changelog Disty0 2025-06-15 22:50:49 +03:00
  • 4fa48e4084 Use TE hijack with Lumina 2 Disty0 2025-06-15 22:19:56 +03:00
  • a7811da267 Teacache support for Lumina 2 Disty0 2025-06-15 22:07:58 +03:00
  • bc68093d32 Merge pull request #3982 from vladmandic/dev Disty0 2025-06-15 16:35:35 +03:00
  • c5b233b4ad Merge pull request #3981 from vladmandic/master Disty0 2025-06-15 16:33:51 +03:00
  • c307906813 Update CHANGELOG.md Disty0 2025-06-15 12:40:24 +03:00
  • 223a01dc71 Update changelog Disty0 2025-06-15 03:22:54 +03:00
  • d31df8c1eb SDNQ fuse bias into dequantizer with matmul Disty0 2025-06-14 22:10:10 +03:00
  • 25fc0094a9 SDNQ use quantize_device and return_device args and fix decompress_fp32 always being on Disty0 2025-06-14 21:29:08 +03:00
  • 24194201cf Fix OmniGen Disty0 2025-06-14 19:55:43 +03:00
  • fd583523f7 Update requirements Disty0 2025-06-14 11:47:34 +03:00
  • c01802d9ff SDNQ fix transformers llm Disty0 2025-06-14 01:13:51 +03:00
  • 8f8e5ce1b0 Cleanup x2 Disty0 2025-06-14 01:08:25 +03:00
  • 2ba64abcde Cleanup Disty0 2025-06-14 00:54:18 +03:00
  • fb72c6f540 Zluda use exact torch version Disty0 2025-06-13 21:32:06 +03:00
  • 45827a923f IPEX fix torch.cuda.set_device Disty0 2025-06-13 16:20:21 +03:00
  • 90e76b2023 Cleanup Disty0 2025-06-13 13:42:13 +03:00
  • fb7280c3f4 Flux quanto fix logged dtype Disty0 2025-06-13 13:40:44 +03:00
  • 1fca565178 Cleanup Disty0 2025-06-13 13:37:12 +03:00
  • e68f9272e8 Disable custom atten processors for non SD 1.5 / SDXL models Disty0 2025-06-13 13:05:46 +03:00
  • cb4684cbeb SDNQ add separate quant mode option for Text Encoders Disty0 2025-06-13 12:42:57 +03:00
  • c8f947827b IPEX fix Lumina2 Disty0 2025-06-12 19:46:05 +03:00
  • 41f14df8f5 Fix TAESD and double downloading with Lumina2 Disty0 2025-06-12 14:17:36 +03:00
  • 5e013fb154 SDNQ optimize input quantization and use the word quantize instead of compress Disty0 2025-06-12 12:06:57 +03:00
  • 2d05396b4e SDNQ simplify sym scale formula Disty0 2025-06-12 02:26:04 +03:00
  • 26545b6483 Add warning for incompatible attention processors Disty0 2025-06-11 21:59:59 +03:00
  • dd84fb541f Always set sdpa params Disty0 2025-06-11 21:43:48 +03:00
  • 5cefa64a60 SDNQ update accepted dtypes Disty0 2025-06-11 20:58:54 +03:00
  • 74b6edf2df revert gfx1101 Disty0 2025-06-11 19:25:05 +03:00
  • 6aa5c08fb0 Cleanup and update changelog Disty0 2025-06-11 16:03:33 +03:00
  • 71be3c7d45 ROCm don't override gfx with gfx1100 and gfx1101 + rocm 6.4 Disty0 2025-06-11 15:47:25 +03:00
  • fd0c5b0e3e Update changelog Disty0 2025-06-11 15:12:22 +03:00
  • df6b13ea47 Don't set gfx override with RX 9000 and above Disty0 2025-06-11 15:09:03 +03:00
  • 64f49fb40f ROCm log HSA_OVERRIDE_GFX_VERSION skip Disty0 2025-06-11 14:58:12 +03:00
  • c81b712ddb Make VAE options not require model reload Disty0 2025-06-10 15:56:19 +03:00
  • 78f99abec8 SDNQ use group_size / 2 for convs Disty0 2025-06-10 15:29:24 +03:00
  • 4436a583aa Cleanup Disty0 2025-06-10 14:12:49 +03:00
  • 7ccd94ed4f Force upgrade pip when installing Torch Disty0 2025-06-10 13:59:32 +03:00
  • a6b58efe45 ROCm 6.4 support with --use-nightly Disty0 2025-06-10 13:42:43 +03:00
  • d2ffee1b4e ROCm don't override user set HSA_OVERRIDE_GFX_VERSION Disty0 2025-06-10 13:21:37 +03:00
  • f5b575db28 Update changelog and wiki Disty0 2025-06-10 12:39:34 +03:00
  • 33fadf946b SDNQ add 7 bit support Disty0 2025-06-10 11:33:06 +03:00
  • 5bd7a08877 don't use inplace ops in quant layer Disty0 2025-06-10 03:29:07 +03:00
  • 5eed9135e3 Split SDNQ into multiple files and linting Disty0 2025-06-10 03:18:25 +03:00
  • 58b646e7f2 SDNQ add 5-bit and 3-bit quantization support Disty0 2025-06-10 01:48:51 +03:00
  • 92dbf3941b Update changelog Disty0 2025-06-09 23:06:39 +03:00
  • bd2d9d1677 Python 3.13 support Disty0 2025-06-09 22:58:08 +03:00
  • 92d2379626 Relax Python version check with Zluda Disty0 2025-06-09 20:18:03 +03:00
  • 8e08ef0edc Fix VAE Tiling with non-default tile sizes Disty0 2025-06-07 01:25:08 +03:00
  • 3868a9184b Fix incorrectly reported lycoris load error Enes Sadık Özbek 2025-06-07 00:36:08 +03:00
  • 2f7aff5250 Fix TAESD previews with PixArt Disty0 2025-06-06 19:43:00 +03:00
  • 5624671191 Fix PixArt Sigma Small and Large Disty0 2025-06-06 19:25:26 +03:00
  • 089e437708 Don't set attention processors with models outside of SD 1.5 and SDXL Disty0 2025-06-06 18:53:57 +03:00
  • c039ba90f6 Fix Meissonic by adding multiple generator support Disty0 2025-06-06 18:02:49 +03:00
  • 7679028c1a Override CPU to use FP32 by default Disty0 2025-06-06 15:33:51 +03:00
  • 2ccc76ab91 Increase medvram mode to 12 GB and update wiki Disty0 2025-06-06 15:27:30 +03:00
  • b5d6b57500 Update PyTorch to 2.7.1 Disty0 2025-06-06 11:37:10 +03:00
  • 9a54efda9b Cleanup Disty0 2025-06-06 01:55:35 +03:00
  • 06fcc3cf85 SDNQ add quantized matmul support for Conv1d and Conv3d too Disty0 2025-06-06 00:19:54 +03:00
  • 976f0ba61f Cleanup Disty0 2025-06-05 20:59:58 +03:00
  • 6bcd335f37 Update changelog and wiki Disty0 2025-06-05 20:17:46 +03:00
  • 778ca0436b Update wiki Disty0 2025-06-05 19:26:22 +03:00
  • 413cf54cb6 Update changelog Disty0 2025-06-05 18:11:36 +03:00
  • 8c03f78197 Fix bias is None Disty0 2025-06-05 14:37:00 +03:00
  • 1a00517338 SDNQ FP8 matmul support for Conv2d Disty0 2025-06-05 14:32:26 +03:00
  • 9b55ffe449 SDNQ fix VAE x2 Disty0 2025-06-05 14:17:59 +03:00
  • ad2a4ad616 Cleanup Disty0 2025-06-05 13:38:04 +03:00
  • e25890bb1d SDNQ INT8 matmul support for Conv2d Disty0 2025-06-05 13:36:49 +03:00
  • c7fb5b1690 SDNQ fix VAE quant Disty0 2025-06-05 13:24:02 +03:00
  • 894e15d3dc Merge pull request #3961 from vladmandic/dev Disty0 2025-06-03 18:14:48 +03:00
  • 1c508fbda4 Merge pull request #3960 from vladmandic/master Disty0 2025-06-03 18:13:47 +03:00
  • 0777f67573 Update changelog Disty0 2025-06-03 18:12:12 +03:00
  • ce3a6e27fa SDNQ use channelwise only quant Disty0 2025-06-03 12:30:19 +03:00
  • 79ec23dd9d SDNQ override conv dtype Disty0 2025-06-03 02:37:29 +03:00
  • e9ff242e03 SDNQ add group size support for convs Disty0 2025-06-03 02:16:52 +03:00
  • 6f637f41cc Use tensorwise fp8 matmul with torch < 2.5 Disty0 2025-06-02 23:28:49 +03:00