Mirror of https://github.com/vladmandic/sdnext.git, synced 2026-01-27 15:02:48 +03:00

5 Commits

CalamitousFelicitousness
33ee04a9f3 feat(flux2-klein): add SDNQ pre-quantized model support and reference images
- Add a Transformers v5 tokenizer compatibility fix for SDNQ Klein models:
  downloads the missing vocab.json, merges.txt, and tokenizer_config.json
  from Z-Image-Turbo when they are absent (see the sketch after this commit)
- Detect SDNQ repos and disable the shared text encoder so the pre-quantized
  weights from the SDNQ repo are used instead of loading them from the shared
  base model
- Update reference-quant.json with correct preview images and metadata
  for Klein SDNQ models
- Update reference-distilled.json with the correct cfg_scale (1.0) for
  distilled Klein models, per the official Hugging Face documentation
- Add 6 Klein model preview images
2026-01-17 21:36:44 +00:00
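
The Transformers v5 compatibility shim described in this commit amounts to fetching a few legacy tokenizer files from a donor repo when the SDNQ Klein repo does not ship them. A minimal sketch of that idea, assuming huggingface_hub is available; the donor repo id (Tongyi-MAI/Z-Image-Turbo), the tokenizer/ subfolder, and the function name are assumptions, not taken from the actual sdnext code:

```python
# Hedged sketch: copy missing legacy tokenizer files from a donor repo.
# DONOR_REPO and the "tokenizer" subfolder are assumptions, not the real
# values used by sdnext.
import os
import shutil

from huggingface_hub import hf_hub_download

DONOR_REPO = "Tongyi-MAI/Z-Image-Turbo"  # assumed Hugging Face repo id
REQUIRED_FILES = ("vocab.json", "merges.txt", "tokenizer_config.json")

def ensure_tokenizer_files(tokenizer_dir: str) -> None:
    """Download any of the required tokenizer files that are missing locally."""
    for name in REQUIRED_FILES:
        target = os.path.join(tokenizer_dir, name)
        if os.path.exists(target):
            continue  # file already present, nothing to do
        cached = hf_hub_download(repo_id=DONOR_REPO, filename=name, subfolder="tokenizer")
        shutil.copy(cached, target)
```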
CalamitousFelicitousness
eaa8dbcd42 fix: correct comments and cleanup model descriptions
- Fix Klein text encoder comment to specify correct sizes per variant
- Gate TAESD decode logging behind the SD_PREVIEW_DEBUG env var (sketched
  after this commit)
- Fix a misleading comment about the FLUX.2 128-channel reshape (it is a
  fallback path)
- Remove VRAM requirements from model descriptions in reference files
2026-01-16 03:24:39 +00:00
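
The logging change in this commit is a plain environment-variable gate. A minimal sketch, with an illustrative logger name and message rather than the actual sdnext code:

```python
# Hedged sketch: emit TAESD decode debug output only when SD_PREVIEW_DEBUG is set.
import logging
import os

log = logging.getLogger("sd")

# empty string, "0" and "false" count as disabled
SD_PREVIEW_DEBUG = os.environ.get("SD_PREVIEW_DEBUG", "").lower() not in ("", "0", "false")

def log_taesd_decode(shape, dtype) -> None:
    if SD_PREVIEW_DEBUG:  # skip the log call entirely unless explicitly enabled
        log.debug(f"taesd decode: shape={shape} dtype={dtype}")
```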
CalamitousFelicitousness
749371364b lint 2026-01-16 03:03:47 +00:00
CalamitousFelicitousness
fe99d3fe5d feat: add FLUX.2 Klein model support
Add support for FLUX.2 Klein distilled models (4B and 9B variants):

- Add pipeline loader for Flux2KleinPipeline
- Add model detection for 'flux.2' + 'klein' patterns (see the sketch after this commit)
- Add pipeline mapping in shared_items
- Add shared Qwen3ForCausalLM text encoder handling:
  - 4B variants use Z-Image-Turbo's Qwen3-8B
  - 9B variants use FLUX.2-klein-9B's Qwen3-14B
- Add reference entries for distilled (4B, 9B) and base models
- Update diffusers commit for Flux2KleinPipeline support
2026-01-16 01:35:20 +00:00
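
The detection and shared text encoder handling in this commit can be read as a substring check plus a variant-to-donor mapping. A minimal sketch under those assumptions; the function names are hypothetical and the repo names are shortened placeholders, only the 'flux.2' + 'klein' check and the 4B/9B donor mapping come from the commit message:

```python
# Hedged sketch of FLUX.2 Klein detection and shared text encoder selection.
def is_flux2_klein(repo_id: str) -> bool:
    """Match repos whose name contains both 'flux.2' and 'klein'."""
    name = repo_id.lower()
    return "flux.2" in name and "klein" in name

# donor repos providing the shared Qwen3ForCausalLM text encoder (placeholders)
TEXT_ENCODER_SOURCE = {
    "4b": "Z-Image-Turbo",    # Qwen3-8B per the commit message
    "9b": "FLUX.2-klein-9B",  # Qwen3-14B per the commit message
}

def text_encoder_source(repo_id: str):
    """Return the donor repo for the shared text encoder, or None if not Klein."""
    if not is_flux2_klein(repo_id):
        return None
    variant = "9b" if "9b" in repo_id.lower() else "4b"
    return TEXT_ENCODER_SOURCE[variant]
```

With this sketch, text_encoder_source("FLUX.2-klein-4B-SDNQ") would return "Z-Image-Turbo", while a non-Klein repo id returns None.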
vladmandic
387b3c7c36 split reference.json
Signed-off-by: vladmandic <mandic00@live.com>
2026-01-15 08:23:07 +01:00