# TODO

Project Board

## Internal

- Feature: Move nunchaku models to reference instead of internal decision
- Update: `transformers==5.0.0`
- Feature: Unify huggingface and diffusers model folders
- Reimplement llama remover for Kanvas
- Deploy: Create executable for SD.Next
- Feature: Integrate natural language image search ImageDB
- Feature: Remote Text-Encoder support
- Refactor: move sampler options from settings to config
- Refactor: GGUF
- Feature: LoRA: add OMI format support for SD35/FLUX.1
- Refactor: remove CodeFormer
- Refactor: remove GFPGAN
- UI: Lite vs Expert mode
- Video tab: add full API support
- Control tab: add overrides handling
- Engine: TensorRT acceleration
- Engine: mmgp
- Engine: sharpfin instead of torchvision

## Modular

## Features

### New models / Pipelines

TODO: Investigate which models are diffusers-compatible and prioritize!

## Asyncio

## rmtree

- `onerror` is deprecated since Python 3.12 and replaced with `onexc`, which receives the exception instance directly instead of an `exc_info` tuple

```python
import os
import shutil
import stat

def excRemoveReadonly(func, path, exc: BaseException):
    shared.log.debug(f'Exception during cleanup: {func} {path} {type(exc).__name__}')
    if func in (os.rmdir, os.remove, os.unlink) and isinstance(exc, PermissionError):
        shared.log.debug(f'Retrying cleanup: {path}')
        os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)  # clear read-only bits
        func(path)  # retry the operation that failed

# ...
try:
    shutil.rmtree(found.path, ignore_errors=False, onexc=excRemoveReadonly)
except OSError as e:
    shared.log.error(f'Cleanup failed: {found.path} {e}')
```
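Since `onexc` only exists on Python 3.12+, the migration can be bridged with a small version-gated wrapper. A minimal self-contained sketch (the names `rmtree_compat` and `_remove_readonly` are illustrative, not part of SD.Next):

```python
import os
import shutil
import stat
import sys
import tempfile

def _remove_readonly(func, path, exc):
    """onexc-style callback: clear read-only bits and retry the failed removal."""
    if isinstance(exc, PermissionError) and func in (os.rmdir, os.remove, os.unlink):
        os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_IRWXO)
        func(path)
    else:
        raise exc

def rmtree_compat(path):
    """Remove a tree using onexc on Python 3.12+, falling back to onerror."""
    if sys.version_info >= (3, 12):
        shutil.rmtree(path, onexc=_remove_readonly)
    else:
        # onerror passes (func, path, exc_info tuple); adapt to the onexc shape
        shutil.rmtree(path, onerror=lambda f, p, e: _remove_readonly(f, p, e[1]))

# demo: create a tree containing a read-only file, then remove it
root = tempfile.mkdtemp()
target = os.path.join(root, 'locked.txt')
with open(target, 'w') as fh:
    fh.write('data')
os.chmod(target, stat.S_IRUSR)  # mark the file read-only
rmtree_compat(root)
print(os.path.exists(root))  # False
```

The read-only `chmod`-and-retry path mainly matters on Windows, where `os.unlink` raises `PermissionError` for read-only files; on POSIX systems removal is governed by directory permissions, so the callback is usually never invoked.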

## Code TODO

Generated via `npm run todo`:

- fc: autodetect distilled based on model
- fc: autodetect tensor format based on model
- hypertile: vae breaks when using non-standard sizes
- install: switch to pytorch source when it becomes available
- loader: load recipe
- loader: save recipe
- lora: add other quantization types
- lora: add t5 key support for sd35/f1
- lora: maybe force immediate quantization
- model load: force-reloading entire model as loading transformers only leads to massive memory usage
- model load: implement model in-memory caching
- modernui: monkey-patch for missing tabs.select event
- modules/lora/lora_extract.py:188:9: W0511: TODO: lora: support pre-quantized flux
- modules/modular_guiders.py:65:58: W0511: TODO: guiders
- processing: remove duplicate mask params
- resize image: enable full VAE mode for resize-latent