# PEFT
Diffusers supports loading adapters, such as LoRA, with the PEFT library through the [`~loaders.peft.PeftAdapterMixin`] class. This mixin allows modeling classes in Diffusers, such as [`UNet2DConditionModel`] and [`SD3Transformer2DModel`], to operate with an adapter.
Refer to the Inference with PEFT tutorial for an overview of how to use PEFT in Diffusers for inference.
## PeftAdapterMixin
[[autodoc]] loaders.peft.PeftAdapterMixin