From 6e68c71503682c8693cb5b06a4da4911dfd655ee Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?apolin=C3=A1rio?=
Date: Wed, 8 Nov 2023 18:26:53 +0100
Subject: [PATCH] Add adapter fusing + PEFT to the docs (#5662)

* Add adapter fusing + PEFT to the docs

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Sayak Paul

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Sayak Paul

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/tutorials/using_peft_for_inference.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/en/tutorials/using_peft_for_inference.md

* Update docs/source/en/tutorials/using_peft_for_inference.md

---------

Co-authored-by: Sayak Paul
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---
 .../en/tutorials/using_peft_for_inference.md | 28 +++++++++++++++++++++++-----
 1 file changed, 23 insertions(+), 5 deletions(-)

diff --git a/docs/source/en/tutorials/using_peft_for_inference.md b/docs/source/en/tutorials/using_peft_for_inference.md
index 2e3337519c..da69b712a9 100644
--- a/docs/source/en/tutorials/using_peft_for_inference.md
+++ b/docs/source/en/tutorials/using_peft_for_inference.md
@@ -12,9 +12,9 @@ specific language governing permissions and limitations under the License.
 
 [[open-in-colab]]
 
-# Inference with PEFT
+# Load LoRAs for inference
 
-There are many adapters trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images. With the 🤗 [PEFT](https://huggingface.co/docs/peft/index) integration in 🤗 Diffusers, it is really easy to load and manage adapters for inference. In this guide, you'll learn how to use different adapters with [Stable Diffusion XL (SDXL)](../api/pipelines/stable_diffusion/stable_diffusion_xl) for inference.
+There are many adapters (with LoRAs being the most common type) trained in different styles to achieve different effects. You can even combine multiple adapters to create new and unique images. With the 🤗 [PEFT](https://huggingface.co/docs/peft/index) integration in 🤗 Diffusers, it is really easy to load and manage adapters for inference. In this guide, you'll learn how to use different adapters with [Stable Diffusion XL (SDXL)](../api/pipelines/stable_diffusion/stable_diffusion_xl) for inference. Throughout this guide, you'll use LoRA as the main adapter technique, so we'll use the terms LoRA and adapter interchangeably. You should have some familiarity with LoRA, and if you don't, we welcome you to check out the [LoRA guide](https://huggingface.co/docs/peft/conceptual_guides/lora).
@@ -22,9 +22,8 @@ Let's first install all the required libraries.
 
 ```bash
 !pip install -q transformers accelerate
-# Will be updated once the stable releases are done.
-!pip install -q git+https://github.com/huggingface/peft.git
-!pip install -q git+https://github.com/huggingface/diffusers.git
+!pip install peft
+!pip install diffusers
 ```
 
 Now, let's load a pipeline with a SDXL checkpoint:
@@ -165,3 +164,22 @@ list_adapters_component_wise = pipe.get_list_adapters()
 list_adapters_component_wise
 {"text_encoder": ["toy", "pixel"], "unet": ["toy", "pixel"], "text_encoder_2": ["toy", "pixel"]}
 ```
+
+## Fusing adapters into the model
+
+You can use PEFT to easily fuse multiple adapters directly into the model weights (both the UNet and the text encoders) using the [`~diffusers.loaders.LoraLoaderMixin.fuse_lora`] method, which can speed up inference and lower VRAM usage, and unfuse them again with [`~diffusers.loaders.LoraLoaderMixin.unfuse_lora`].
+
+```py
+pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors", adapter_name="pixel")
+pipe.load_lora_weights("CiroN2022/toy-face", weight_name="toy_face_sdxl.safetensors", adapter_name="toy")
+
+pipe.set_adapters(["pixel", "toy"], adapter_weights=[0.5, 1.0])
+# Fuses the LoRAs into the UNet and text encoders
+pipe.fuse_lora()
+
+prompt = "toy_face of a hacker with a hoodie, pixel art"
+image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
+
+# Restores the UNet and text encoders to their original state
+pipe.unfuse_lora()
+```
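+
+The example above fuses the LoRAs at the strengths set with `set_adapters`. As a minimal sketch of finer control, recent Diffusers releases also accept a `lora_scale` argument to [`~diffusers.loaders.LoraLoaderMixin.fuse_lora`] that scales the overall LoRA contribution at fusing time; the exact signature may vary between versions, so check the API reference for your installed release:
+
+```py
+# A minimal sketch, assuming your installed Diffusers version supports
+# the `lora_scale` argument of `fuse_lora`.
+pipe.fuse_lora(lora_scale=0.7)  # scales the LoRA contribution before fusing
+
+prompt = "toy_face of a hacker with a hoodie, pixel art"
+image = pipe(prompt, num_inference_steps=30, generator=torch.manual_seed(0)).images[0]
+
+# Always unfuse before fusing again with a different scale
+pipe.unfuse_lora()
+```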