From e7696e20f9a932bad2d462d4f7cc9cf0b5000c02 Mon Sep 17 00:00:00 2001
From: Alex Umnov
Date: Tue, 13 Feb 2024 05:05:20 +0100
Subject: [PATCH] Updated lora inference instructions (#6913)

* Updated lora inference instructions

* Update examples/dreambooth/README.md

Co-authored-by: Sayak Paul

* Update README.md

* Update README.md

---------

Co-authored-by: Sayak Paul
---
 examples/dreambooth/README.md | 12 ++++--------
 1 file changed, 4 insertions(+), 8 deletions(-)

diff --git a/examples/dreambooth/README.md b/examples/dreambooth/README.md
index 972fe6e8cf..eb025eefc3 100644
--- a/examples/dreambooth/README.md
+++ b/examples/dreambooth/README.md
@@ -376,18 +376,14 @@ After training, LoRA weights can be loaded very easily into the original pipelin
 load the original pipeline:
 
 ```python
-from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
-import torch
-
-pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
-pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
-pipe.to("cuda")
+from diffusers import DiffusionPipeline
+pipe = DiffusionPipeline.from_pretrained("base-model-name").to("cuda")
 ```
 
-Next, we can load the adapter layers into the UNet with the [`load_attn_procs` function](https://huggingface.co/docs/diffusers/api/loaders#diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs).
+Next, we can load the adapter layers into the pipeline with the [`load_lora_weights` function](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters#lora).
 
 ```python
-pipe.unet.load_attn_procs("patrickvonplaten/lora_dreambooth_dog_example")
+pipe.load_lora_weights("path-to-the-lora-checkpoint")
 ```
 
 Finally, we can run the model in inference.