From dff35a86e4645b339f282e5d3ec95fd332d5fcdb Mon Sep 17 00:00:00 2001
From: JuanCarlosPi
Date: Tue, 16 Jan 2024 11:18:13 -0500
Subject: [PATCH] Change in ip-adapter docs. CLIPVisionModelWithProjection
 should be im… (#6597)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Change in ip-adapter docs. CLIPVisionModelWithProjection should be
imported from transformers, not diffusers
---
 docs/source/en/using-diffusers/loading_adapters.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/source/en/using-diffusers/loading_adapters.md b/docs/source/en/using-diffusers/loading_adapters.md
index d9d4a675dd..0ef90c6dd9 100644
--- a/docs/source/en/using-diffusers/loading_adapters.md
+++ b/docs/source/en/using-diffusers/loading_adapters.md
@@ -344,7 +344,8 @@ pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-a
 IP-Adapter relies on an image encoder to generate the image features. If your IP-Adapter weights folder contains an "image_encoder" subfolder, the image encoder will be automatically loaded and registered to the pipeline. Otherwise, you can also load a [`~transformers.CLIPVisionModelWithProjection`] model and pass it to a Stable Diffusion pipeline when you create it.
 
 ```py
-from diffusers import AutoPipelineForText2Image, CLIPVisionModelWithProjection
+from diffusers import AutoPipelineForText2Image
+from transformers import CLIPVisionModelWithProjection
 import torch
 
 image_encoder = CLIPVisionModelWithProjection.from_pretrained(
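For context, the diff above ends mid-call, so here is a minimal, self-contained sketch of the corrected usage the patch documents: loading the image encoder with `CLIPVisionModelWithProjection` from transformers (not diffusers) and passing it to a Stable Diffusion pipeline at creation time. The repo id and subfolder layout ("h94/IP-Adapter" with a "models/image_encoder" subfolder), the SD v1.5 base checkpoint, and the IP-Adapter weight name are assumptions taken from the surrounding docs context, not from this patch.

```py
import torch
from diffusers import AutoPipelineForText2Image
from transformers import CLIPVisionModelWithProjection  # lives in transformers, not diffusers

# Load the CLIP image encoder explicitly. The repo id and subfolder
# ("h94/IP-Adapter", "models/image_encoder") are assumed from the docs context.
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16,
)

# Pass the encoder to the pipeline when creating it, so IP-Adapter can use it
# even when the weights folder has no "image_encoder" subfolder of its own.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    image_encoder=image_encoder,
    torch_dtype=torch.float16,
).to("cuda")

# Then load the IP-Adapter weights as in the docs (weight name assumed here).
pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
```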