From dc8da1d44922c2e2a6731c70a4fa7789cd983bb2 Mon Sep 17 00:00:00 2001
From: Mayank Khanduja
Date: Fri, 25 Aug 2023 23:34:50 +0530
Subject: [PATCH] Fixed broken link of CLIP doc in evaluation doc (#4760)

---
 docs/source/en/conceptual/evaluation.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/conceptual/evaluation.md b/docs/source/en/conceptual/evaluation.md
index 6e5c14acad..d714c5d975 100644
--- a/docs/source/en/conceptual/evaluation.md
+++ b/docs/source/en/conceptual/evaluation.md
@@ -334,7 +334,7 @@ image_processor = CLIPImageProcessor.from_pretrained(clip_id)
 image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device)
 ```
 
-Notice that we are using a particular CLIP checkpoint, i.e., `openai/clip-vit-large-patch14`. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix#diffusers.StableDiffusionInstructPix2PixPipeline.text_encoder).
+Notice that we are using a particular CLIP checkpoint, i.e., `openai/clip-vit-large-patch14`. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the [documentation](https://huggingface.co/docs/transformers/model_doc/clip).
 
 Next, we prepare a PyTorch `nn.Module` to compute directional similarity:
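
The hunk's last context line leads into the evaluation guide's definition of a directional-similarity module: CLIP directional similarity measures how well the change between two images matches the change between their captions in CLIP embedding space. Below is a minimal sketch of what such a module can look like. It reuses `clip_id`, `image_processor`, and `image_encoder` from the hunk's context lines; the tokenizer/text-encoder loading and the class layout are illustrative assumptions, not part of this patch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import (
    CLIPImageProcessor,
    CLIPTokenizer,
    CLIPTextModelWithProjection,
    CLIPVisionModelWithProjection,
)

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_id = "openai/clip-vit-large-patch14"

# Loaded in the surrounding doc section; repeated here so the sketch is self-contained.
tokenizer = CLIPTokenizer.from_pretrained(clip_id)
text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device)
image_processor = CLIPImageProcessor.from_pretrained(clip_id)
image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device)


class DirectionalSimilarity(nn.Module):
    """Hypothetical sketch: CLIP directional similarity between an image edit and a caption edit."""

    def __init__(self, tokenizer, text_encoder, image_processor, image_encoder):
        super().__init__()
        self.tokenizer = tokenizer
        self.text_encoder = text_encoder
        self.image_processor = image_processor
        self.image_encoder = image_encoder

    def encode_image(self, image):
        # Project the image into CLIP space and L2-normalize the embedding.
        inputs = self.image_processor(images=image, return_tensors="pt").to(self.image_encoder.device)
        return F.normalize(self.image_encoder(**inputs).image_embeds, dim=-1)

    def encode_text(self, text):
        # Project the caption into CLIP space and L2-normalize the embedding.
        inputs = self.tokenizer(text, padding=True, truncation=True, return_tensors="pt").to(
            self.text_encoder.device
        )
        return F.normalize(self.text_encoder(**inputs).text_embeds, dim=-1)

    @torch.no_grad()
    def forward(self, src_image, tgt_image, src_caption, tgt_caption):
        # Direction of the edit in image space and in text space ...
        img_direction = F.normalize(self.encode_image(tgt_image) - self.encode_image(src_image), dim=-1)
        txt_direction = F.normalize(self.encode_text(tgt_caption) - self.encode_text(src_caption), dim=-1)
        # ... and the cosine similarity between the two directions.
        return F.cosine_similarity(img_direction, txt_direction, dim=-1)
```

Usage follows the same shape as the guide: instantiate `DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder)` and call it with the original image, the edited image, and their two captions; scores closer to 1 indicate the image edit tracked the caption edit.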