diff --git a/docs/source/en/api/loaders/ip_adapter.md b/docs/source/en/api/loaders/ip_adapter.md
index 8805092d92..a10f30ef8e 100644
--- a/docs/source/en/api/loaders/ip_adapter.md
+++ b/docs/source/en/api/loaders/ip_adapter.md
@@ -23,3 +23,7 @@ Learn how to load an IP-Adapter checkpoint and image in the IP-Adapter [loading]
 ## IPAdapterMixin
 
 [[autodoc]] loaders.ip_adapter.IPAdapterMixin
+
+## IPAdapterMaskProcessor
+
+[[autodoc]] image_processor.IPAdapterMaskProcessor
\ No newline at end of file
diff --git a/docs/source/en/using-diffusers/ip_adapter.md b/docs/source/en/using-diffusers/ip_adapter.md
index 38f7b12398..4ae403538d 100644
--- a/docs/source/en/using-diffusers/ip_adapter.md
+++ b/docs/source/en/using-diffusers/ip_adapter.md
@@ -25,6 +25,9 @@ Let's take a look at how to use IP-Adapter's image prompting capabilities with t
 In all the following examples, you'll see the [`~loaders.IPAdapterMixin.set_ip_adapter_scale`] method. This method controls the amount of text or image conditioning to apply to the model. A value of `1.0` means the model is only conditioned on the image prompt. Lowering this value encourages the model to produce more diverse images, but they may not be as aligned with the image prompt. Typically, a value of `0.5` achieves a good balance between the two prompt types and produces good results.
 
+> [!TIP]
+> In the examples below, try adding `low_cpu_mem_usage=True` to the [`~loaders.IPAdapterMixin.load_ip_adapter`] method to speed up the loading time.
+
@@ -231,10 +234,21 @@ export_to_gif(frames, "gummy_bear.gif")
 
-> [!TIP]
-> While calling `load_ip_adapter()`, pass `low_cpu_mem_usage=True` to speed up the loading time.
+## Configure parameters
 
-All the pipelines supporting IP-Adapter accept a `ip_adapter_image_embeds` argument. If you need to run the IP-Adapter multiple times with the same image, you can encode the image once and save the embedding to the disk.
+There are a couple of IP-Adapter parameters that are useful to know about and can help you with your image generation tasks. These parameters can make your workflow more efficient or give you more control over image generation.
+
+### Image embeddings
+
+IP-Adapter-enabled pipelines provide the `ip_adapter_image_embeds` parameter to accept precomputed image embeddings. This is particularly useful in scenarios where you need to run the IP-Adapter pipeline multiple times because you have more than one image. For example, [multi IP-Adapter](#multi-ip-adapter) is a specific use case where you provide multiple style images to generate an image in a specific style. Loading and encoding multiple images each time you use the pipeline would be inefficient. Instead, you can precompute and save the image embeddings to disk (which can save a lot of space if you're using high-quality images) and load them when you need them.
+
+> [!TIP]
+> This parameter also gives you the flexibility to load embeddings from other sources. For example, ComfyUI image embeddings for IP-Adapters are compatible with Diffusers and should work out-of-the-box!
+
+Call the [`~StableDiffusionPipeline.prepare_ip_adapter_image_embeds`] method to encode the images and generate the image embeddings. Then you can save them to disk with `torch.save`.
+
+> [!TIP]
+> If you're using IP-Adapter with `ip_adapter_image_embeds` instead of `ip_adapter_image`, you can set `load_ip_adapter(image_encoder_folder=None,...)` because you don't need an image encoder when the embeddings are already precomputed.
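+
+As a rough sketch of how those two options fit together (the pipeline and adapter weights below are illustrative choices, not requirements):
+
+```py
+import torch
+from diffusers import AutoPipelineForText2Image
+
+pipeline = AutoPipelineForText2Image.from_pretrained(
+    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
+).to("cuda")
+
+# low_cpu_mem_usage=True speeds up loading; image_encoder_folder=None skips the
+# image encoder, which is only safe if you pass precomputed embeddings through
+# `ip_adapter_image_embeds` when calling the pipeline
+pipeline.load_ip_adapter(
+    "h94/IP-Adapter",
+    subfolder="sdxl_models",
+    weight_name="ip-adapter_sdxl.safetensors",
+    image_encoder_folder=None,
+    low_cpu_mem_usage=True,
+)
+```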
 
 ```py
 image_embeds = pipeline.prepare_ip_adapter_image_embeds(
@@ -248,10 +262,7 @@ image_embeds = pipeline.prepare_ip_adapter_image_embeds(
 torch.save(image_embeds, "image_embeds.ipadpt")
 ```
 
-Load the image embedding and pass it to the pipeline as `ip_adapter_image_embeds`
-
-> [!TIP]
-> ComfyUI image embeddings for IP-Adapters are fully compatible in Diffusers and should work out-of-box.
+Now load the image embeddings by passing them to the `ip_adapter_image_embeds` parameter.
 
 ```py
 image_embeds = torch.load("image_embeds.ipadpt")
@@ -264,8 +275,86 @@ images = pipeline(
 ).images
 ```
 
-> [!TIP]
-> If you use IP-Adapter with `ip_adapter_image_embedding` instead of `ip_adapter_image`, you can choose not to load an image encoder by passing `image_encoder_folder=None` to `load_ip_adapter()`.
+### IP-Adapter masking
+
+Binary masks specify which portion of the output image should be assigned to an IP-Adapter. This is useful for composing more than one IP-Adapter image. For each input IP-Adapter image, you must provide a binary mask and an IP-Adapter.
+
+To start, preprocess the input masks with the [`~image_processor.IPAdapterMaskProcessor.preprocess()`] method. For optimal results, provide the output height and width to [`~image_processor.IPAdapterMaskProcessor.preprocess()`]. This ensures masks with different aspect ratios are appropriately stretched. If the input masks already match the aspect ratio of the generated image, you don't have to set the `height` and `width`.
+
+```py
+from diffusers.image_processor import IPAdapterMaskProcessor
+
+mask1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask1.png")
+mask2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask2.png")
+
+output_height = 1024
+output_width = 1024
+
+processor = IPAdapterMaskProcessor()
+masks = processor.preprocess([mask1, mask2], height=output_height, width=output_width)
+```
+
+<div class="flex flex-row gap-4">
+  <div class="flex-1">
+    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask1.png"/>
+    <figcaption class="mt-2 text-center text-sm text-gray-500">mask one</figcaption>
+  </div>
+  <div class="flex-1">
+    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_mask2.png"/>
+    <figcaption class="mt-2 text-center text-sm text-gray-500">mask two</figcaption>
+  </div>
+</div>
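+
+The two masks above are just regular black-and-white images. If you'd rather build masks programmatically, a minimal sketch with PIL (assuming the `output_height` and `output_width` values defined earlier) could look like this:
+
+```py
+from PIL import Image, ImageDraw
+
+# white (255) marks the region an IP-Adapter image is applied to, black (0) is ignored
+mask_left = Image.new("L", (output_width, output_height), 0)
+ImageDraw.Draw(mask_left).rectangle([0, 0, output_width // 2, output_height], fill=255)
+
+mask_right = Image.new("L", (output_width, output_height), 0)
+ImageDraw.Draw(mask_right).rectangle([output_width // 2, 0, output_width, output_height], fill=255)
+```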
+
+When there is more than one input IP-Adapter image, load them as a list to ensure each image is assigned to a different IP-Adapter. Each of the input IP-Adapter images here corresponds to one of the masks generated above.
+
+```py
+face_image1 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png")
+face_image2 = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl2.png")
+
+ip_images = [[face_image1], [face_image2]]
+```
+
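+The nesting is what maps images to adapters: each inner list holds the image(s) for one loaded IP-Adapter, in the order the adapters are loaded. For example, if the first adapter should be conditioned on two reference images (`extra_image` here is a hypothetical second image), you would write:
+
+```py
+# one inner list per IP-Adapter; the first adapter gets two images, the second gets one
+ip_images = [[face_image1, extra_image], [face_image2]]
+```
+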
+<div class="flex flex-row gap-4">
+  <div class="flex-1">
+    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl1.png"/>
+    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image one</figcaption>
+  </div>
+  <div class="flex-1">
+    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_mask_girl2.png"/>
+    <figcaption class="mt-2 text-center text-sm text-gray-500">IP-Adapter image two</figcaption>
+  </div>
+</div>
+
+Now pass the preprocessed masks to `cross_attention_kwargs` in the pipeline call.
+
+```py
+pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name=["ip-adapter-plus-face_sdxl_vit-h.safetensors"] * 2)
+pipeline.set_ip_adapter_scale([0.7] * 2)
+generator = torch.Generator(device="cpu").manual_seed(0)
+num_images = 1
+
+image = pipeline(
+    prompt="2 girls",
+    ip_adapter_image=ip_images,
+    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
+    num_inference_steps=20,
+    num_images_per_prompt=num_images,
+    generator=generator,
+    cross_attention_kwargs={"ip_adapter_masks": masks}
+).images[0]
+image
+```
+
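+If the composition looks off, check that the counts line up: each mask is matched by position to an IP-Adapter image and a loaded adapter. A quick sanity check (a sketch, assuming `preprocess()` stacks one mask per input along the first dimension):
+
+```py
+# two adapters were loaded above, so expect two masks and two image groups
+assert masks.shape[0] == len(ip_images) == 2
+```
+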
+*Output comparison: IP-Adapter masking applied (left) vs. no IP-Adapter masking applied (right)*
+
 
 ## Specific use cases
 
@@ -279,6 +368,7 @@ Generating accurate faces is challenging because they are complex and nuanced. D
 * [ip-adapter-plus-face_sd15.safetensors](https://huggingface.co/h94/IP-Adapter/blob/main/models/ip-adapter-plus-face_sd15.safetensors) uses patch embeddings and is conditioned with images of cropped faces
 
 > [!TIP]
+>
 > [IP-Adapter-FaceID](https://huggingface.co/h94/IP-Adapter-FaceID) is a face-specific IP-Adapter trained with face ID embeddings instead of CLIP image embeddings, allowing you to generate more consistent faces in different contexts and styles. Try out this popular [community pipeline](https://github.com/huggingface/diffusers/tree/main/examples/community#ip-adapter-face-id) and see how it compares to the other face IP-Adapters.
 
 For face models, use the [h94/IP-Adapter](https://huggingface.co/h94/IP-Adapter) checkpoint. It is also recommended to use [`DDIMScheduler`] or [`EulerDiscreteScheduler`] for face models.
 
@@ -502,82 +592,3 @@ image
-
-### IP-Adapter masking
-
-Binary masks can be used to specify which portion of the output image should be assigned to an IP-Adapter.
-For each input IP-Adapter image, a binary mask and an IP-Adapter must be provided.
-
-Before passing the masks to the pipeline, it's essential to preprocess them using [`IPAdapterMaskProcessor.preprocess()`].
-
-> [!TIP]
-> For optimal results, provide the output height and width to [`IPAdapterMaskProcessor.preprocess()`]. This ensures that masks with differing aspect ratios are appropriately stretched. If the input masks already match the aspect ratio of the generated image, specifying height and width can be omitted.
-
-Here an example with two masks:
-
-```py
-from diffusers.image_processor import IPAdapterMaskProcessor
-
-mask1 = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_mask1.png")
-mask2 = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_mask2.png")
-
-output_height = 1024
-output_width = 1024
-
-processor = IPAdapterMaskProcessor()
-masks = processor.preprocess([mask1, mask2], height=output_height, width=output_width)
-```
-
-<div class="flex flex-row gap-4">
-  <div class="flex-1">
-    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_mask1.png"/>
-    <figcaption class="mt-2 text-center text-sm text-gray-500">mask one</figcaption>
-  </div>
-  <div class="flex-1">
-    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_mask2.png"/>
-    <figcaption class="mt-2 text-center text-sm text-gray-500">mask two</figcaption>
-  </div>
-</div>
-
-If you have more than one IP-Adapter image, load them into a list, ensuring each image is assigned to a different IP-Adapter.
-
-```py
-face_image1 = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_girl1.png")
-face_image2 = load_image("https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_girl2.png")
-
-ip_images = [[face_image1], [face_image2]]
-```
-
-<div class="flex flex-row gap-4">
-  <div class="flex-1">
-    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_girl1.png"/>
-    <figcaption class="mt-2 text-center text-sm text-gray-500">ip adapter image one</figcaption>
-  </div>
-  <div class="flex-1">
-    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/ip_mask_girl2.png"/>
-    <figcaption class="mt-2 text-center text-sm text-gray-500">ip adapter image two</figcaption>
-  </div>
-</div>
-
-Pass preprocessed masks to the pipeline using `cross_attention_kwargs` as shown below:
-
-```py
-pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name=["ip-adapter-plus-face_sdxl_vit-h.safetensors"] * 2)
-pipeline.set_ip_adapter_scale([0.7] * 2)
-generator = torch.Generator(device="cpu").manual_seed(0)
-num_images = 1
-
-image = pipeline(
-    prompt="2 girls",
-    ip_adapter_image=ip_images,
-    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
-    num_inference_steps=20, num_images_per_prompt=num_images,
-    generator=generator, cross_attention_kwargs={"ip_adapter_masks": masks}
-).images[0]
-image
-```
-
-*output image*
diff --git a/src/diffusers/loaders/ip_adapter.py b/src/diffusers/loaders/ip_adapter.py
index 2b70ed84d7..93959b9f0a 100644
--- a/src/diffusers/loaders/ip_adapter.py
+++ b/src/diffusers/loaders/ip_adapter.py
@@ -215,7 +215,7 @@ class IPAdapterMixin:
             else:
                 logger.warning(
                     "image_encoder is not loaded since `image_encoder_folder=None` passed. You will not be able to use `ip_adapter_image` when calling the pipeline with IP-Adapter."
-                    "Use `ip_adapter_image_embedding` to pass pre-geneated image embedding instead."
+                    "Use `ip_adapter_image_embeds` to pass pre-generated image embedding instead."
                 )
 
             # create feature extractor if it has not been registered to the pipeline yet