diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml index dd0193a3a8..42558b636c 100644 --- a/docs/source/en/_toctree.yml +++ b/docs/source/en/_toctree.yml @@ -17,7 +17,7 @@ - local: tutorials/autopipeline title: AutoPipeline - local: using-diffusers/custom_pipeline_overview - title: Load community pipelines and components + title: Community pipelines and components - local: using-diffusers/callback title: Pipeline callbacks - local: using-diffusers/reusing_seeds diff --git a/docs/source/en/using-diffusers/custom_pipeline_overview.md b/docs/source/en/using-diffusers/custom_pipeline_overview.md index bfe48d28be..b087e57056 100644 --- a/docs/source/en/using-diffusers/custom_pipeline_overview.md +++ b/docs/source/en/using-diffusers/custom_pipeline_overview.md @@ -10,376 +10,163 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o specific language governing permissions and limitations under the License. --> -# Load community pipelines and components - [[open-in-colab]] -## Community pipelines +# Community pipelines and components -> [!TIP] Take a look at GitHub Issue [#841](https://github.com/huggingface/diffusers/issues/841) for more context about why we're adding community pipelines to help everyone easily share their work without being slowed down. - -Community pipelines are any [`DiffusionPipeline`] class that are different from the original paper implementation (for example, the [`StableDiffusionControlNetPipeline`] corresponds to the [Text-to-Image Generation with ControlNet Conditioning](https://huggingface.co/papers/2302.05543) paper). They provide additional functionality or extend the original implementation of a pipeline. - -There are many cool community pipelines like [Marigold Depth Estimation](https://github.com/huggingface/diffusers/tree/main/examples/community#marigold-depth-estimation) or [InstantID](https://github.com/huggingface/diffusers/tree/main/examples/community#instantid-pipeline), and you can find all the official community pipelines [here](https://github.com/huggingface/diffusers/tree/main/examples/community). - -There are two types of community pipelines, those stored on the Hugging Face Hub and those stored on Diffusers GitHub repository. Hub pipelines are completely customizable (scheduler, models, pipeline code, etc.) while Diffusers GitHub pipelines are only limited to custom pipeline code. - -| | GitHub community pipeline | HF Hub community pipeline | -|----------------|------------------------------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------------------| -| usage | same | same | -| review process | open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging; may be slower | upload directly to a Hub repository without any review; this is the fastest workflow | -| visibility | included in the official Diffusers repository and documentation | included on your HF Hub profile and relies on your own usage/promotion to gain visibility | - - - - -To load a Hugging Face Hub community pipeline, pass the repository id of the community pipeline to the `custom_pipeline` argument and the model repository where you'd like to load the pipeline weights and components from. 
For example, the example below loads a dummy pipeline from [hf-internal-testing/diffusers-dummy-pipeline](https://huggingface.co/hf-internal-testing/diffusers-dummy-pipeline/blob/main/pipeline.py) and the pipeline weights and components from [google/ddpm-cifar10-32](https://huggingface.co/google/ddpm-cifar10-32): - -> [!WARNING] -> By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically! - -```py -from diffusers import DiffusionPipeline - -pipeline = DiffusionPipeline.from_pretrained( - "google/ddpm-cifar10-32", custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline", use_safetensors=True -) -``` - - - - -To load a GitHub community pipeline, pass the repository id of the community pipeline to the `custom_pipeline` argument and the model repository where you you'd like to load the pipeline weights and components from. You can also load model components directly. The example below loads the community [CLIP Guided Stable Diffusion](https://github.com/huggingface/diffusers/tree/main/examples/community#clip-guided-stable-diffusion) pipeline and the CLIP model components. - -```py -from diffusers import DiffusionPipeline -from transformers import CLIPImageProcessor, CLIPModel - -clip_model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K" - -feature_extractor = CLIPImageProcessor.from_pretrained(clip_model_id) -clip_model = CLIPModel.from_pretrained(clip_model_id) - -pipeline = DiffusionPipeline.from_pretrained( - "stable-diffusion-v1-5/stable-diffusion-v1-5", - custom_pipeline="clip_guided_stable_diffusion", - clip_model=clip_model, - feature_extractor=feature_extractor, - use_safetensors=True, -) -``` - - - - -### Load from a local file - -Community pipelines can also be loaded from a local file if you pass a file path instead. The path to the passed directory must contain a pipeline.py file that contains the pipeline class. - -```py -pipeline = DiffusionPipeline.from_pretrained( - "stable-diffusion-v1-5/stable-diffusion-v1-5", - custom_pipeline="./path/to/pipeline_directory/", - clip_model=clip_model, - feature_extractor=feature_extractor, - use_safetensors=True, -) -``` - -### Load from a specific version - -By default, community pipelines are loaded from the latest stable version of Diffusers. To load a community pipeline from another version, use the `custom_revision` parameter. - - - - -For example, to load from the main branch: - -```py -pipeline = DiffusionPipeline.from_pretrained( - "stable-diffusion-v1-5/stable-diffusion-v1-5", - custom_pipeline="clip_guided_stable_diffusion", - custom_revision="main", - clip_model=clip_model, - feature_extractor=feature_extractor, - use_safetensors=True, -) -``` - - - - -For example, to load from a previous version of Diffusers like v0.25.0: - -```py -pipeline = DiffusionPipeline.from_pretrained( - "stable-diffusion-v1-5/stable-diffusion-v1-5", - custom_pipeline="clip_guided_stable_diffusion", - custom_revision="v0.25.0", - clip_model=clip_model, - feature_extractor=feature_extractor, - use_safetensors=True, -) -``` - - - - -### Load with from_pipe - -Community pipelines can also be loaded with the [`~DiffusionPipeline.from_pipe`] method which allows you to load and reuse multiple pipelines without any additional memory overhead (learn more in the [Reuse a pipeline](./loading#reuse-a-pipeline) guide). The memory requirement is determined by the largest single pipeline loaded. 
- -For example, let's load a community pipeline that supports [long prompts with weighting](https://github.com/huggingface/diffusers/tree/main/examples/community#long-prompt-weighting-stable-diffusion) from a Stable Diffusion pipeline. - -```py -import torch -from diffusers import DiffusionPipeline - -pipe_sd = DiffusionPipeline.from_pretrained("emilianJR/CyberRealistic_V3", torch_dtype=torch.float16) -pipe_sd.to("cuda") -# load long prompt weighting pipeline -pipe_lpw = DiffusionPipeline.from_pipe( - pipe_sd, - custom_pipeline="lpw_stable_diffusion", -).to("cuda") - -prompt = "cat, hiding in the leaves, ((rain)), zazie rainyday, beautiful eyes, macro shot, colorful details, natural lighting, amazing composition, subsurface scattering, amazing textures, filmic, soft light, ultra-detailed eyes, intricate details, detailed texture, light source contrast, dramatic shadows, cinematic light, depth of field, film grain, noise, dark background, hyperrealistic dslr film still, dim volumetric cinematic lighting" -neg_prompt = "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation" -generator = torch.Generator(device="cpu").manual_seed(20) -out_lpw = pipe_lpw( - prompt, - negative_prompt=neg_prompt, - width=512, - height=512, - max_embeddings_multiples=3, - num_inference_steps=50, - generator=generator, - ).images[0] -out_lpw -``` - -
-
- -
Stable Diffusion with long prompt weighting
-
-
- -
Stable Diffusion
-
-
-
-## Example community pipelines
-
-Community pipelines are a really fun and creative way to extend the capabilities of the original pipeline with new and unique features. You can find all community pipelines in the [diffusers/examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) folder with inference and training examples for how to use them.
-
-This section showcases a couple of the community pipelines and hopefully it'll inspire you to create your own (feel free to open a PR for your community pipeline and ping us for a review)!
+Community pipelines are [`DiffusionPipeline`] classes that differ from the original paper implementation. They provide additional functionality or extend the original pipeline implementation.

 > [!TIP]
-> The [`~DiffusionPipeline.from_pipe`] method is particularly useful for loading community pipelines because many of them don't have pretrained weights and add a feature on top of an existing pipeline like Stable Diffusion or Stable Diffusion XL. You can learn more about the [`~DiffusionPipeline.from_pipe`] method in the [Load with from_pipe](custom_pipeline_overview#load-with-from_pipe) section.
+> Check out the community pipelines in [diffusers/examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) with inference and training examples for how to use them.
-
-
+Community pipelines are stored either on the Hub or in the Diffusers GitHub repository. Hub pipelines are completely customizable (scheduler, models, pipeline code, etc.) while GitHub pipelines are limited to only the custom pipeline code. The table below compares the two community pipeline types.

-[Marigold](https://marigoldmonodepth.github.io/) is a depth estimation diffusion pipeline that uses the rich existing and inherent visual knowledge in diffusion models. It takes an input image and denoises and decodes it into a depth map. Marigold performs well even on images it hasn't seen before.

+| | GitHub | Hub |
+|---|---|---|
+| Usage | Same. | Same. |
+| Review process | Open a Pull Request on GitHub and undergo a review process from the Diffusers team before merging. This option is slower. | Upload directly to a Hub repository without a review. This is the fastest option. |
+| Visibility | Included in the official Diffusers repository and docs. | Included on your Hub profile and relies on your own usage and promotion to gain visibility. |
+
+## custom_pipeline
+
+Load either community pipeline type by passing the `custom_pipeline` argument to [`~DiffusionPipeline.from_pretrained`].

```py
import torch
-from PIL import Image
from diffusers import DiffusionPipeline
-from diffusers.utils import load_image

pipeline = DiffusionPipeline.from_pretrained(
-    "prs-eth/marigold-lcm-v1-0",
-    custom_pipeline="marigold_depth_estimation",
+    "stabilityai/stable-diffusion-3-medium-diffusers",
+    custom_pipeline="pipeline_stable_diffusion_3_instruct_pix2pix",
    torch_dtype=torch.float16,
-    variant="fp16",
+    device_map="cuda"
)
-
-pipeline.to("cuda")
-image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/community-marigold.png")
-output = pipeline(
-    image,
-    denoising_steps=4,
-    ensemble_size=5,
-    processing_res=768,
-    match_input_res=True,
-    batch_size=0,
-    seed=33,
-    color_map="Spectral",
-    show_progress_bar=True,
-)
-depth_colored: Image.Image = output.depth_colored
-depth_colored.save("./depth_colored.png")
```
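Once loaded, the community pipeline is called like any other [`DiffusionPipeline`]. The snippet below is a minimal sketch for an InstructPix2Pix-style pipeline; the `prompt` and `image` arguments are assumptions, so check the `__call__` signature in the community pipeline file for the arguments it actually accepts.

```py
from diffusers.utils import load_image

# hypothetical input image to edit
image = load_image("path/to/image.png")
# assumed InstructPix2Pix-style call: an edit instruction plus the image to edit
edited_image = pipeline("turn the sky into a sunset", image=image).images[0]
edited_image.save("edited_image.png")
```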
-
- -
original image
-
-
- -
colorized depth image
-
-
- -
-
-[HD-Painter](https://hf.co/papers/2312.14091) is a high-resolution inpainting pipeline. It introduces a *Prompt-Aware Introverted Attention (PAIntA)* layer to better align a prompt with the area to be inpainted, and *Reweighting Attention Score Guidance (RASG)* to keep the latents more prompt-aligned and within their trained domain to generate realistc images.
+Add the `custom_revision` argument to [`~DiffusionPipeline.from_pretrained`] to load a community pipeline from a specific version (for example, `v0.30.0` or `main`). By default, community pipelines are loaded from the latest stable version of Diffusers.

```py
import torch
-from diffusers import DiffusionPipeline, DDIMScheduler
-from diffusers.utils import load_image
+from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
-    "stable-diffusion-v1-5/stable-diffusion-v1-5-inpainting",
-    custom_pipeline="hd_painter"
+    "stabilityai/stable-diffusion-3-medium-diffusers",
+    custom_pipeline="pipeline_stable_diffusion_3_instruct_pix2pix",
+    custom_revision="main",
+    torch_dtype=torch.float16,
+    device_map="cuda"
)
-pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
-init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/hd-painter.jpg")
-mask_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/hd-painter-mask.png")
-prompt = "football"
-image = pipeline(prompt, init_image, mask_image, use_rasg=True, use_painta=True, generator=torch.manual_seed(0)).images[0]
-image
```
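For a community pipeline hosted on the Hub, `custom_pipeline` also accepts the repository id containing the pipeline code. The example below (carried over from an earlier version of this guide) loads a dummy pipeline from [hf-internal-testing/diffusers-dummy-pipeline](https://huggingface.co/hf-internal-testing/diffusers-dummy-pipeline/blob/main/pipeline.py) and the pipeline weights and components from [google/ddpm-cifar10-32](https://huggingface.co/google/ddpm-cifar10-32).

```py
from diffusers import DiffusionPipeline

# the pipeline code comes from the repository passed to custom_pipeline,
# while the weights and components come from the model repository
pipeline = DiffusionPipeline.from_pretrained(
    "google/ddpm-cifar10-32",
    custom_pipeline="hf-internal-testing/diffusers-dummy-pipeline",
    use_safetensors=True,
)
```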
-
- -
original image
-
-
- -
generated image
-
-</div>

+> [!WARNING]
+> While the Hugging Face Hub [scans](https://huggingface.co/docs/hub/security-malware) files, you should still inspect the Hub pipeline code and make sure it is safe.

There are a few ways to load a community pipeline.

- Pass a path to `custom_pipeline` to load a local community pipeline. The directory must contain a `pipeline.py` file containing the pipeline class.

  ```py
  import torch
  from diffusers import DiffusionPipeline

  pipeline = DiffusionPipeline.from_pretrained(
      "stabilityai/stable-diffusion-3-medium-diffusers",
      custom_pipeline="path/to/pipeline_directory",
      torch_dtype=torch.float16,
      device_map="cuda"
  )
  ```

- The `custom_pipeline` argument is also supported by [`~DiffusionPipeline.from_pipe`], which is useful for [reusing pipelines](./loading#reuse-a-pipeline) without using additional memory. It limits the memory usage to only the largest pipeline loaded.

  ```py
  import torch
  from diffusers import DiffusionPipeline

  pipeline_sd = DiffusionPipeline.from_pretrained("emilianJR/CyberRealistic_V3", torch_dtype=torch.float16, device_map="cuda")
  pipeline_lpw = DiffusionPipeline.from_pipe(
      pipeline_sd, custom_pipeline="lpw_stable_diffusion", device_map="cuda"
  )
  ```

  The [`~DiffusionPipeline.from_pipe`] method is especially useful for loading community pipelines because many of them don't have pretrained weights. Community pipelines generally add a feature on top of an existing pipeline.

## Community components

Community components let users build pipelines with custom transformers, UNets, VAEs, and schedulers not supported by Diffusers. These components require Python module implementations.

-This section shows how users should use community components to build a community pipeline.
+This section shows how to use community components to build a community pipeline, using [showlab/show-1-base](https://huggingface.co/showlab/show-1-base) as an example.

-You'll use the [showlab/show-1-base](https://huggingface.co/showlab/show-1-base) pipeline checkpoint as an example.
-
-1. Import and load the text encoder from Transformers:
-
-```python
-from transformers import T5Tokenizer, T5EncoderModel
-
-pipe_id = "showlab/show-1-base"
-tokenizer = T5Tokenizer.from_pretrained(pipe_id, subfolder="tokenizer")
-text_encoder = T5EncoderModel.from_pretrained(pipe_id, subfolder="text_encoder")
-```
-
-2. Load a scheduler:
+1. Load the required components: the tokenizer, text encoder, scheduler, and image processor. The text encoder is generally imported from [Transformers](https://huggingface.co/docs/transformers/index).

```python
from transformers import T5Tokenizer, T5EncoderModel, CLIPImageProcessor
from diffusers import DPMSolverMultistepScheduler

+pipeline_id = "showlab/show-1-base"
tokenizer = T5Tokenizer.from_pretrained(pipeline_id, subfolder="tokenizer")
text_encoder = T5EncoderModel.from_pretrained(pipeline_id, subfolder="text_encoder")
scheduler = DPMSolverMultistepScheduler.from_pretrained(pipeline_id, subfolder="scheduler")
-```
-
-3. Load an image processor:
-
-```python
-from transformers import CLIPImageProcessor
-
feature_extractor = CLIPImageProcessor.from_pretrained(pipeline_id, subfolder="feature_extractor")
```

+> [!WARNING]
+> In steps 2 and 3, the custom [UNet](https://github.com/showlab/Show-1/blob/main/showone/models/unet_3d_condition.py) and [pipeline](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/pipeline_t2v_base_pixel.py) implementation must match the format shown in their files for this example to work.

-In steps 4 and 5, the custom [UNet](https://github.com/showlab/Show-1/blob/main/showone/models/unet_3d_condition.py) and [pipeline](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py) implementation must match the format shown in their files for this example to work.
-
-4. Now you'll load a [custom UNet](https://github.com/showlab/Show-1/blob/main/showone/models/unet_3d_condition.py), which in this example, has already been implemented in [showone_unet_3d_condition.py](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py) for your convenience. You'll notice the [`UNet3DConditionModel`] class name is changed to `ShowOneUNet3DConditionModel` because [`UNet3DConditionModel`] already exists in Diffusers. Any components needed for the `ShowOneUNet3DConditionModel` class should be placed in showone_unet_3d_condition.py.
-
-   Once this is done, you can initialize the UNet:
-
-   ```python
-   from showone_unet_3d_condition import ShowOneUNet3DConditionModel
-
-   unet = ShowOneUNet3DConditionModel.from_pretrained(pipe_id, subfolder="unet")
-   ```
-
-5. Finally, you'll load the custom pipeline code. For this example, it has already been created for you in [pipeline_t2v_base_pixel.py](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/pipeline_t2v_base_pixel.py). This script contains a custom `TextToVideoIFPipeline` class for generating videos from text. Just like the custom UNet, any code needed for the custom pipeline to work should go in pipeline_t2v_base_pixel.py.
-
-Once everything is in place, you can initialize the `TextToVideoIFPipeline` with the `ShowOneUNet3DConditionModel`:
+2. Load a [custom UNet](https://github.com/showlab/Show-1/blob/main/showone/models/unet_3d_condition.py) which is already implemented in [showone_unet_3d_condition.py](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py). The [`UNet3DConditionModel`] class name is renamed to `ShowOneUNet3DConditionModel` because [`UNet3DConditionModel`] already exists in Diffusers. Any components required for the `ShowOneUNet3DConditionModel` class should be placed in `showone_unet_3d_condition.py`.

```python
from showone_unet_3d_condition import ShowOneUNet3DConditionModel

unet = ShowOneUNet3DConditionModel.from_pretrained(pipeline_id, subfolder="unet")
```

+3. Load the custom pipeline code (already implemented in [pipeline_t2v_base_pixel.py](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/pipeline_t2v_base_pixel.py)). This script contains a custom `TextToVideoIFPipeline` class for generating videos from text. Like the custom UNet, any code required for `TextToVideoIFPipeline` should be placed in `pipeline_t2v_base_pixel.py`.

Initialize `TextToVideoIFPipeline` with `ShowOneUNet3DConditionModel`.
```python
import torch
from pipeline_t2v_base_pixel import TextToVideoIFPipeline

pipeline = TextToVideoIFPipeline(
    unet=unet,
    text_encoder=text_encoder,
    tokenizer=tokenizer,
    scheduler=scheduler,
    feature_extractor=feature_extractor
)
-pipeline = pipeline.to(device="cuda")
-pipeline.torch_dtype = torch.float16
+pipeline = pipeline.to(device="cuda", dtype=torch.float16)
```

-Push the pipeline to the Hub to share with the community!
+4. Push the pipeline to the Hub to share with the community.

```python
pipeline.push_to_hub("custom-t2v-pipeline")
```

-After the pipeline is successfully pushed, you need to make a few changes:
+After the pipeline is successfully pushed, make the following changes.

-1. Change the `_class_name` attribute in [model_index.json](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/model_index.json#L2) to `"pipeline_t2v_base_pixel"` and `"TextToVideoIFPipeline"`.
-2. Upload `showone_unet_3d_condition.py` to the [unet](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py) subfolder.
-3. Upload `pipeline_t2v_base_pixel.py` to the pipeline [repository](https://huggingface.co/sayakpaul/show-1-base-with-code/tree/main).
+- Change the `_class_name` attribute in [model_index.json](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/model_index.json#L2) to `"pipeline_t2v_base_pixel"` and `"TextToVideoIFPipeline"`.
+- Upload `showone_unet_3d_condition.py` to the [unet](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py) subfolder.
+- Upload `pipeline_t2v_base_pixel.py` to the pipeline [repository](https://huggingface.co/sayakpaul/show-1-base-with-code/tree/main).

To run inference, add the `trust_remote_code` argument while initializing the pipeline to handle all the "magic" behind the scenes.

-> [!WARNING]
-> As an additional precaution with `trust_remote_code=True`, we strongly encourage you to pass a commit hash to the `revision` parameter in [`~DiffusionPipeline.from_pretrained`] to make sure the code hasn't been updated with some malicious new lines of code (unless you fully trust the model owners).

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "/", trust_remote_code=True, torch_dtype=torch.float16
-).to("cuda")
-
-prompt = "hello"
-
-# Text embeds
-prompt_embeds, negative_embeds = pipeline.encode_prompt(prompt)
-
-# Keyframes generation (8x64x40, 2fps)
-video_frames = pipeline(
-    prompt_embeds=prompt_embeds,
-    negative_prompt_embeds=negative_embeds,
-    num_frames=8,
-    height=40,
-    width=64,
-    num_inference_steps=2,
-    guidance_scale=9.0,
-    output_type="pt"
-).frames
-```
)
```

-As an additional reference, take a look at the repository structure of [stabilityai/japanese-stable-diffusion-xl](https://huggingface.co/stabilityai/japanese-stable-diffusion-xl/) which also uses the `trust_remote_code` feature.
- -```python -from diffusers import DiffusionPipeline -import torch - -pipeline = DiffusionPipeline.from_pretrained( - "stabilityai/japanese-stable-diffusion-xl", trust_remote_code=True ) -pipeline.to("cuda") ``` + +> [!WARNING] +> As an additional precaution with `trust_remote_code=True`, we strongly encourage passing a commit hash to the `revision` argument in [`~DiffusionPipeline.from_pretrained`] to make sure the code hasn't been updated with new malicious code (unless you fully trust the model owners). + +## Resources + +- Take a look at Issue [#841](https://github.com/huggingface/diffusers/issues/841) for more context about why we're adding community pipelines to help everyone easily share their work without being slowed down. +- Check out the [stabilityai/japanese-stable-diffusion-xl](https://huggingface.co/stabilityai/japanese-stable-diffusion-xl/) repository for an additional example of a community pipeline that also uses the `trust_remote_code` feature. \ No newline at end of file