<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
# Controlling generation of diffusion models

Controlling the outputs generated by diffusion models has long been pursued by the community and is now an active research topic. In many popular diffusion models, subtle changes to the inputs, both images and text prompts, can drastically change the outputs. In an ideal world, we want to be able to control how semantics are preserved and changed.

Most examples of preserving semantics reduce to being able to accurately map a change in input to a change in output. For example, adding an adjective to a subject in a prompt should preserve the entire image, modifying only that subject. Likewise, image variation of a particular subject should preserve the subject's pose.

Additionally, there are qualities of generated images that we would like to influence beyond semantic preservation. In general, we would like our outputs to be of good quality, adhere to a particular style, or be realistic.

We document some of the techniques `diffusers` supports to control generation of diffusion models. Much of this is cutting-edge research and can be quite nuanced. If something needs clarifying or you have a suggestion, don't hesitate to open a discussion on the [forum](https://discuss.huggingface.co/) or a [GitHub issue](https://github.com/huggingface/diffusers/issues).

We provide a high-level explanation of how generation can be controlled as well as a snippet of the technical details. For more in-depth explanations of the technical details, the original papers, which are linked from the pipelines, are always the best resources.

Depending on the use case, one should choose a technique accordingly. In many cases, these techniques can be combined. For example, one can combine Textual Inversion with SEGA to provide more semantic guidance to the outputs generated using Textual Inversion.

Unless otherwise mentioned, these are techniques that work with existing models and don't require their own weights.

1. [Instruct Pix2Pix](#instruct-pix2pix)
2. [Pix2Pix Zero](#pix2pixzero)
3. [Attend and Excite](#attend-and-excite)
4. [Semantic Guidance](#semantic-guidance)
5. [Self-attention Guidance](#self-attention-guidance)
6. [Depth2Image](#depth2image)
7. [MultiDiffusion Panorama](#multidiffusion-panorama)
8. [DreamBooth](#dreambooth)
9. [Textual Inversion](#textual-inversion)
10. [ControlNet](#controlnet)
## Instruct Pix2Pix

[Paper](https://arxiv.org/abs/2211.09800)

[Instruct Pix2Pix](../api/pipelines/stable_diffusion/pix2pix) is fine-tuned from Stable Diffusion to support editing input images. It takes an image and a prompt describing an edit as inputs, and it outputs the edited image.
Instruct Pix2Pix has been explicitly trained to work well with [InstructGPT](https://openai.com/blog/instruction-following/)-like prompts.

See [here](../api/pipelines/stable_diffusion/pix2pix) for more information on how to use it.
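As a rough illustration (not the canonical usage example; see the pipeline page for that), a minimal sketch with `StableDiffusionInstructPix2PixPipeline` could look like the following. The input path and the parameter values are placeholders.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

# Load the Instruct Pix2Pix weights (here: the "timbrooks/instruct-pix2pix" checkpoint).
pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Placeholder path; any RGB image to be edited works here.
image = Image.open("input.png").convert("RGB")

# `image_guidance_scale` trades off faithfulness to the input image against the edit strength.
edited = pipe(
    "turn the sky into a sunset", image=image, num_inference_steps=20, image_guidance_scale=1.5
).images[0]
```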
## Pix2Pix Zero

[Paper](https://arxiv.org/abs/2302.03027)

[Pix2Pix Zero](../api/pipelines/stable_diffusion/pix2pix_zero) allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics.

The denoising process is guided from one conceptual embedding towards another conceptual embedding. The intermediate latents are optimized during the denoising process to push the attention maps towards reference attention maps. The reference attention maps are from the denoising process of the input image and are used to encourage semantic preservation.

Pix2Pix Zero can be used both to edit synthetic images as well as real images.

- To edit synthetic images, one first generates an image given a caption. Next, we generate image captions for the concept that shall be edited and for the new target concept. We can use a model like [Flan-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5) for this purpose. Then, "mean" prompt embeddings for both the source and target concepts are created via the text encoder. Finally, the pix2pix-zero algorithm is used to edit the synthetic image.
- To edit a real image, one first generates an image caption using a model like [BLIP](https://huggingface.co/docs/transformers/model_doc/blip). Then one applies DDIM inversion on the prompt and image to generate "inverse" latents. Similar to before, "mean" prompt embeddings for both source and target concepts are created, and finally the pix2pix-zero algorithm in combination with the "inverse" latents is used to edit the image.

<Tip>

Pix2Pix Zero is the first model that allows "zero-shot" image editing. This means that the model
can edit an image in less than a minute on a consumer GPU, as shown [here](../api/pipelines/stable_diffusion/pix2pix_zero#usage-example).

</Tip>

As mentioned above, Pix2Pix Zero includes optimizing the latents (and not any of the UNet, VAE, or the text encoder) to steer the generation toward a specific concept. This means that the overall
pipeline might require more memory than a standard [StableDiffusionPipeline](../api/pipelines/stable_diffusion/text2img).

See [here](../api/pipelines/stable_diffusion/pix2pix_zero) for more information on how to use it.
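To make the "mean" prompt-embedding step above concrete, here is a rough sketch of how such embeddings could be computed with a Stable Diffusion pipeline's `tokenizer` and `text_encoder`. The captions are hand-written stand-ins for Flan-T5 generated ones, and the resulting tensors would then be handed to the Pix2Pix Zero pipeline as its source and target embeddings (refer to its API page for the exact call).

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")


def mean_prompt_embedding(captions):
    # Tokenize several captions describing one concept and average their text-encoder outputs.
    tokens = pipe.tokenizer(
        captions,
        padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True,
        return_tensors="pt",
    ).input_ids.to(pipe.device)
    with torch.no_grad():
        embeds = pipe.text_encoder(tokens).last_hidden_state
    return embeds.mean(dim=0, keepdim=True)


# Hypothetical captions for a cat -> dog edit; in practice they could come from Flan-T5.
source_embeds = mean_prompt_embedding(["a photo of a cat", "a cat sitting on a sofa"])
target_embeds = mean_prompt_embedding(["a photo of a dog", "a dog sitting on a sofa"])
```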
## Attend and Excite

[Paper](https://arxiv.org/abs/2301.13826)

[Attend and Excite](../api/pipelines/stable_diffusion/attend_and_excite) allows subjects in the prompt to be faithfully represented in the final image.

A set of token indices is given as input, corresponding to the subjects in the prompt that need to be present in the image. During denoising, each token index is guaranteed to have a minimum attention threshold for at least one patch of the image. The intermediate latents are iteratively optimized during the denoising process to strengthen the attention of the most neglected subject token until the attention threshold is passed for all subject tokens.

Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (leaving the pre-trained weights untouched) in its pipeline and can require more memory than the usual `StableDiffusionPipeline`.

See [here](../api/pipelines/stable_diffusion/attend_and_excite) for more information on how to use it.
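A minimal sketch with `StableDiffusionAttendAndExcitePipeline` might look as follows; the checkpoint and token indices are illustrative.

```python
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat and a frog"
# Indices 2 and 5 point at "cat" and "frog" for this prompt; the exact indices depend on the
# tokenizer, so inspect `pipe.tokenizer(prompt).input_ids` before hard-coding them.
image = pipe(prompt, token_indices=[2, 5], guidance_scale=7.5, num_inference_steps=50).images[0]
```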
## Semantic Guidance (SEGA)

[Paper](https://arxiv.org/abs/2301.12247)

SEGA allows applying or removing one or more concepts from an image. The strength of each concept can also be controlled. For example, the smile concept can be used to incrementally increase or decrease the smile of a portrait.

Similar to how classifier-free guidance provides guidance via empty prompt inputs, SEGA provides guidance on conceptual prompts. Multiple of these conceptual prompts can be applied simultaneously. Each conceptual prompt can either add or remove its concept depending on whether the guidance is applied positively or negatively.

Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffusion process instead of performing any explicit gradient-based optimization.

See [here](../api/pipelines/semantic_stable_diffusion) for more information on how to use it.
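The sketch below shows roughly how positive and negative conceptual prompts could be passed to `SemanticStableDiffusionPipeline`; the editing arguments and their values are illustrative, so check the pipeline's API page for the exact parameters and defaults.

```python
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One concept is added (smile) and one is removed (glasses) on top of the base prompt.
out = pipe(
    "a photo of the face of a woman",
    editing_prompt=["smiling, smile", "glasses, wearing glasses"],
    reverse_editing_direction=[False, True],  # apply the first concept positively, the second negatively
    edit_guidance_scale=[6.0, 6.0],
    edit_warmup_steps=[10, 10],
)
image = out.images[0]
```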
## Self-attention Guidance (SAG)

[Paper](https://arxiv.org/abs/2210.00939)

[Self-attention Guidance](../api/pipelines/stable_diffusion/self_attention_guidance) improves the general quality of images.

SAG provides guidance from predictions that are not conditioned on high-frequency details towards fully conditioned images. The high-frequency details are extracted from the UNet self-attention maps.

See [here](../api/pipelines/stable_diffusion/self_attention_guidance) for more information on how to use it.
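A minimal sketch with `StableDiffusionSAGPipeline`; the prompt and scale values are placeholders.

```python
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# `sag_scale` controls the strength of self-attention guidance; 0.0 disables it.
image = pipe("a photo of an astronaut riding a horse", sag_scale=0.75, guidance_scale=7.5).images[0]
```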
## Depth2Image

[Project](https://huggingface.co/stabilityai/stable-diffusion-2-depth)

[Depth2Image](../pipelines/stable_diffusion_2#depthtoimage) is fine-tuned from Stable Diffusion to better preserve semantics for text-guided image variation.

It conditions on a monocular depth estimate of the original image.

See [here](../api/pipelines/stable_diffusion_2#depthtoimage) for more information on how to use it.
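A short sketch with `StableDiffusionDepth2ImgPipeline`; the input path, prompts, and `strength` are placeholders.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

# Placeholder path; the depth map is estimated internally when no `depth_map` is passed.
init_image = Image.open("input.png").convert("RGB")
image = pipe("two tigers", image=init_image, negative_prompt="bad, deformed", strength=0.7).images[0]
```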
<Tip>

An important distinction between methods like InstructPix2Pix and Pix2Pix Zero is that the former
involves fine-tuning the pre-trained weights while the latter does not. This means that you can
apply Pix2Pix Zero to any of the available Stable Diffusion models.

</Tip>
## MultiDiffusion Panorama

[Paper](https://arxiv.org/abs/2302.08113)

MultiDiffusion defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high-quality and diverse images. Results adhere to user-provided controls, such as a desired aspect ratio (e.g., panorama) and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.
[MultiDiffusion Panorama](../api/pipelines/stable_diffusion/panorama) allows generating high-quality images at arbitrary aspect ratios (e.g., panoramas).

See [here](../api/pipelines/stable_diffusion/panorama) for more information on how to use it to generate panoramic images.
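A minimal sketch with `StableDiffusionPanoramaPipeline`; the resolution below is just one example of a panoramic aspect ratio.

```python
import torch
from diffusers import DDIMScheduler, StableDiffusionPanoramaPipeline

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

# A wide output resolution yields the panoramic aspect ratio.
image = pipe("a photo of the dolomites", height=512, width=2048).images[0]
```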
## Fine-tuning your own models

In addition to pre-trained models, Diffusers has training scripts for fine-tuning models on user-provided data.
### DreamBooth

[DreamBooth](../training/dreambooth) fine-tunes a model to teach it about a new subject. For example, a few pictures of a person can be used to generate images of that person in different styles.

See [here](../training/dreambooth) for more information on how to use it.
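Once training has produced a checkpoint, using it is just like using any other Stable Diffusion model. In the sketch below, the model path is a placeholder for the output directory of the DreamBooth training script, and "sks" stands in for whatever identifier token was used during training.

```python
import torch
from diffusers import StableDiffusionPipeline

# "path-to-your-dreambooth-model" and the "sks" token are placeholders.
pipe = StableDiffusionPipeline.from_pretrained(
    "path-to-your-dreambooth-model", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of sks person in the style of a watercolor painting").images[0]
```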
### Textual Inversion

[Textual Inversion](../training/text_inversion) fine-tunes a model to teach it about a new concept. For example, a few pictures of a style of artwork can be used to generate images in that style.

See [here](../training/text_inversion) for more information on how to use it.
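Assuming a diffusers version that exposes `load_textual_inversion`, a learned concept can be sketched into a pipeline roughly like this; the repository id and the `<cat-toy>` placeholder token are illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load learned embeddings for a concept; the placeholder token must match the one used in training.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
image = pipe("a photo of a <cat-toy> on a beach").images[0]
```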
## ControlNet

[Paper](https://arxiv.org/abs/2302.05543)

[ControlNet](../api/pipelines/stable_diffusion/controlnet) is an auxiliary network which adds an extra conditioning input.
There are 8 canonical pre-trained ControlNets, trained on different conditionings such as edge detection, scribbles,
depth maps, and semantic segmentation.

See [here](../api/pipelines/stable_diffusion/controlnet) for more information on how to use it.
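A minimal sketch with the canny-edge ControlNet; the conditioning image path is a placeholder and is assumed to already be an edge map (e.g., produced with an external Canny filter).

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# The canny-edge checkpoint is one of the 8 canonical pre-trained ControlNets.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

canny_image = Image.open("canny_edges.png")  # placeholder path to an edge-map conditioning image
image = pipe("a futuristic city at night", image=canny_image, num_inference_steps=20).images[0]
```

The `controlnet_conditioning_scale` argument can additionally be used to weaken or strengthen how much the conditioning image influences the output.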