diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml index f098faa47e..357afb2ea2 100644 --- a/docs/source/en/_toctree.yml +++ b/docs/source/en/_toctree.yml @@ -63,12 +63,10 @@ title: Pipeline callbacks - local: using-diffusers/reusing_seeds title: Reproducible pipelines - - local: using-diffusers/control_brightness - title: Control image brightness + - local: using-diffusers/image_quality + title: Controlling image quality - local: using-diffusers/weighted_prompts title: Prompt techniques - - local: using-diffusers/freeu - title: Improve generation quality with FreeU title: Inference techniques - sections: - local: using-diffusers/sdxl diff --git a/docs/source/en/api/pipelines/overview.md b/docs/source/en/api/pipelines/overview.md index cd1232a90d..e7b8bf4936 100644 --- a/docs/source/en/api/pipelines/overview.md +++ b/docs/source/en/api/pipelines/overview.md @@ -97,6 +97,11 @@ The table below lists all the pipelines currently available in 🤗 Diffusers an - to - components + +[[autodoc]] pipelines.StableDiffusionMixin.enable_freeu + +[[autodoc]] pipelines.StableDiffusionMixin.disable_freeu + ## FlaxDiffusionPipeline [[autodoc]] pipelines.pipeline_flax_utils.FlaxDiffusionPipeline diff --git a/docs/source/en/using-diffusers/control_brightness.md b/docs/source/en/using-diffusers/control_brightness.md deleted file mode 100644 index 5fad664f60..0000000000 --- a/docs/source/en/using-diffusers/control_brightness.md +++ /dev/null @@ -1,58 +0,0 @@ - - -# Control image brightness - -The Stable Diffusion pipeline is mediocre at generating images that are either very bright or dark as explained in the [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) paper. The solutions proposed in the paper are currently implemented in the [`DDIMScheduler`] which you can use to improve the lighting in your images. - - - -💡 Take a look at the paper linked above for more details about the proposed solutions! 
- - - -One of the solutions is to train a model with *v prediction* and *v loss*. Add the following flag to the [`train_text_to_image.py`](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) or [`train_text_to_image_lora.py`](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) scripts to enable `v_prediction`: - -```bash ---prediction_type="v_prediction" -``` - -For example, let's use the [`ptx0/pseudo-journey-v2`](https://huggingface.co/ptx0/pseudo-journey-v2) checkpoint which has been finetuned with `v_prediction`. - -Next, configure the following parameters in the [`DDIMScheduler`]: - -1. `rescale_betas_zero_snr=True`, rescales the noise schedule to zero terminal signal-to-noise ratio (SNR) -2. `timestep_spacing="trailing"`, starts sampling from the last timestep - -```py -from diffusers import DiffusionPipeline, DDIMScheduler - -pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True) - -# switch the scheduler in the pipeline to use the DDIMScheduler -pipeline.scheduler = DDIMScheduler.from_config( - pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" -) -pipeline.to("cuda") -``` - -Finally, in your call to the pipeline, set `guidance_rescale` to prevent overexposure: - -```py -prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" -image = pipeline(prompt, guidance_rescale=0.7).images[0] -image -``` - -
- -
diff --git a/docs/source/en/using-diffusers/freeu.md b/docs/source/en/using-diffusers/freeu.md deleted file mode 100644 index 7b1fb908ca..0000000000 --- a/docs/source/en/using-diffusers/freeu.md +++ /dev/null @@ -1,135 +0,0 @@ - - -# Improve generation quality with FreeU - -[[open-in-colab]] - -The UNet is responsible for denoising during the reverse diffusion process, and there are two distinct features in its architecture: - -1. Backbone features primarily contribute to the denoising process -2. Skip features mainly introduce high-frequency features into the decoder module and can make the network overlook the semantics in the backbone features - -However, the skip connection can sometimes introduce unnatural image details. [FreeU](https://hf.co/papers/2309.11497) is a technique for improving image quality by rebalancing the contributions from the UNet’s skip connections and backbone feature maps. - -FreeU is applied during inference and it does not require any additional training. The technique works for different tasks such as text-to-image, image-to-image, and text-to-video. - -In this guide, you will apply FreeU to the [`StableDiffusionPipeline`], [`StableDiffusionXLPipeline`], and [`TextToVideoSDPipeline`]. You need to install Diffusers from source to run the examples below. - -## StableDiffusionPipeline - -Load the pipeline: - -```py -from diffusers import DiffusionPipeline -import torch - -pipeline = DiffusionPipeline.from_pretrained( - "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None -).to("cuda") -``` - -Then enable the FreeU mechanism with the FreeU-specific hyperparameters. These values are scaling factors for the backbone and skip features. 
- -```py -pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.2, b2=1.4) -``` - -The values above are from the official FreeU [code repository](https://github.com/ChenyangSi/FreeU) where you can also find [reference hyperparameters](https://github.com/ChenyangSi/FreeU#range-for-more-parameters) for different models. - - - -Disable the FreeU mechanism by calling `disable_freeu()` on a pipeline. - - - -And then run inference: - -```py -prompt = "A squirrel eating a burger" -seed = 2023 -image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] -image -``` - -The figure below compares non-FreeU and FreeU results respectively for the same hyperparameters used above (`prompt` and `seed`): - -![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/freeu/sdv1_5_freeu.jpg) - - -Let's see how Stable Diffusion 2 results are impacted: - -```py -from diffusers import DiffusionPipeline -import torch - -pipeline = DiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None -).to("cuda") - -prompt = "A squirrel eating a burger" -seed = 2023 - -pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.1, b2=1.2) -image = pipeline(prompt, generator=torch.manual_seed(seed)).images[0] -image -``` - -![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/freeu/sdv2_1_freeu.jpg) - -## Stable Diffusion XL - -Finally, let's take a look at how FreeU affects Stable Diffusion XL results: - -```py -from diffusers import DiffusionPipeline -import torch - -pipeline = DiffusionPipeline.from_pretrained( - "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, -).to("cuda") - -prompt = "A squirrel eating a burger" -seed = 2023 - -# Comes from -# https://wandb.ai/nasirk24/UNET-FreeU-SDXL/reports/FreeU-SDXL-Optimal-Parameters--Vmlldzo1NDg4NTUw -pipeline.enable_freeu(s1=0.6, s2=0.4, b1=1.1, b2=1.2) -image = pipeline(prompt, 
generator=torch.manual_seed(seed)).images[0] -image -``` - -![](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/freeu/sdxl_freeu.jpg) - -## Text-to-video generation - -FreeU can also be used to improve video quality: - -```python -from diffusers import DiffusionPipeline -from diffusers.utils import export_to_video -import torch - -model_id = "cerspense/zeroscope_v2_576w" -pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") - -prompt = "an astronaut riding a horse on mars" -seed = 2023 - -# The values come from -# https://github.com/lyn-rgb/FreeU_Diffusers#video-pipelines -pipe.enable_freeu(b1=1.2, b2=1.4, s1=0.9, s2=0.2) -video_frames = pipe(prompt, height=320, width=576, num_frames=30, generator=torch.manual_seed(seed)).frames[0] -export_to_video(video_frames, "astronaut_rides_horse.mp4") -``` - -Thanks to [kadirnar](https://github.com/kadirnar/) for helping to integrate the feature, and to [justindujardin](https://github.com/justindujardin) for the helpful discussions. diff --git a/docs/source/en/using-diffusers/image_quality.md b/docs/source/en/using-diffusers/image_quality.md new file mode 100644 index 0000000000..8961f88b90 --- /dev/null +++ b/docs/source/en/using-diffusers/image_quality.md @@ -0,0 +1,190 @@ + + +# Controlling image quality + +The components of a diffusion model, like the UNet and scheduler, can be optimized to improve the quality of generated images leading to better image lighting and details. These techniques are especially useful if you don't have the resources to simply use a larger model for inference. You can enable these techniques during inference without any additional training. + +This guide will show you how to turn these techniques on in your pipeline and how to configure them to improve the quality of your generated images. 
+
+## Lighting
+
+The Stable Diffusion models aren't very good at generating very bright or very dark images because the scheduler doesn't start sampling from the last timestep and it doesn't enforce a zero signal-to-noise ratio (SNR). The [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://hf.co/papers/2305.08891) paper proposes fixes for these issues, which are now available in some Diffusers schedulers.
+
+> [!TIP]
+> For inference, you need a model that has been trained with *v_prediction*. To train your own model with *v_prediction*, add the following flag to the [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) or [train_text_to_image_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) scripts.
+>
+> ```bash
+> --prediction_type="v_prediction"
+> ```
+
+For example, load the [ptx0/pseudo-journey-v2](https://hf.co/ptx0/pseudo-journey-v2) checkpoint, which was finetuned with `v_prediction`, and configure the following parameters in the [`DDIMScheduler`]:
+
+* `rescale_betas_zero_snr=True` to rescale the noise schedule to a zero terminal SNR
+* `timestep_spacing="trailing"` to start sampling from the last timestep
+
+Set `guidance_rescale` in the pipeline call to prevent over-exposure. A lower value increases brightness, but some details may appear washed out.
+
+```py
+import torch
+from diffusers import DiffusionPipeline, DDIMScheduler
+
+pipeline = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", use_safetensors=True)
+
+# rescale the noise schedule to a zero terminal SNR and sample from the last timestep
+pipeline.scheduler = DDIMScheduler.from_config(
+    pipeline.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing"
+)
+pipeline.to("cuda")
+prompt = "cinematic photo of a snowy mountain at night with the northern lights aurora borealis overhead, 35mm photograph, film, professional, 4k, highly detailed"
+generator = torch.Generator(device="cpu").manual_seed(23)
+image = pipeline(prompt, guidance_rescale=0.7, generator=generator).images[0]
+image
+```
+
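+To see what the two scheduler fixes do numerically, here is a rough, self-contained NumPy sketch in the spirit of the paper's algorithms (illustrative only; function and variable names are ours, not the exact Diffusers implementation). The first function shifts and rescales the cumulative noise schedule so the last timestep carries zero signal, and the second re-standardizes the classifier-free-guided noise prediction the way `guidance_rescale` does:
+
+```py
+import numpy as np
+
+def rescale_zero_terminal_snr(alphas_cumprod):
+    """Shift and rescale sqrt(alphas_cumprod) so the final timestep has zero SNR."""
+    sqrt_acp = np.sqrt(alphas_cumprod)
+    first, last = sqrt_acp[0], sqrt_acp[-1]
+    sqrt_acp = sqrt_acp - last                    # zero signal at the final timestep
+    sqrt_acp = sqrt_acp * first / (first - last)  # keep the first timestep unchanged
+    return sqrt_acp**2
+
+def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.7):
+    """Pull the guided prediction's std back toward the text prediction's std."""
+    rescaled = noise_cfg * (noise_pred_text.std() / noise_cfg.std())
+    # guidance_rescale=1.0 fully rescales, 0.0 keeps the original guided prediction
+    return guidance_rescale * rescaled + (1 - guidance_rescale) * noise_cfg
+
+# toy linear-beta schedule
+betas = np.linspace(1e-4, 0.02, 1000)
+alphas_cumprod = np.cumprod(1 - betas)
+rescaled = rescale_zero_terminal_snr(alphas_cumprod)
+print(rescaled[-1])  # 0.0 -> zero terminal SNR
+```
+
+Without the rescaling, `alphas_cumprod[-1]` stays slightly above zero, so the model never trains or samples at pure noise, which is one source of the washed-out lighting.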
+
+ +
default Stable Diffusion v2-1 image
+
+
+ +
image with zero SNR and trailing timestep spacing enabled
+
+
+ +## Details + +[FreeU](https://hf.co/papers/2309.11497) improves image details by rebalancing the UNet's backbone and skip connection weights. The skip connections can cause the model to overlook some of the backbone semantics which may lead to unnatural image details in the generated image. This technique does not require any additional training and can be applied on the fly during inference for tasks like image-to-image and text-to-video. + +Use the [`~pipelines.StableDiffusionMixin.enable_freeu`] method on your pipeline and configure the scaling factors for the backbone (`b1` and `b2`) and skip connections (`s1` and `s2`). The number after each scaling factor corresponds to the stage in the UNet where the factor is applied. Take a look at the [FreeU](https://github.com/ChenyangSi/FreeU#parameters) repository for reference hyperparameters for different models. + + + + +```py +import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16, safety_checker=None +).to("cuda") +pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.5, b2=1.6) +generator = torch.Generator(device="cpu").manual_seed(33) +prompt = "" +image = pipeline(prompt, generator=generator).images[0] +image +``` + +
+
+ +
FreeU disabled
+
+
+ +
FreeU enabled
+
+
+ +
+ + +```py +import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16, safety_checker=None +).to("cuda") +pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.4, b2=1.6) +generator = torch.Generator(device="cpu").manual_seed(80) +prompt = "A squirrel eating a burger" +image = pipeline(prompt, generator=generator).images[0] +image +``` + +
+
+ +
FreeU disabled
+
+
+ +
FreeU enabled
+
+
+ +
+ + +```py +import torch +from diffusers import DiffusionPipeline + +pipeline = DiffusionPipeline.from_pretrained( + "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, +).to("cuda") +pipeline.enable_freeu(s1=0.9, s2=0.2, b1=1.3, b2=1.4) +generator = torch.Generator(device="cpu").manual_seed(13) +prompt = "A squirrel eating a burger" +image = pipeline(prompt, generator=generator).images[0] +image +``` + +
+
+ +
FreeU disabled
+
+
+ +
FreeU enabled
+
+
+ +
+ + +```py +import torch +from diffusers import DiffusionPipeline +from diffusers.utils import export_to_video + +pipeline = DiffusionPipeline.from_pretrained( + "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16 +).to("cuda") +# values come from https://github.com/lyn-rgb/FreeU_Diffusers#video-pipelines +pipeline.enable_freeu(b1=1.2, b2=1.4, s1=0.9, s2=0.2) +prompt = "Confident teddy bear surfer rides the wave in the tropics" +generator = torch.Generator(device="cpu").manual_seed(47) +video_frames = pipeline(prompt, generator=generator).frames[0] +export_to_video(video_frames, "teddy_bear.mp4", fps=10) +``` + +
+
+ +
FreeU disabled
+
+
+ +
FreeU enabled
+
+
+ +
+
+ +Call the [`pipelines.StableDiffusionMixin.disable_freeu`] method to disable FreeU. + +```py +pipeline.disable_freeu() +```