From 016866792de772dadf07a32ee6ad6bae09791f1a Mon Sep 17 00:00:00 2001
From: TimothyAlexisVass <55708319+TimothyAlexisVass@users.noreply.github.com>
Date: Fri, 6 Oct 2023 20:20:06 +0200
Subject: [PATCH] Minor fixes (#5309)

tiny fixes
---
 docs/source/en/using-diffusers/write_own_pipeline.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/en/using-diffusers/write_own_pipeline.md b/docs/source/en/using-diffusers/write_own_pipeline.md
index 42b3e4d676..a9243a7b9a 100644
--- a/docs/source/en/using-diffusers/write_own_pipeline.md
+++ b/docs/source/en/using-diffusers/write_own_pipeline.md
@@ -112,7 +112,7 @@ As you can see, this is already more complex than the DDPM pipeline which only c
-💡 Read the [How does Stable Diffusion work?](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) blog for more details about how the VAE, UNet, and text encoder models.
+💡 Read the [How does Stable Diffusion work?](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) blog for more details about how the VAE, UNet, and text encoder models work.
@@ -214,7 +214,7 @@ Next, generate some initial random noise as a starting point for the diffusion p
 ```py
 >>> latents = torch.randn(
-...     (batch_size, unet.in_channels, height // 8, width // 8),
+...     (batch_size, unet.config.in_channels, height // 8, width // 8),
 ...     generator=generator,
 ... )
 >>> latents = latents.to(torch_device)
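
The second hunk above switches `unet.in_channels` to `unet.config.in_channels`: in diffusers, model hyperparameters such as `in_channels` are exposed on the model's config object, and reading them directly off the module is deprecated. The sketch below illustrates the access pattern without requiring diffusers or torch; `UNetStandIn` is a hypothetical stand-in for `UNet2DConditionModel`, with `in_channels=4` and the `// 8` latent downscaling factor assumed from the patched snippet.

```python
from types import SimpleNamespace


class UNetStandIn:
    """Hypothetical stand-in for a diffusers UNet model."""

    def __init__(self, in_channels=4):
        # Mirrors diffusers' convention of keeping hyperparameters
        # on a config object rather than as module attributes.
        self.config = SimpleNamespace(in_channels=in_channels)


unet = UNetStandIn()
batch_size, height, width = 1, 512, 512

# Latent shape computed as in the patched snippet: the channel count
# comes from the config, and spatial dims are downscaled by 8.
latent_shape = (batch_size, unet.config.in_channels, height // 8, width // 8)
print(latent_shape)  # (1, 4, 64, 64)
```

In recent diffusers releases, accessing `unet.in_channels` directly emits a deprecation warning, which is why the docs snippet was updated to go through `unet.config`.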