# Text-guided image inpainting
The [`StableDiffusionInpaintPipeline`] lets you edit specific parts of an image by providing a mask and a text prompt. It uses a version of Stable Diffusion, such as `runwayml/stable-diffusion-inpainting`, that has been specifically trained for inpainting tasks.

Start by loading an instance of the [`StableDiffusionInpaintPipeline`]:
```python
import PIL
import requests
import torch
from io import BytesIO

from diffusers import StableDiffusionInpaintPipeline

pipeline = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
)
pipeline = pipeline.to("cuda")
```
๋์ค์ ๊ต์ฒดํ ๊ฐ์์ง ์ด๋ฏธ์ง์ ๋ง์คํฌ๋ฅผ ๋ค์ด๋ก๋ํ์ธ์:
```python
def download_image(url):
    response = requests.get(url)
    return PIL.Image.open(BytesIO(response.content)).convert("RGB")


img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"

init_image = download_image(img_url).resize((512, 512))
mask_image = download_image(mask_url).resize((512, 512))
```
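The mask image controls which region is repainted: white pixels are replaced according to the prompt, while black pixels are preserved. As a minimal sketch (the rectangle coordinates below are arbitrary illustrative values, not part of the original guide), you could also draw a mask yourself with PIL instead of downloading one:

```python
# Minimal sketch: build a mask manually instead of downloading one.
# White pixels mark the region the pipeline will repaint; black pixels are kept.
from PIL import Image, ImageDraw

manual_mask = Image.new("RGB", (512, 512), "black")
draw = ImageDraw.Draw(manual_mask)
draw.rectangle((128, 128, 384, 384), fill="white")  # arbitrary example region
```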
์ด์ ๋ง์คํฌ๋ฅผ ๋ค๋ฅธ ๊ฒ์ผ๋ก ๊ต์ฒดํ๋ผ๋ ํ๋กฌํํธ๋ฅผ ๋ง๋ค ์ ์์ต๋๋ค:
prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
| `image` | `mask_image` | `prompt` | output |
|---|---|---|---|
| ![]() | ![]() | ***Face of a yellow cat, high resolution, sitting on a park bench*** | ![]() |
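The call above uses the pipeline's defaults. For more control over the result, the sketch below shows common optional arguments; `num_inference_steps`, `guidance_scale`, `negative_prompt`, and `generator` are standard `StableDiffusionInpaintPipeline` call parameters, but the specific values here are only illustrative:

```python
# Sketch with illustrative values: more steps and a fixed seed for reproducibility.
generator = torch.Generator(device="cuda").manual_seed(0)

image = pipeline(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    num_inference_steps=50,      # number of denoising steps
    guidance_scale=7.5,          # how strongly the prompt is followed
    negative_prompt="low quality, blurry",
    generator=generator,         # fixed seed for reproducible output
).images[0]

image.save("inpainted_cat.png")  # hypothetical output filename
```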
์ด์ ์ ์คํ์ ์ธ ์ธํ์ธํ ๊ตฌํ์์๋ ํ์ง์ด ๋ฎ์ ๋ค๋ฅธ ํ๋ก์ธ์ค๋ฅผ ์ฌ์ฉํ์ต๋๋ค. ์ด์ ๋ฒ์ ๊ณผ์ ํธํ์ฑ์ ๋ณด์ฅํ๊ธฐ ์ํด ์ ๋ชจ๋ธ์ด ํฌํจ๋์ง ์์ ์ฌ์ ํ์ต๋ ํ์ดํ๋ผ์ธ์ ๋ถ๋ฌ์ค๋ฉด ์ด์ ์ธํ์ธํ ๋ฐฉ๋ฒ์ด ๊ณ์ ์ ์ฉ๋ฉ๋๋ค.
์๋ Space์์ ์ด๋ฏธ์ง ์ธํ์ธํ ์ ์ง์ ํด๋ณด์ธ์!


