<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Text-guided depth-to-image generation

[[open-in-colab]]

The [`StableDiffusionDepth2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images. You can also pass a `depth_map` to preserve the image structure. If no `depth_map` is provided, the pipeline automatically predicts the depth with an integrated [depth-estimation model](https://github.com/isl-org/MiDaS).

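When you do supply your own `depth_map`, the pipeline rescales it to the `[-1, 1]` range the model works in. As a minimal illustration (not the pipeline's exact internal code, which also resizes the map to the latent resolution), the normalization can be sketched like this, with `prepare_depth_map` being a hypothetical helper name:

```python
import numpy as np
import torch


def prepare_depth_map(depth: np.ndarray) -> torch.Tensor:
    """Rescale a raw depth array to [-1, 1] and add batch and channel
    dimensions, giving a tensor of shape (1, 1, H, W)."""
    depth = depth.astype(np.float32)
    # Min-max normalize to [0, 1], guarding against a constant map
    depth = (depth - depth.min()) / max(float(depth.max() - depth.min()), 1e-8)
    # Shift to [-1, 1]
    depth = depth * 2.0 - 1.0
    return torch.from_numpy(depth)[None, None]


# Synthetic horizontal-gradient "depth" image for demonstration
raw = np.tile(np.arange(64, dtype=np.float32), (64, 1))
depth_map = prepare_depth_map(raw)
print(depth_map.shape)  # torch.Size([1, 1, 64, 64])
```

A tensor prepared this way matches the general shape expected by the pipeline's `depth_map` argument.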
Start by creating an instance of the [`StableDiffusionDepth2ImgPipeline`]:

```python
import torch
import requests
from PIL import Image

from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")
```

Now pass your prompt to the pipeline. You can also pass a `negative_prompt` to prevent certain words from guiding how the image is generated:

```python
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)
prompt = "two tigers"
n_prompt = "bad, deformed, ugly, bad anatomy"
image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
image
```
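The `strength` argument (`0.7` above) controls how much the initial image is altered: in img2img-style diffusers pipelines, only the last `strength` fraction of the noise schedule is denoised, so higher values give the model more freedom to deviate from the input. A simplified sketch of that timestep bookkeeping (the helper name is illustrative, not a diffusers API):

```python
def steps_actually_run(num_inference_steps: int, strength: float) -> int:
    """Approximate how many denoising steps an img2img-style pipeline
    runs: the earliest noise levels are skipped, and denoising starts
    from a partially noised version of the init image."""
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    t_start = max(num_inference_steps - init_timestep, 0)
    return num_inference_steps - t_start


print(steps_actually_run(50, 0.7))  # 35
print(steps_actually_run(50, 1.0))  # 50 (init image fully re-noised)
```

With `strength=1.0` the entire schedule runs and the initial image contributes mostly through the depth map; with small values the output stays close to the input.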

| Input                                                                                                                          | Output                                                                                                                                |
|--------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------|
| <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/coco-cats.png" width="500"/> | <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/depth2img-tigers.png" width="500"/> |

Play around with the Space below and see if you notice a difference between images generated with and without a depth map!

<iframe
	src="https://radames-stable-diffusion-depth2img.hf.space"
	frameborder="0"
	width="850"
	height="500"
></iframe>