diff --git a/examples/textual_inversion/README.md b/examples/textual_inversion/README.md
index 74ebcd5191..b731797ecb 100644
--- a/examples/textual_inversion/README.md
+++ b/examples/textual_inversion/README.md
@@ -3,6 +3,15 @@
 [Textual inversion](https://arxiv.org/abs/2208.01618) is a method to personalize text2image models like stable diffusion on your own images using just 3-5 examples.
 The `textual_inversion.py` script shows how to implement the training procedure and adapt it for stable diffusion.
 
+## Running on Colab
+
+Colab for training
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb)
+
+Colab for inference
+[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb)
+
+## Running locally
 ### Installing the dependencies
 
 Before running the scripts, make sure to install the library's training dependencies:
@@ -64,7 +73,6 @@ A full training run takes ~1 hour on one V100 GPU.
 
 Once you have trained a model using the above command, the inference can be done simply using the `StableDiffusionPipeline`. Make sure to include the `placeholder_token` in your prompt.
 
-
 ```python
 from torch import autocast
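
The excerpt above cuts off at the start of the README's inference snippet. For orientation only, here is a minimal sketch of what textual-inversion inference with `StableDiffusionPipeline` might look like; it is not the code from this PR, and the model path `path-to-your-trained-model`, the `<cat-toy>` placeholder token, and the output filename are illustrative assumptions.

```python
# Hypothetical inference sketch (not the PR's snippet): load the model directory
# written by textual_inversion.py and prompt it with the learned placeholder token.
import torch
from diffusers import StableDiffusionPipeline

# "path-to-your-trained-model" stands in for the --output_dir used during training.
pipe = StableDiffusionPipeline.from_pretrained(
    "path-to-your-trained-model", torch_dtype=torch.float16
).to("cuda")

# The prompt must contain the placeholder_token chosen at training time,
# e.g. "<cat-toy>" here (an assumed example value).
prompt = "A <cat-toy> backpack"

image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("cat-backpack.png")
```

Loading in `float16` on a CUDA device keeps memory usage low; on a CPU-only machine the `torch_dtype` argument and the `.to("cuda")` call can simply be dropped.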