diff --git a/README.md b/README.md
index b130d686b3..2b13222662 100644
--- a/README.md
+++ b/README.md
@@ -39,9 +39,7 @@ In order to get started, we recommend taking a look at two notebooks:
 
 Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/). It's trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) database. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and runs on a GPU with at least 10GB VRAM. See the [model card](https://huggingface.co/CompVis/stable-diffusion) for more information.
 
-**The Stable Diffusion weights are currently only available to universities, academics, research institutions and independent researchers. Please request access applying to this form**
-
-Please run `pip install diffusers transformers` for the example below to work, since the current `main` git branch is not compatible with the checkpoint yet.
+You need to accept the model license before downloading or using the Stable Diffusion weights. Please visit the [model card](https://huggingface.co/CompVis/stable-diffusion-v1-3), read the license and tick the checkbox if you agree. You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need to use an access token for the code to work. For more information on access tokens, please refer to [this section](https://huggingface.co/docs/hub/security-tokens) of the documentation.
 
 ```py
 # make sure you're logged in with `huggingface-cli login`
@@ -55,10 +53,10 @@ lms = LMSDiscreteScheduler(
 )
 
 pipe = StableDiffusionPipeline.from_pretrained(
-    "CompVis/stable-diffusion-v1-3-diffusers",
+    "CompVis/stable-diffusion-v1-3",
     scheduler=lms,
     use_auth_token=True
-)
+).to("cuda")
 
 prompt = "a photo of an astronaut riding a horse on mars"
 with autocast("cuda"):
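
Since the second hunk cuts off inside the `with autocast("cuda"):` block, here is a sketch of how the updated README example reads end to end. The scheduler arguments and the final `image = pipe(...)` / `image.save(...)` lines are assumptions based on the diffusers API of that era, not part of this diff:

```py
# minimal end-to-end sketch of the updated example
# (scheduler arguments and the last two lines are assumptions, not part of the diff)
# make sure you're logged in with `huggingface-cli login`
from torch import autocast
from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler

lms = LMSDiscreteScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear"
)

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-3",
    scheduler=lms,
    use_auth_token=True  # requires an access token from the Hugging Face Hub
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    # the ["sample"][0] return format matches diffusers ~0.2.x; later versions use .images[0]
    image = pipe(prompt)["sample"][0]

image.save("astronaut_rides_horse.png")
```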