# add openvino and onnx runtime SD XL documentation (#4285)
* add openvino SD XL documentation
* add onnx SD XL integration
* rephrase
* update doc
* add images
* update model
Install 🤗 Optimum with the following command for ONNX Runtime support:

```
pip install optimum["onnxruntime"]
```

## Stable Diffusion

### Inference

To load an ONNX model and run inference with ONNX Runtime, you need to replace [`StableDiffusionPipeline`] with `ORTStableDiffusionPipeline`. If you want to load a PyTorch model and convert it to the ONNX format on-the-fly, set `export=True`.

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
pipeline.save_pretrained("./onnx-stable-diffusion-v1-5")
```

If you want to export the pipeline in the ONNX format offline and later use it for inference, you can export it ahead of time with the Optimum CLI, as sketched below.
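
A minimal sketch of the offline export, assuming the same `optimum-cli export onnx` command shown for SD XL later in this document; without a `--task` flag the CLI infers the task from the model, and the output directory here matches the `model_id` used in the inference snippet that follows:

```bash
optimum-cli export onnx --model runwayml/stable-diffusion-v1-5 sd_v15_onnx/
```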
Then perform inference:

```python
from optimum.onnxruntime import ORTStableDiffusionPipeline

model_id = "sd_v15_onnx"
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id)
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```

Notice that we didn't have to specify `export=True` above.

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/onnxruntime/stable_diffusion_v1_5_ort_sail_boat.png">
</div>

You can find more examples in the [Optimum documentation](https://huggingface.co/docs/optimum/).

### Supported tasks

| Task | Loading Class |
|--------------------------------------|--------------------------------------|
| `text-to-image`                      | `ORTStableDiffusionPipeline`         |
| `image-to-image`                     | `ORTStableDiffusionImg2ImgPipeline`  |
| `inpaint` | `ORTStableDiffusionInpaintPipeline` |
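
As an illustration of the `image-to-image` task, here is a minimal sketch using `ORTStableDiffusionImg2ImgPipeline`. It assumes the class mirrors the diffusers img2img call signature (an `image` argument alongside the prompt); the input image URL is a placeholder:

```python
from optimum.onnxruntime import ORTStableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

# Export the PyTorch weights to ONNX on the fly, as in the text-to-image example
pipeline = ORTStableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", export=True
)

# Any RGB starting image works; this URL is a placeholder
init_image = load_image(
    "https://huggingface.co/datasets/diffusers/test-arrays/resolve/main/stable_diffusion_img2img/sketch-mountains-input.jpg"
).resize((768, 512))
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt=prompt, image=init_image, strength=0.75).images[0]
```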

## Stable Diffusion XL

### Export

To export your model to ONNX, you can use the [Optimum CLI](https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli) as follows:
```bash
optimum-cli export onnx --model stabilityai/stable-diffusion-xl-base-1.0 --task stable-diffusion-xl sd_xl_onnx/
```

### Inference

To load an ONNX model and run inference with ONNX Runtime, you need to replace [`StableDiffusionXLPipeline`] with `ORTStableDiffusionXLPipeline`:

```python
from optimum.onnxruntime import ORTStableDiffusionXLPipeline

pipeline = ORTStableDiffusionXLPipeline.from_pretrained("sd_xl_onnx")
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```

### Supported tasks

| Task | Loading Class |
|--------------------------------------|---------------------------------------|
| `text-to-image`                      | `ORTStableDiffusionXLPipeline`        |
| `image-to-image`                     | `ORTStableDiffusionXLImg2ImgPipeline` |

## Known Issues
- Generating multiple prompts in a batch seems to take too much memory. While we look into it, you may need to iterate instead of batching, as in the sketch below.
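
A minimal sketch of the workaround, reusing the `pipeline` object from the inference examples above: each prompt gets its own forward pass instead of being passed as one batched list.

```python
prompts = [
    "sailing ship in storm by Leonardo da Vinci",
    "a photo of an astronaut riding a horse on mars",
]

# One call per prompt rather than pipeline(prompts) in a single batch
images = [pipeline(prompt).images[0] for prompt in prompts]
```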
# How to use OpenVINO for inference
🤗 [Optimum](https://github.com/huggingface/optimum-intel) provides Stable Diffusion pipelines compatible with OpenVINO. You can now easily perform inference with OpenVINO Runtime on a variety of Intel processors (see the [full list](https://docs.openvino.ai/latest/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html) of supported devices).

## Installation

Install 🤗 Optimum Intel with the following command:

```
pip install --upgrade-strategy eager optimum["openvino"]
```
The `--upgrade-strategy eager` option is needed to ensure [`optimum-intel`](https://github.com/huggingface/optimum-intel) is upgraded to its latest version.
## Stable Diffusion

### Inference

To load an OpenVINO model and run inference with OpenVINO Runtime, you need to replace `StableDiffusionPipeline` with `OVStableDiffusionPipeline`. If you want to load a PyTorch model and convert it to the OpenVINO format on-the-fly, set `export=True`.
```python
from optimum.intel import OVStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]

# Don't forget to save the exported model
pipeline.save_pretrained("openvino-sd-v1-5")
```

To further speed up inference, the model can be statically reshaped:

```python
# Define the shapes related to the inputs and desired outputs
batch_size, num_images, height, width = 1, 1, 512, 512

# Statically reshape the model
pipeline.reshape(batch_size, height, width, num_images)
# Compile the model before inference
pipeline.compile()

image = pipeline(
    prompt,
    height=height,
    width=width,
    num_images_per_prompt=num_images,
).images[0]
```

If you want to change any parameters such as the output height or width, you'll need to statically reshape your model once again.

<div class="flex justify-center">
<img src="https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/openvino/stable_diffusion_v1_5_sail_boat_rembrandt.png">
</div>

### Supported tasks

| Task | Loading Class |
|--------------------------------------|--------------------------------------|
| `text-to-image`                      | `OVStableDiffusionPipeline`          |
| `image-to-image`                     | `OVStableDiffusionImg2ImgPipeline`   |
| `inpaint` | `OVStableDiffusionInpaintPipeline` |
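
As an illustration of the `inpaint` task, here is a minimal sketch using `OVStableDiffusionInpaintPipeline`. It assumes the class follows the diffusers inpainting call signature (`image` plus `mask_image`); the checkpoint, image, and mask URLs are placeholders:

```python
from optimum.intel import OVStableDiffusionInpaintPipeline
from diffusers.utils import load_image

# Convert a PyTorch inpainting checkpoint to the OpenVINO format on the fly
pipeline = OVStableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", export=True
)

# Placeholder image/mask pair; white mask pixels are the regions to repaint
init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
)
mask_image = load_image(
    "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
)
prompt = "a red couch in a living room"
image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
```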

You can find more examples in the optimum [documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion).

## Stable Diffusion XL

### Inference

```python
from optimum.intel import OVStableDiffusionXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipeline = OVStableDiffusionXLPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]
```

To further speed up inference, the model can be statically reshaped as shown above; a sketch adapted for SD XL follows.
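
A minimal sketch, assuming `OVStableDiffusionXLPipeline` exposes the same `reshape` and `compile` methods demonstrated for Stable Diffusion above; the 1024x1024 shapes reflect SD XL's native resolution and are an assumption you can adjust:

```python
# SD XL is typically run at 1024x1024
batch_size, num_images, height, width = 1, 1, 1024, 1024

# Statically reshape, then compile, mirroring the Stable Diffusion example
pipeline.reshape(batch_size, height, width, num_images)
pipeline.compile()

image = pipeline(
    prompt,
    height=height,
    width=width,
    num_images_per_prompt=num_images,
).images[0]
```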
You can find more examples in the optimum [documentation](https://huggingface.co/docs/optimum/intel/inference#stable-diffusion-xl).

### Supported tasks

| Task | Loading Class |
|--------------------------------------|--------------------------------------|
| `text-to-image`                      | `OVStableDiffusionXLPipeline`        |
| `image-to-image`                     | `OVStableDiffusionXLImg2ImgPipeline` |