From 92e5ddd2959c734c1e24e31414c8b685aafaf583 Mon Sep 17 00:00:00 2001
From: Ella Charlaix <80481427+echarlaix@users.noreply.github.com>
Date: Thu, 27 Jul 2023 18:01:58 +0200
Subject: [PATCH] Fix typo documentation (#4320)

fix typo documentation
---
 docs/source/en/optimization/onnx.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/source/en/optimization/onnx.md b/docs/source/en/optimization/onnx.md
index 89ea435217..1eefc116cb 100644
--- a/docs/source/en/optimization/onnx.md
+++ b/docs/source/en/optimization/onnx.md
@@ -11,7 +11,7 @@ specific language governing permissions and limitations under the License.
 
 -->
 
-# How to use the ONNX Runtime for inference
+# How to use ONNX Runtime for inference
 
 🤗 [Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with ONNX Runtime.
 
@@ -27,7 +27,7 @@ pip install optimum["onnxruntime"]
 
 ### Inference
 
-To load an ONNX model and run inference with the ONNX Runtime, you need to replace [`StableDiffusionPipeline`] with `ORTStableDiffusionPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.
+To load an ONNX model and run inference with ONNX Runtime, you need to replace [`StableDiffusionPipeline`] with `ORTStableDiffusionPipeline`. In case you want to load a PyTorch model and convert it to the ONNX format on-the-fly, you can set `export=True`.
 
 ```python
 from optimum.onnxruntime import ORTStableDiffusionPipeline
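
For reference, a minimal sketch of the usage the patched paragraph describes: swapping `StableDiffusionPipeline` for `ORTStableDiffusionPipeline` and passing `export=True` to convert a PyTorch checkpoint to ONNX on the fly. It assumes Optimum is installed with the ONNX Runtime extras (`pip install optimum["onnxruntime"]`); the model id and prompt below are illustrative and not taken from the patch.

```python
# Minimal sketch of the usage described in the patched docs.
# Assumes: pip install optimum["onnxruntime"]
from optimum.onnxruntime import ORTStableDiffusionPipeline

# Illustrative model id (an assumption, not taken from the patch).
model_id = "runwayml/stable-diffusion-v1-5"

# export=True converts the PyTorch checkpoint to ONNX on the fly;
# for a checkpoint already in ONNX format, export=True can be dropped.
pipeline = ORTStableDiffusionPipeline.from_pretrained(model_id, export=True)

# Run inference with ONNX Runtime; the call returns an object whose
# .images attribute holds the generated PIL images.
prompt = "sailing ship in storm by Leonardo da Vinci"
image = pipeline(prompt).images[0]
```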