From f1ab955f64133fdf33ed310ea400331d06d63b28 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?M=2E=20Tolga=20Cang=C3=B6z?= <46008593+standardAI@users.noreply.github.com>
Date: Fri, 10 Mar 2023 16:19:12 +0300
Subject: [PATCH] Update basic_training.mdx (#2639)

Add 'import os'
---
 docs/source/en/tutorials/basic_training.mdx | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/source/en/tutorials/basic_training.mdx b/docs/source/en/tutorials/basic_training.mdx
index 1e91f81429..435de38d83 100644
--- a/docs/source/en/tutorials/basic_training.mdx
+++ b/docs/source/en/tutorials/basic_training.mdx
@@ -252,6 +252,7 @@ Then, you'll need a way to evaluate the model. For evaluation, you can use the [
 ```py
 >>> from diffusers import DDPMPipeline
 >>> import math
+>>> import os
 
 
 >>> def make_grid(images, rows, cols):
@@ -411,4 +412,4 @@ Unconditional image generation is one example of a task that can be trained. You
 * [Textual Inversion](./training/text_inversion), an algorithm that teaches a model a specific visual concept and integrates it into the generated image.
 * [DreamBooth](./training/dreambooth), a technique for generating personalized images of a subject given several input images of the subject.
 * [Guide](./training/text2image) to finetuning a Stable Diffusion model on your own dataset.
-* [Guide](./training/lora) to using LoRA, a memory-efficient technique for finetuning really large models faster.
\ No newline at end of file
+* [Guide](./training/lora) to using LoRA, a memory-efficient technique for finetuning really large models faster.
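For context on why the import matters: the tutorial's evaluation step writes sample image grids to disk, which is where `os` is used; without the import, that code fails with a `NameError` at runtime. Below is a minimal sketch of that usage pattern. The `save_image_grid` helper, its signature, and the placeholder output directory are illustrative assumptions, not the tutorial's verbatim code; only the `os.path.join`/`os.makedirs` calls reflect the dependency this patch addresses.

```py
import os

from PIL import Image  # the tutorial's make_grid returns a PIL image


def save_image_grid(image_grid: Image.Image, output_dir: str, epoch: int) -> None:
    """Hypothetical helper mirroring how an evaluation step persists samples."""
    test_dir = os.path.join(output_dir, "samples")
    # Without `import os`, the two calls above/below raise NameError here.
    os.makedirs(test_dir, exist_ok=True)
    image_grid.save(f"{test_dir}/{epoch:04d}.png")


# Example usage with a placeholder 64x64 image standing in for a real grid:
save_image_grid(Image.new("RGB", (64, 64)), "ddpm-output", epoch=0)
```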