Added distillation for quantization example on textual inversion. (#2760)
* Added distillation for quantization example on textual inversion.
* Refined readme and code style.
* Updated text2images.py.
* Refined code of model load and added compatibility check.
* Fixed code style.
* Fixed C403: unnecessary `list` comprehension (rewritten as a `set` comprehension).

Signed-off-by: Ye, Xinyu <xinyu.ye@intel.com>

@@ -0,0 +1,93 @@
# Distillation for quantization on Textual Inversion models to personalize text2image

[Textual inversion](https://arxiv.org/abs/2208.01618) is a method to personalize text2image models like Stable Diffusion on your own images. _By using just 3-5 images, new concepts can be taught to Stable Diffusion and the model personalized on your own images._
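
Concretely, the method learns a new embedding vector for a placeholder token (here `<dicoo>`) while the rest of the network stays frozen. Below is a minimal sketch of that setup with the `transformers` API, using the placeholder/initializer pair from this example; it illustrates the idea rather than reproducing `textual_inversion.py` exactly:

```python
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "CompVis/stable-diffusion-v1-4"
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

# Register the placeholder token and grow the embedding matrix by one row.
tokenizer.add_tokens("<dicoo>")
text_encoder.resize_token_embeddings(len(tokenizer))

# Start the new concept from the embedding of the initializer token ("toy").
embeds = text_encoder.get_input_embeddings().weight.data
placeholder_id = tokenizer.convert_tokens_to_ids("<dicoo>")
initializer_id = tokenizer.encode("toy", add_special_tokens=False)[0]
embeds[placeholder_id] = embeds[initializer_id]
# Training then optimizes only this single embedding row.
```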

The `textual_inversion.py` script shows how to implement the training procedure and adapt it for Stable Diffusion.

We have enabled distillation for quantization in `textual_inversion.py`, which performs quantization-aware training together with distillation on the model produced by the Textual Inversion method.

## Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies:

```bash
pip install -r requirements.txt
```

## Prepare Datasets

Download one picture from the Hugging Face dataset [sd-concepts-library/dicoo2](https://huggingface.co/sd-concepts-library/dicoo2) and save it to the `./dicoo` directory. The picture is shown below:

<a href="https://huggingface.co/sd-concepts-library/dicoo2/blob/main/concept_images/1.jpeg">
    <img src="https://huggingface.co/sd-concepts-library/dicoo2/resolve/main/concept_images/1.jpeg" width="300" height="300">
</a>
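
If you prefer to script this step, here is a minimal sketch using `huggingface_hub`; the destination filename is an assumption:

```python
import os
import shutil

from huggingface_hub import hf_hub_download

# Fetch the concept image from the dicoo2 repository on the Hub.
image_path = hf_hub_download(
    repo_id="sd-concepts-library/dicoo2",
    filename="concept_images/1.jpeg",
)

# Copy it into the ./dicoo directory expected by the training commands below.
os.makedirs("dicoo", exist_ok=True)
shutil.copy(image_path, os.path.join("dicoo", "1.jpeg"))
```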

## Get an FP32 Textual Inversion model

Use the following command to fine-tune the Stable Diffusion model on the above dataset to obtain the FP32 Textual Inversion model.

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export DATA_DIR="./dicoo"

accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_data_dir=$DATA_DIR \
  --learnable_property="object" \
  --placeholder_token="<dicoo>" --initializer_token="toy" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=3000 \
  --learning_rate=5.0e-04 --scale_lr \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --output_dir="dicoo_model"
```

## Do distillation for quantization

Distillation for quantization is a method that combines [intermediate layer knowledge distillation](https://github.com/intel/neural-compressor/blob/master/docs/source/distillation.md#intermediate-layer-knowledge-distillation) and [quantization aware training](https://github.com/intel/neural-compressor/blob/master/docs/source/quantization.md#quantization-aware-training) in the same training process to improve the performance of the quantized model. Given an FP32 model, the distillation for quantization approach takes the model itself as the teacher and transfers the knowledge of the specified layers to the student model, i.e. the quantized version of the FP32 model, during the quantization-aware training process.
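
Conceptually, the training loop pairs the frozen FP32 teacher with a fake-quantized student and blends an intermediate-layer distillation term into the task loss. The following is a rough sketch of such an orchestrated loop using neural-compressor's training API; the toy model, dataloader, and layer-mapping format are assumptions for illustration, not the exact code in `textual_inversion.py` — consult the neural-compressor documentation for the precise API:

```python
import copy

import torch
from neural_compressor.config import (
    DistillationConfig,
    IntermediateLayersKnowledgeDistillationLossConfig,
    QuantizationAwareTrainingConfig,
)
from neural_compressor.training import prepare_compression

# Toy stand-ins: in the real script these are the UNet and the image dataloader.
model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU(), torch.nn.Linear(8, 8))
train_dataloader = [(torch.randn(4, 8), torch.randn(4, 8)) for _ in range(10)]
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)

# The FP32 model serves as its own teacher.
teacher_model = copy.deepcopy(model)

# Distill the output of a chosen intermediate layer ("0" = first Linear here;
# the real example maps selected UNet layers instead).
criterion = IntermediateLayersKnowledgeDistillationLossConfig(layer_mappings=[["0"]])
d_conf = DistillationConfig(teacher_model=teacher_model, criterion=criterion)
q_conf = QuantizationAwareTrainingConfig()

# Orchestrate distillation and quantization-aware training in one pass.
compression_manager = prepare_compression(model, [d_conf, q_conf])
compression_manager.callbacks.on_train_begin()
model = compression_manager.model

for inputs, targets in train_dataloader:
    output = model(inputs)
    loss = torch.nn.functional.mse_loss(output, targets)
    # Blend the task loss with the distillation loss on the mapped layers.
    loss = compression_manager.callbacks.on_after_compute_loss(inputs, output, loss)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

compression_manager.callbacks.on_train_end()
compression_manager.save("int8_model")
```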

Once you have the FP32 Textual Inversion model, the following command takes it as input, performs distillation for quantization, and generates the INT8 Textual Inversion model.

```bash
export FP32_MODEL_NAME="./dicoo_model"
export DATA_DIR="./dicoo"

accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path=$FP32_MODEL_NAME \
  --train_data_dir=$DATA_DIR \
  --use_ema --learnable_property="object" \
  --placeholder_token="<dicoo>" --initializer_token="toy" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --max_train_steps=300 \
  --learning_rate=5.0e-04 --max_grad_norm=3 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --output_dir="int8_model" \
  --do_quantization --do_distillation --verify_loading
```

After the distillation for quantization process, the quantized UNet is about 4 times smaller (3279 MB -> 827 MB).
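
To verify the size reduction yourself, here is a quick sketch that compares the FP32 UNet weights with the saved INT8 checkpoint; the paths assume the output directories used in the commands above:

```python
import os


def dir_size_mb(path):
    """Total size of all files under `path`, in megabytes."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / (1024 * 1024)


# FP32 UNet weights saved by the first command, INT8 checkpoint by the second.
print(f"FP32 UNet: {dir_size_mb('dicoo_model/unet'):.0f} MB")
print(f"INT8 UNet: {os.path.getsize('int8_model/best_model.pt') / (1024 * 1024):.0f} MB")
```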

## Inference

Once you have trained an INT8 model with the above command, inference can be done simply using the `text2images.py` script. Make sure to include the `placeholder_token` in your prompt.

```bash
export INT8_MODEL_NAME="./int8_model"

python text2images.py \
  --pretrained_model_name_or_path=$INT8_MODEL_NAME \
  --caption "a lovely <dicoo> in red dress and hat, in the snowy and bright night, with many brightly lit buildings." \
  --images_num 4
```

Here is a comparison of images generated by the FP32 model (left) and the INT8 model (right):

<p float="left">
    <img src="https://huggingface.co/datasets/Intel/textual_inversion_dicoo_dfq/resolve/main/FP32.png" width="300" height="300" alt="FP32" align=center />
    <img src="https://huggingface.co/datasets/Intel/textual_inversion_dicoo_dfq/resolve/main/INT8.png" width="300" height="300" alt="INT8" align=center />
</p>

@@ -0,0 +1,7 @@
accelerate
torchvision
transformers>=4.25.0
ftfy
tensorboard
modelcards
neural-compressor

@@ -0,0 +1,112 @@
import argparse
import math
import os

import torch
from neural_compressor.utils.pytorch import load
from PIL import Image
from transformers import CLIPTextModel, CLIPTokenizer

from diffusers import AutoencoderKL, StableDiffusionPipeline, UNet2DConditionModel


def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-m",
        "--pretrained_model_name_or_path",
        type=str,
        default=None,
        required=True,
        help="Path to pretrained model or model identifier from huggingface.co/models.",
    )
    parser.add_argument(
        "-c",
        "--caption",
        type=str,
        default="robotic cat with wings",
        help="Text used to generate images.",
    )
    parser.add_argument(
        "-n",
        "--images_num",
        type=int,
        default=4,
        help="How many images to generate.",
    )
    parser.add_argument(
        "-s",
        "--seed",
        type=int,
        default=42,
        help="Seed for the random generator.",
    )
    parser.add_argument(
        "-ci",
        "--cuda_id",
        type=int,
        default=0,
        help="Id of the CUDA device to use.",
    )
    args = parser.parse_args()
    return args


def image_grid(imgs, rows, cols):
    # The grid must have exactly one cell per image.
    if len(imgs) != rows * cols:
        raise ValueError("The specified number of rows and columns are not correct.")

    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))

    # Paste each image into its cell, filling the grid row by row.
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid


def generate_images(
    pipeline,
    prompt="robotic cat with wings",
    guidance_scale=7.5,
    num_inference_steps=50,
    num_images_per_prompt=1,
    seed=42,
):
    # Seed the generator so that runs are reproducible.
    generator = torch.Generator(pipeline.device).manual_seed(seed)
    images = pipeline(
        prompt,
        guidance_scale=guidance_scale,
        num_inference_steps=num_inference_steps,
        generator=generator,
        num_images_per_prompt=num_images_per_prompt,
    ).images
    # Arrange the images in a grid that is as close to square as possible;
    # num_images_per_prompt must factor into rows * cols exactly.
    _rows = int(math.sqrt(num_images_per_prompt))
    grid = image_grid(images, rows=_rows, cols=num_images_per_prompt // _rows)
    return grid, images


args = parse_args()
# Load models and create wrapper for stable diffusion
tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet")

pipeline = StableDiffusionPipeline.from_pretrained(
    args.pretrained_model_name_or_path, text_encoder=text_encoder, vae=vae, unet=unet, tokenizer=tokenizer
)
# Disable the safety checker; this stub returns the images unchanged.
pipeline.safety_checker = lambda images, clip_input: (images, False)
if os.path.exists(os.path.join(args.pretrained_model_name_or_path, "best_model.pt")):
    # An INT8 checkpoint produced by neural-compressor is present:
    # restore the quantized UNet and run inference with it.
    unet = load(args.pretrained_model_name_or_path, model=unet)
    unet.eval()
    pipeline.unet = unet
else:
    # No quantized checkpoint: run the FP32 pipeline on the selected GPU.
    unet = unet.to(torch.device("cuda", args.cuda_id))
    pipeline = pipeline.to(unet.device)
grid, images = generate_images(pipeline, prompt=args.caption, num_images_per_prompt=args.images_num, seed=args.seed)
# Save the grid of all images, plus each image individually.
grid.save(os.path.join(args.pretrained_model_name_or_path, "{}.png".format("_".join(args.caption.split()))))
dirname = os.path.join(args.pretrained_model_name_or_path, "_".join(args.caption.split()))
os.makedirs(dirname, exist_ok=True)
for idx, image in enumerate(images):
    image.save(os.path.join(dirname, "{}.png".format(idx + 1)))