Training examples

Unconditional Flowers

The command to train a DDPM UNet model on the Oxford Flowers dataset:

accelerate launch train_unconditional.py \
  --dataset="huggan/flowers-102-categories" \
  --resolution=64 \
  --output_dir="ddpm-ema-flowers-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision=no \
  --push_to_hub

A full training run takes 2 hours on 4xV100 GPUs.
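Once training finishes, the checkpoint saved to --output_dir can be loaded back for sampling. The snippet below is a minimal sketch, assuming a recent diffusers release where the pipeline call returns PIL images under .images (older releases exposed a "sample" key instead):

from diffusers import DDPMPipeline

# Load the checkpoint written to --output_dir by the training script.
pipeline = DDPMPipeline.from_pretrained("ddpm-ema-flowers-64")

# Sample a single 64x64 image with the default DDPM schedule.
image = pipeline(batch_size=1).images[0]
image.save("flowers_sample.png")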

Unconditional Pokemon

The command to train a DDPM UNet model on the Pokemon dataset:

accelerate launch train_unconditional.py \
  --dataset="huggan/pokemon" \
  --resolution=64 \
  --output_dir="ddpm-ema-pokemon-64" \
  --train_batch_size=16 \
  --num_epochs=100 \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-4 \
  --lr_warmup_steps=500 \
  --mixed_precision=no \
  --push_to_hub

A full training run takes 2 hours on 4xV100 GPUs.
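Because --push_to_hub is set, the trained pipeline is also uploaded to the Hugging Face Hub and can be loaded by repo id instead of a local path. A short sketch, where "your-username/ddpm-ema-pokemon-64" is a placeholder for the repo created under your account:

from diffusers import DDPMPipeline

# Substitute the repo id that --push_to_hub created for your account.
pipeline = DDPMPipeline.from_pretrained("your-username/ddpm-ema-pokemon-64")
image = pipeline(batch_size=1).images[0]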