mirror of
https://github.com/huggingface/diffusers.git
synced 2026-01-27 17:22:53 +03:00
Added accelerator based gradient accumulation for basic_example (#8966)
added accelerator based gradient accumulation for basic_example

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
This commit is contained in:
@@ -340,8 +340,8 @@ Now you can wrap all these components together in a training loop with 🤗 Acce
 ...         loss = F.mse_loss(noise_pred, noise)
 ...         accelerator.backward(loss)

-...         if (step + 1) % config.gradient_accumulation_steps == 0:
-...             accelerator.clip_grad_norm_(model.parameters(), 1.0)
+...         if accelerator.sync_gradients:
+...             accelerator.clip_grad_norm_(model.parameters(), 1.0)
 ...         optimizer.step()
 ...         lr_scheduler.step()
 ...         optimizer.zero_grad()
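The diff replaces a manual step-count check with Accelerate's `sync_gradients` flag before gradient clipping. One practical difference is behavior at the end of an epoch: Accelerate also marks the final batch of a dataloader as a sync point, so gradients accumulated on a trailing partial group are not silently dropped. The sketch below simulates that scheduling in plain Python (no torch/accelerate required); the batch count and accumulation factor are illustrative values, not taken from the tutorial.

```python
# Minimal sketch (assumption-laden): simulates which micro-batch indices
# Accelerate treats as gradient-sync points under accumulation, i.e. when
# `accelerator.sync_gradients` would be True in the diffed training loop.

def sync_points(num_batches, gradient_accumulation_steps):
    """Return the batch indices at which gradients would be synchronized.

    Accelerate syncs on every `gradient_accumulation_steps`-th micro-batch
    and additionally on the last batch of the dataloader, so epochs whose
    length is not a multiple of the accumulation factor still flush their
    trailing gradients.
    """
    points = []
    for step in range(num_batches):
        is_last = step == num_batches - 1
        if (step + 1) % gradient_accumulation_steps == 0 or is_last:
            points.append(step)
    return points

# With 10 batches and an accumulation factor of 4, the old manual check
# `(step + 1) % 4 == 0` fires only at steps 3 and 7, skipping the last
# two batches; the sync-aware schedule also fires at the final step 9.
manual = [s for s in range(10) if (s + 1) % 4 == 0]
aware = sync_points(10, 4)
print(manual)  # [3, 7]
print(aware)   # [3, 7, 9]
```

This is why checking `sync_gradients` is the more robust pattern: the step arithmetic stays inside Accelerate, and clipping runs exactly once per real optimizer update.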