Enable the VAE hash to change when args change. Otherwise, train_dataset_with_embeddings may end up with a row count inconsistent with train_dataset_with_vae (see the sketch below).
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
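A minimal sketch of the idea, with hypothetical helper and callback names (`datasets.Dataset.map` and its `new_fingerprint` parameter are the real API): fold the arguments that affect the cached computation into one fingerprint, so a change in args rebuilds both cached datasets together and keeps their row counts aligned.

```python
import hashlib

# Hypothetical: hash the CLI arguments that affect latent/embedding
# computation into a cache fingerprint.
def args_fingerprint(args, keys=("pretrained_model_name_or_path", "resolution", "center_crop")):
    payload = "-".join(f"{k}={getattr(args, k)}" for k in keys)
    return hashlib.sha256(payload.encode()).hexdigest()

# Pass the same fingerprint to both map() calls so the two cached datasets
# are invalidated and recomputed together:
# train_dataset_with_embeddings = dataset.map(compute_embeddings, new_fingerprint=args_fingerprint(args))
# train_dataset_with_vae = dataset.map(compute_vae_latents, new_fingerprint=args_fingerprint(args))
```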
* Improve performance and make the script suitable for NPU computing
---------
Co-authored-by: θη‘ <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
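A sketch of how a script can branch on Ascend NPU availability; `is_torch_npu_available` is the real diffusers helper, the fallback chain is illustrative:

```python
import torch
from diffusers.utils import is_torch_npu_available

# Prefer the Ascend NPU when the torch_npu plugin is installed, otherwise
# fall back to CUDA or CPU.
if is_torch_npu_available():
    device = torch.device("npu")
elif torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")

# unet = unet.to(device)  # move the training model to the selected device
```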
* Fix the LR scheduler in distributed training (see the sketch below)
* Fixed the recalculation of total training steps
* Fixed lint error
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
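The usual shape of this fix in the diffusers example scripts: the scheduler is stepped once per process per optimizer step, so its warmup and total budgets must be scaled by the process count. A sketch, assuming the surrounding script's `args`, `optimizer`, and `accelerator`:

```python
from diffusers.optimization import get_scheduler

# In distributed training every process steps the scheduler, so the step
# budgets are multiplied by the number of processes.
lr_scheduler = get_scheduler(
    args.lr_scheduler,
    optimizer=optimizer,
    num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
    num_training_steps=args.max_train_steps * accelerator.num_processes,
)
```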
* Trim all trailing whitespace in the whole repo
* Remove unnecessary blank lines
* make style && make quality
* Trim trailing white space
* trim
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
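For reference, a sweep of this kind takes only a few lines; this sketch is restricted to `.py` files, while the actual cleanup covered the whole repo:

```python
from pathlib import Path

# Strip trailing spaces and tabs from every Python file under the repo root.
for path in Path(".").rglob("*.py"):
    text = path.read_text(encoding="utf-8")
    cleaned = "\n".join(line.rstrip() for line in text.splitlines()) + "\n"
    if cleaned != text:
        path.write_text(cleaned, encoding="utf-8")
```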
* Add support for _foreach operations and non-blocking to EMAModel
* default foreach to false
* add non-blocking EMA offloading to SD1.5 T2I example script
* fix whitespace
* move foreach to cli argument
* linting
* Update README.md re: EMA weight training
* correct args.foreach_ema
* add tests for foreach ema
* code quality
* add foreach to from_pretrained
* default foreach false
* fix linting
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: drhead <a@a.a>
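A sketch of the new knobs on EMAModel, assuming the `foreach` and `non_blocking` flags added in this change:

```python
from diffusers import UNet2DConditionModel
from diffusers.training_utils import EMAModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# foreach=True runs the EMA update with batched torch._foreach_* ops
# instead of a per-parameter Python loop.
ema_unet = EMAModel(
    unet.parameters(),
    model_cls=UNet2DConditionModel,
    model_config=unet.config,
    foreach=True,
)

ema_unet.step(unet.parameters())  # after each optimizer step

# Offload the EMA shadow weights without blocking the compute stream.
ema_unet.to("cpu", non_blocking=True)
```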
* Modularized the train_lora_sdxl file
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Modularized the train_lora file
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Update requirements.txt
If the datasets library is too old, it will not read metadata.jsonl, and the labels will silently default to integer class indices (see the check below).
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
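The symptom is easy to check: with a recent `datasets` release an imagefolder load picks up `metadata.jsonl`, while an old one silently ignores it (the data path below is illustrative):

```python
from datasets import load_dataset

# With an up-to-date `datasets`, the imagefolder builder reads metadata.jsonl
# next to the images; old versions ignore it and labels fall back to
# integer class indices.
dataset = load_dataset("imagefolder", data_dir="./train_data", split="train")
print(dataset[0])  # expect the metadata columns, not just an int label
```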
* 7879 - adjust documentation to use naruto dataset, since pokemon is now gated
* replace references to pokemon in docs
* more references to pokemon replaced
* Japanese translation update
---------
Co-authored-by: bghira <bghira@users.github.com>
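The replacement dataset the docs now point at can be loaded directly; the `text` column holds the BLIP captions:

```python
from datasets import load_dataset

# Ungated stand-in for the now-gated lambdalabs/pokemon-blip-captions.
dataset = load_dataset("lambdalabs/naruto-blip-captions", split="train")
print(dataset[0]["text"])
```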
* Add Ascend NPU support for SDXL fine-tuning and fix the model saving bug when using DeepSpeed.
* fix check code quality
* Decouple the NPU flash attention and make it an independent module.
* add doc and unit tests for npu flash attention.
---------
Co-authored-by: mhh001 <mahonghao1@huawei.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
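With the decoupling, the NPU flash-attention path is an ordinary attention processor that can be opted into; a sketch:

```python
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessorNPU
from diffusers.utils import is_torch_npu_available

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)

# Swap in the NPU flash-attention processor only when torch_npu is present.
if is_torch_npu_available():
    unet.set_attn_processor(AttnProcessorNPU())
```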
A new function, compute_dream_and_update_latents, has been added to the
training utilities; it implements DREAM rectified training in line with the
paper https://arxiv.org/abs/2312.00210. It can be enabled with an extra
argument to the train_text_to_image.py script, as sketched below.
Co-authored-by: Jimmy <39@🇺🇸.com>
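A sketch of the call inside the training loop; `unet`, `noise_scheduler`, `timesteps`, `noise`, `noisy_latents`, `target`, and `encoder_hidden_states` are the surrounding script's loop variables:

```python
import torch.nn.functional as F

from diffusers.training_utils import compute_dream_and_update_latents

# Rectify the noisy latents and the regression target with the model's own
# prediction before computing the loss.
noisy_latents, target = compute_dream_and_update_latents(
    unet,
    noise_scheduler,
    timesteps,
    noise,
    noisy_latents,
    target,
    encoder_hidden_states,
    args.dream_detail_preservation,
)
model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
```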
* Restore unet params back to the training weights once the validation call finishes
* empty commit
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
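The pattern this restores, using EMAModel's store/copy_to/restore API; the validation pass itself is a stand-in:

```python
# Swap the EMA weights in for validation, then put the training weights
# back so optimization resumes from the correct state.
ema_unet.store(unet.parameters())
ema_unet.copy_to(unet.parameters())

# ... run the validation / logging pass here ...

ema_unet.restore(unet.parameters())
```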
* 7529 do not disable autocast for cuda devices
* Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
* add autocast fix to other training examples
* disable native_amp for dreambooth (sdxl)
* disable native_amp for pix2pix (sdxl)
* remove tests from remaining files
* disable native_amp on huggingface accelerator for every training example that uses it
* convert more usages of autocast to nullcontext, make style fixes
* make style fixes
* style.
* Empty-Commit
---------
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
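The recurring change across the examples, sketched: keep a real `torch.autocast` where it is supported and substitute a `contextlib.nullcontext` elsewhere, instead of typecasting around a disabled autocast (`accelerator` and `pipeline` stand in for the script's objects):

```python
import contextlib

import torch

# MPS has no working autocast here, so fall back to a no-op context.
autocast_ctx = (
    contextlib.nullcontext()
    if torch.backends.mps.is_available()
    else torch.autocast(accelerator.device.type)
)

with autocast_ctx:
    images = pipeline(prompt, num_inference_steps=25).images
```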
* apple mps: training support for SDXL LoRA
* sdxl: support training lora, dreambooth, t2i, pix2pix, and controlnet on apple mps
---------
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
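A conservative sketch of device and dtype selection on Apple silicon, treating fp32 as the safe default on MPS (exact mixed-precision support depends on the torch version):

```python
import torch

if torch.backends.mps.is_available():
    device, weight_dtype = torch.device("mps"), torch.float32
elif torch.cuda.is_available():
    device, weight_dtype = torch.device("cuda"), torch.float16
else:
    device, weight_dtype = torch.device("cpu"), torch.float32
```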
* log loss per image
* add commandline param for per image loss logging
* style
* debug-loss -> debug_loss
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
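The mechanics of the flag, sketched with the training loop's `model_pred`, `target`, and `batch`; the `filenames` key is what the dataloader would have to supply:

```python
import torch.nn.functional as F

# With --debug_loss, compute an unreduced loss so each image's value can be
# logged under its filename before reducing to the batch mean.
loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
loss = loss.mean(dim=list(range(1, len(loss.shape))))  # one value per image
if args.debug_loss and "filenames" in batch:
    for fname, val in zip(batch["filenames"], loss):
        accelerator.log({"loss_for_" + fname: val.item()})
loss = loss.mean()
```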
* add tags for diffusers training
* add dora tags for dreambooth lora scripts
* style
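Illustratively, tags of this kind end up on the generated model card; a sketch with huggingface_hub's ModelCard API, where `repo_id` is hypothetical:

```python
from huggingface_hub import ModelCard

repo_id = "user/my-dreambooth-lora"  # hypothetical
card = ModelCard.load(repo_id)
card.data.tags = (card.data.tags or []) + [
    "diffusers-training",
    "diffusers",
    "lora",
    "dreambooth",
]
card.push_to_hub(repo_id)
```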