* imports and readme bug fixes
* bug fix - ensures text_encoder params are dtype==float32 (when using pivotal tuning) even if the rest of the model is loaded in fp16
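A minimal sketch of what this means in practice; the helper name and the assumption that only the trainable text-encoder params get upcast are mine, not the exact diff:

```python
import torch

def upcast_trainable_text_encoder_params(text_encoders, weight_dtype=torch.float16):
    """Keep trainable text-encoder params in float32 even when everything else runs in fp16."""
    for text_encoder in text_encoders:
        # frozen weights can stay in the low-precision dtype to save memory
        text_encoder.to(dtype=weight_dtype)
        for param in text_encoder.parameters():
            if param.requires_grad:
                # trainable params (inserted embeddings / LoRA layers) must be fp32,
                # otherwise optimizer updates degrade or overflow
                param.data = param.data.to(torch.float32)
```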
* added pivotal tuning to readme
* map the token abstraction identifier to the newly inserted token(s) in the validation prompt (if one is used)
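A hypothetical sketch of that mapping; the dict and variable names are illustrative, not the script's exact ones:

```python
def map_validation_prompt(prompt, token_abstraction_dict):
    """Replace each token abstraction (e.g. "TOK") with the tokens that were
    actually inserted into the tokenizer (e.g. "<s0><s1>")."""
    for abstraction, inserted_tokens in token_abstraction_dict.items():
        prompt = prompt.replace(abstraction, "".join(inserted_tokens))
    return prompt


# illustrative values only
print(map_validation_prompt("a photo of TOK at the beach", {"TOK": ["<s0>", "<s1>"]}))
# -> "a photo of <s0><s1> at the beach"
```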
* correct default value of --train_text_encoder_frac
* change default value of --adam_weight_decay_text_encoder
* bug fix for validation prompt generation when using pivotal tuning
* style fix
* textual inversion embeddings name change
* style fix
* bug fix for stopping text encoder optimization halfway through training
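One plausible shape of that logic, assuming it keys off --train_text_encoder_frac and a num_train_epochs-style argument (both assumptions here, not the exact diff):

```python
def maybe_stop_text_encoder_training(epoch, args, text_encoder_params):
    """Freeze the text-encoder params once the configured fraction of training has passed."""
    stop_epoch = int(args.train_text_encoder_frac * args.num_train_epochs)
    if epoch >= stop_epoch:
        for param in text_encoder_params:
            param.requires_grad_(False)
```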
* readme - now includes the token abstraction and the newly inserted tokens when using pivotal tuning
- added type to --num_new_tokens_per_abstraction
* style fix
---------
Co-authored-by: Linoy Tsaban <linoy@huggingface.co>
* make `requires_safety_checker` a kwarg instead of a positional argument as it's more future-proof
* apply `make style` formatting edits
* add image_encoder to arguments and pass to super constructor
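Roughly the constructor shape these two changes point at, assuming a parent pipeline (e.g. recent `StableDiffusionPipeline` versions) that already accepts `image_encoder` and `requires_safety_checker` as keywords; the component list follows the usual layout and is not the exact diff:

```python
from diffusers import StableDiffusionPipeline


class MyPipeline(StableDiffusionPipeline):
    def __init__(
        self,
        vae,
        text_encoder,
        tokenizer,
        unet,
        scheduler,
        safety_checker,
        feature_extractor,
        image_encoder=None,                    # newly accepted and forwarded component
        requires_safety_checker: bool = True,  # keyword argument rather than positional
    ):
        super().__init__(
            vae=vae,
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            unet=unet,
            scheduler=scheduler,
            safety_checker=safety_checker,
            feature_extractor=feature_extractor,
            image_encoder=image_encoder,
            requires_safety_checker=requires_safety_checker,
        )
```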
* fix a bug in MultiAdapter for inpainting
* treat adapter_input as a list when a MultiAdapter is used
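A hypothetical sketch of the distinction; the preprocessing helper and surrounding variables are assumptions:

```python
from diffusers import MultiAdapter


def prepare_adapter_input(adapter, adapter_image, preprocess, device, dtype):
    """With MultiAdapter, keep one preprocessed conditioning image per sub-adapter
    in a list; a single T2IAdapter gets a single tensor."""
    if isinstance(adapter, MultiAdapter):
        return [preprocess(img).to(device=device, dtype=dtype) for img in adapter_image]
    return preprocess(adapter_image).to(device=device, dtype=dtype)
```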
---------
Co-authored-by: andres <andres@hax.ai>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Additions:
- support for a separate learning rate for the text encoder
- support for the Prodigy optimizer
- support for min-SNR gamma loss weighting (see the sketch after this list)
- support for custom captions and dataset loading from the Hub
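For the min-SNR gamma item above, a hedged sketch of the weighting, following the pattern used in other diffusers training scripts; `compute_snr` is the real helper, the surrounding names and the default gamma are assumptions:

```python
import torch
import torch.nn.functional as F
from diffusers.training_utils import compute_snr


def min_snr_weighted_loss(model_pred, target, noise_scheduler, timesteps, snr_gamma=5.0):
    """MSE loss reweighted by min(SNR, gamma) / SNR per sampled timestep
    (assumes epsilon prediction; v-prediction needs snr + 1 in the denominator)."""
    snr = compute_snr(noise_scheduler, timesteps)
    mse_loss_weights = (
        torch.stack([snr, snr_gamma * torch.ones_like(snr)], dim=1).min(dim=1)[0] / snr
    )
    loss = F.mse_loss(model_pred.float(), target.float(), reduction="none")
    loss = loss.mean(dim=list(range(1, len(loss.shape)))) * mse_loss_weights
    return loss.mean()
```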
* adjusted --caption_column behaviour: the second column of the dataset is no longer used by default when --caption_column is not provided
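A hedged sketch of the adjusted behaviour; falling back to --instance_prompt when no caption column is given is an assumption about what replaces the old second-column default:

```python
def get_captions(args, dataset):
    """Resolve per-image captions without silently grabbing the dataset's second column."""
    if args.caption_column is None:
        # fall back to a single shared prompt instead of guessing a column
        return [args.instance_prompt] * len(dataset)
    if args.caption_column not in dataset.column_names:
        raise ValueError(f"--caption_column '{args.caption_column}' not found in the dataset")
    return dataset[args.caption_column]
```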
* fixed --output_dir / --model_dir_name confusion
* added --repeats, --adam_weight_decay_text_encoder
+ some fixes
* Update examples/dreambooth/train_dreambooth_lora_sdxl.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* - import compute_snr from diffusers/training_utils.py
- group the AdamW setup together
- when using 'prodigy', if --train_text_encoder is enabled and --text_encoder_lr != --learning_rate, change the lr of the text encoder optimization params to --learning_rate (it errors otherwise); see the sketch below
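A hedged sketch of that 'prodigy' branch; the argument and param-group names are assumptions pieced together from the flags mentioned in these notes:

```python
def build_optimizer(args, unet_lora_params, text_lora_params_one, text_lora_params_two):
    params_to_optimize = [
        {"params": unet_lora_params, "lr": args.learning_rate},
        {"params": text_lora_params_one, "lr": args.text_encoder_lr},
        {"params": text_lora_params_two, "lr": args.text_encoder_lr},
    ]

    if args.optimizer.lower() == "prodigy":
        import prodigyopt

        if args.train_text_encoder and args.text_encoder_lr != args.learning_rate:
            # keep every param group on --learning_rate, since mismatched lrs
            # trip Prodigy up here (it adapts the effective step size itself)
            params_to_optimize[1]["lr"] = args.learning_rate
            params_to_optimize[2]["lr"] = args.learning_rate

        return prodigyopt.Prodigy(
            params_to_optimize,
            lr=args.learning_rate,
            betas=(args.adam_beta1, args.adam_beta2),
            weight_decay=args.adam_weight_decay,
        )

    import torch

    return torch.optim.AdamW(
        params_to_optimize,
        betas=(args.adam_beta1, args.adam_beta2),
        weight_decay=args.adam_weight_decay,
    )
```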
* shape fixes when custom captions are used
* formatting and a little cleanup
* code styling
* --repeats default value fixed, changed to 1
* bug fix - removed a redundant embedding concatenation when using prior_preservation that duplicated the class_prompt embeddings
* changed the dataset loading logic according to the following use cases (to avoid an unnecessary dependency on datasets); see the sketch after this list -
1. the user provides --dataset_name
2. the user provides a local dir --instance_data_dir that contains a metadata .jsonl file
3. the user provides a local dir --instance_data_dir that contains only images
in cases [1, 2] we import datasets and use the load_dataset method; in case [3] we process the data the same way as in the original script
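A hedged sketch of that branching; the exact argument names and the imagefolder handling for case [2] are assumptions:

```python
import os


def load_instance_data(args):
    """Load the training data for the three cases above; `datasets` is only
    imported when it is actually needed (cases 1 and 2)."""
    if args.dataset_name is not None:
        # case 1: dataset on the Hub (or a dataset loading script)
        from datasets import load_dataset

        return load_dataset(args.dataset_name, split="train")

    if os.path.exists(os.path.join(args.instance_data_dir, "metadata.jsonl")):
        # case 2: local folder with a metadata .jsonl -> load it as an imagefolder dataset
        from datasets import load_dataset

        return load_dataset("imagefolder", data_dir=args.instance_data_dir, split="train")

    # case 3: plain folder of images, processed the same way as the original script
    from pathlib import Path

    from PIL import Image

    return [Image.open(p) for p in sorted(Path(args.instance_data_dir).iterdir())]
```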
* styling fix
* arg name fix
* adjusted the --repeats logic
* - removed a redundant arg and 'if' when loading a local folder with prompts
- updated the readme template
- some default value fixes
- custom caption tests
* image path fix for readme
* code style
* bug fix
* --caption_column arg
* readme fix
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Linoy Tsaban <linoy@huggingface.co>
* fix an issue where ipex occupies too much memory; the fix does not impact performance
* make style
---------
Co-authored-by: root <jun.chen@intel.com>
Co-authored-by: Meng Guoqing <guoqing.meng@intel.com>
* Fix the pipeline name in the examples for the LMD+ pipeline
* Add LMD+ colab link
* Apply code formatting
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix the 'find_unused_parameters' error reported when running on multiple GPUs or NPUs
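One common way to address that error with accelerate is sketched below; whether the actual fix used this exact kwargs handler is an assumption:

```python
from accelerate import Accelerator
from accelerate.utils import DistributedDataParallelKwargs

# let DDP tolerate parameters that receive no gradient on some ranks
ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
accelerator = Accelerator(kwargs_handlers=[ddp_kwargs])
```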
* fix the code check for importing modules in alphabetical order
---------
Co-authored-by: jiaqiw <wangjiaqi50@huawei.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>