* modelcard generation edit
* add missed tag
* fix param name
* fix var
* change str to dict
* add use_dora check
* use correct tags for lora (see the tag-selection sketch after this block)
* make style && make quality
---------
Co-authored-by: Aryan <aryan@huggingface.co>
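A minimal sketch of the tag selection described above, assuming a hypothetical `use_dora` flag on the parsed args; the helper name and tag strings are illustrative, not the scripts' exact code:

```python
def get_model_card_tags(args) -> list:
    # Base tags shared by all runs of the training script (illustrative).
    tags = ["text-to-image", "diffusers", "diffusers-training"]
    if getattr(args, "use_dora", False):
        # DoRA checkpoints are loaded through the same LoRA machinery,
        # so both tags apply.
        tags.extend(["lora", "dora"])
    else:
        tags.append("lora")
    return tags
```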
* add ostris trainer to README & add cache latents of vae (see the latent-caching sketch after this block)
* style
* readme
* add test for latent caching
* add ostris noise scheduler
9ee1ef2a0a/toolkit/samplers/custom_flowmatch_sampler.py (L95)
* style
* fix import
* style
* fix tests
* style
* change upcasting of transformer
* update readme according to main
* add pivotal tuning for CLIP
* fix imports and encode_prompt call, add TextualInversionLoaderMixin to FluxPipeline for inference
* TextualInversionLoaderMixin support for FluxPipeline for inference
* move changes to advanced flux script, revert canonical
* add latent caching to canonical script
* revert changes to canonical script to keep it separate from https://github.com/huggingface/diffusers/pull/9160
* style
* remove redundant line and change code block placement to align with logic
* add initializer_token arg
* add transformer frac arg to support the full range from pure textual inversion to the original pivotal tuning
* support pure textual inversion - wip
* adjustments to support pure textual inversion and transformer optimization in only part of the epochs
* fix logic when using initializer token
* fix pure_textual_inversion_condition
* fix ti/pivotal loading of last validation run
* remove embeddings loading for ti in final training run (to avoid adding huggingface hub dependency)
* support pivotal for t5
* adapt pivotal for T5 encoder
* adapt pivotal for T5 encoder and support in flux pipeline
* t5 pivotal support + support for pivotal tuning of CLIP only or both encoders
* fix param chaining
* README first draft
* readme
* style
* fix import
* style
* add fix from https://github.com/huggingface/diffusers/pull/9419
* add to readme, change function names
* text encoder lr changes
* readme
* change concept tokens logic
* fix indices
* change arg name
* style
* dummy test
* revert dummy test
* reorder pivoting
* add warning in case the token abstraction is not the instance prompt
* experimental - wip - specific block training
* fix documentation and token abstraction processing
* remove transformer block specification feature (for now)
* style
* fix copies
* fix indexing issue when --initializer_concept has a different number of concepts
* add if TextualInversionLoaderMixin to all flux pipelines
* style
* fix import
* fix imports
* address review comments - remove unnecessary prints & comments, use pin_memory=True, use free_memory utils, unify warnings and prints
* style
* logger info fix
* make lora target modules configurable and change the default
* style
* make lora target modules configurable and change the default, add notes to readme
* style
* add tests
* style
* fix repo id
* add updated requirements for advanced flux
* fix indices of t5 pivotal tuning embeddings
* fix path in test
* remove `pin_memory`
* fix filename of embedding
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
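A minimal sketch of the latent caching referenced above ("add cache latents of vae"), assuming a `train_dataloader` that yields `pixel_values` batches and a standard `AutoencoderKL` as `vae`; variable names are illustrative:

```python
import torch

latents_cache = []
with torch.no_grad():
    for batch in train_dataloader:
        pixel_values = batch["pixel_values"].to(vae.device, dtype=vae.dtype)
        # Encode every training image once up front; keeping only the latent
        # distributions lets the VAE be moved off the GPU during training.
        latents_cache.append(vae.encode(pixel_values).latent_dist)

# Inside the training loop, sample from the cached distribution instead of
# re-encoding the images at every step:
model_input = latents_cache[step].sample() * vae.config.scaling_factor
```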
* minor changes
* fix
* aligning with blora script
* remove prints
* style
* default val
* license
* move save_model_card to outside push_to_hub (see the sketch after this block)
* Update train_dreambooth_lora_sdxl_advanced.py
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
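A sketch of that reordering: the model card is written unconditionally, and only the upload is gated on the flag. `save_model_card` follows the scripts' convention, but the call signature here is simplified:

```python
from huggingface_hub import upload_folder

# Always write a model card into the output directory...
save_model_card(
    repo_id,
    images=images,
    base_model=args.pretrained_model_name_or_path,
    repo_folder=args.output_dir,
)
# ...and only push when the user asked for it.
if args.push_to_hub:
    upload_folder(repo_id=repo_id, folder_path=args.output_dir)
```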
* Trim all the trailing white space in the whole repo
* Remove unnecessary empty places
* make style && make quality
* Trim trailing white space
* trim
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* #7529 do not disable autocast for CUDA devices
* Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
* add autocast fix to other training examples
* disable native_amp for dreambooth (sdxl)
* disable native_amp for pix2pix (sdxl)
* remove tests from remaining files
* disable native_amp on huggingface accelerator for every training example that uses it
* convert more usages of autocast to nullcontext, make style fixes (see the sketch after this block)
* make style fixes
* style.
* Empty-Commit
---------
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
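A sketch of the autocast handling these commits converge on, assuming an Accelerate-based training loop: let Accelerate own mixed precision, and fall back to `nullcontext` only where native AMP is unavailable (MPS). Illustrative, not verbatim:

```python
import torch
from contextlib import nullcontext

# MPS does not support native AMP, so disable it on the accelerator...
if torch.backends.mps.is_available():
    accelerator.native_amp = False

# ...and pick the context used around validation inference accordingly.
if torch.backends.mps.is_available():
    autocast_ctx = nullcontext()
else:
    autocast_ctx = torch.autocast(accelerator.device.type)

with autocast_ctx:
    images = pipeline(args.validation_prompt, num_inference_steps=25).images
```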
You cannot specify `type=bool` and `action="store_true"` at the same time; remove the excessive and buggy `type=bool` (see the sketch below).
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
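A sketch of the argparse pitfall being fixed here: `type=bool` silently turns any non-empty string (including `"False"`) into `True`, and combining it with `action="store_true"` raises a `TypeError` outright:

```python
import argparse

parser = argparse.ArgumentParser()

# Buggy: bool("False") is True, so `--flag False` still enables the flag;
# adding action="store_true" on top raises a TypeError, since the
# store_true action does not accept a `type` argument.
# parser.add_argument("--flag", type=bool, action="store_true")

# Fixed: a plain store_true flag is the idiomatic boolean switch.
parser.add_argument("--flag", action="store_true")

args = parser.parse_args(["--flag"])
assert args.flag is True
```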
* add tags for diffusers training
* add dora tags for dreambooth lora scripts
* style
* add is_dora arg (see the LoraConfig sketch after this block)
* style
* add dora training feature to sd 1.5 script
* added notes about DoRA training
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
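A sketch of how the `is_dora` arg plausibly feeds into peft, where `use_dora` is a real `LoraConfig` parameter in recent peft releases; the surrounding variable names are illustrative:

```python
from peft import LoraConfig

unet_lora_config = LoraConfig(
    r=args.rank,
    lora_alpha=args.rank,
    # DoRA decomposes the weight update into magnitude and direction;
    # it is toggled with a single flag on top of the usual LoRA config.
    use_dora=args.is_dora,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
unet.add_adapter(unet_lora_config)
```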
* add noise_offset param
* micro conditioning - wip
* image processing adjusted and moved to support micro conditioning
* change time ids to be computed inside train loop (see the sketch after this block)
* time ids shape fix
* move token replacement of validation prompt to the same section of instance prompt and class prompt
* add offset noise to sd15 advanced script
* fix token loading during validation
* fix token loading during validation in sdxl script
* a little clean
* style
* a little clean
* style
* sdxl script - a little clean + minor path fix
sd 1.5 script - change default resolution value
* sd 1.5 script - minor path fix
* fix missing comma in code example in model card
* clean up commented lines
* style
* remove time ids computed outside training loop - no longer used now that we utilize micro-conditioning, as all time ids are now computed inside the training loop
* style
* [WIP] - added draft readme, building off of examples/dreambooth/README.md
* readme
* removed --crops_coords_top_left from CLI args
* style
* fix shape bug caused by a missing RGB-conversion if statement
* add blog mention at the start of the readme as well
* Update examples/advanced_diffusion_training/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* change note to render nicely as well
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
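A sketch of the two mechanisms this block touches, with illustrative names: offset noise added to the sampled noise, and SDXL micro-conditioning time ids computed per batch inside the training loop:

```python
import torch

noise = torch.randn_like(model_input)
if args.noise_offset:
    # Offset noise shifts each sample's mean, helping the model learn very
    # dark and very bright images.
    noise += args.noise_offset * torch.randn(
        (model_input.shape[0], model_input.shape[1], 1, 1),
        device=model_input.device,
    )

def compute_time_ids(original_size, crop_coords_top_left, target_size):
    # SDXL micro-conditioning packs (orig_h, orig_w, crop_top, crop_left,
    # target_h, target_w) into a single tensor per image.
    return torch.tensor(
        list(original_size) + list(crop_coords_top_left) + list(target_size)
    )

add_time_ids = torch.stack(
    [
        compute_time_ids(s, c, (args.resolution, args.resolution))
        for s, c in zip(batch["original_sizes"], batch["crop_top_lefts"])
    ]
).to(accelerator.device)
```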
* sd1.5 support in separate script
A quick adaptation to support people interested in using this method on 1.5 models.
* sd15 prompt text encoding and unet conversions (see the sketch after this block)
as per @linoytsaban's recommendations. Testing would be appreciated.
* Readability and quality improvements
Removed some mentions of SDXL, and some arguments that don't apply to sd 1.5, and cleaned up some comments.
* make style/quality commands
* tracker rename and run-it doc
* Update examples/advanced_diffusion_training/train_dreambooth_lora_sd15_advanced.py
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
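A sketch of the SD 1.5 text-encoding simplification mentioned above, assuming a single CLIP text encoder; unlike SDXL there is no second encoder, pooled embedding, or `add_time_ids`:

```python
def encode_prompt_sd15(text_encoder, input_ids):
    # SD 1.5 conditions the UNet on the single encoder's last hidden state.
    prompt_embeds = text_encoder(input_ids.to(text_encoder.device))[0]
    return prompt_embeds
```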
* Fixes #6418 Advanced DreamBooth LoRA Training
* change order of import to fix nit
* fix nit, use cast_training_params
* remove torch.compile fix, will move to a new PR
* remove unnecessary import
* unwrap text encoder in the saving hook only for full text encoder tuning
* save embeddings in each checkpoint as well (see the sketch after this block)
* Update examples/advanced_diffusion_training/train_dreambooth_lora_sdxl_advanced.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
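A sketch of "save embeddings in each checkpoint as well", with illustrative names for the token bookkeeping; the point is that the same helper runs in the checkpointing hook, not only at the end of training:

```python
import os
from safetensors.torch import save_file

def save_new_embeddings(text_encoder, new_token_ids, path):
    # Slice out only the rows belonging to the newly inserted concept tokens.
    embeds = text_encoder.get_input_embeddings().weight.data[new_token_ids]
    save_file({"clip_l": embeds}, path)

# Called from the save hook for every intermediate checkpoint:
save_new_embeddings(
    text_encoder_one,
    new_token_ids,
    os.path.join(args.output_dir, f"checkpoint-{global_step}", "learned_embeds.safetensors"),
)
```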
* change timesteps used to calculate snr when --with_prior_preservation is enabled (see the sketch after this block)
* change timesteps used to calculate snr when --with_prior_preservation is enabled (canonical script)
* style
* revert canonical script to before snr gamma change
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
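A sketch of the timestep fix for min-SNR weighting under `--with_prior_preservation`, where the batch stacks instance and class examples along dim 0; `compute_snr` is the existing diffusers helper, and the chunking follows the usual prior-preservation layout:

```python
import torch
from diffusers.training_utils import compute_snr

if args.with_prior_preservation:
    # Predictions and targets are chunked into instance/class halves, so the
    # timesteps must be chunked the same way before computing SNR weights.
    model_pred, model_pred_prior = torch.chunk(model_pred, 2, dim=0)
    target, target_prior = torch.chunk(target, 2, dim=0)
    timesteps, _ = torch.chunk(timesteps, 2, dim=0)

snr = compute_snr(noise_scheduler, timesteps)
```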