* add zero123 pipeline to community
* add community doc
* reformat
* update zero123 pipeline, including cc_projection within diffusers; add convert ckpt scripts; support diffusers weights
* [SDXL-IP2P] Add gif for demonstrating training processes
* [SDXL-IP2P] Add gif for demonstrating training processes
* [SDXL-IP2P] Change gif to URLs
* [SDXL-IP2P] Add URLs in case the GIF does not show
---------
Co-authored-by: Harutatsu Akiyama <kf.zy.qin@gmail.com>
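The zero123 commits at the top of this block add a community pipeline plus checkpoint-conversion scripts, so converted weights can be loaded through the standard `custom_pipeline` mechanism. A minimal loading sketch; the checkpoint id and the community pipeline module name below are assumptions, not confirmed repository names:

```python
import torch
from diffusers import DiffusionPipeline

# Load the community zero123 pipeline from converted diffusers-format weights.
# Both the model id and the custom_pipeline name are placeholders.
pipe = DiffusionPipeline.from_pretrained(
    "some-org/zero123-diffusers",          # hypothetical converted checkpoint
    custom_pipeline="pipeline_zero1to3",   # assumed community pipeline module name
    torch_dtype=torch.float16,
)
pipe.to("cuda")
```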
* fix: #4206
* add: sdxl controlnet training smoketest.
* remove unnecessary token inits.
* add: licensing to model card.
* include SDXL licensing in the model card and make public visibility default
* debugging
* debugging
* disable local file download.
* fix: training test.
* fix: ckpt prefix.
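A smoke test like the one referenced above usually just launches the SDXL ControlNet training script for a couple of steps on a tiny model. A minimal sketch, assuming the example script path `examples/controlnet/train_controlnet_sdxl.py`; the tiny base model id and dataset name are placeholders:

```python
import subprocess

# Run the SDXL ControlNet training example for two steps as a smoke test.
subprocess.run(
    [
        "accelerate", "launch", "examples/controlnet/train_controlnet_sdxl.py",
        "--pretrained_model_name_or_path", "hf-internal-testing/tiny-sdxl-pipe",  # placeholder id
        "--dataset_name", "fusing/fill50k",                                        # placeholder dataset
        "--resolution", "64",
        "--train_batch_size", "1",
        "--max_train_steps", "2",
        "--checkpointing_steps", "2",
        "--output_dir", "sdxl-controlnet-smoketest",
    ],
    check=True,
)
```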
* 📄 Renamed File for Better Understanding
Renamed the 'rl' file to 'run_locomotion' to improve the clarity and readability of the codebase. The 'rl' name was ambiguous; 'run_locomotion' describes the file's purpose more clearly.
Thanks 🙌
* 📁 [Docs] Renamed Directory for Better Clarity
Renamed the 'rl' directory to 'reinforcement_learning'. This change provides a clearer understanding of the directory's purpose and its contents.
* Update examples/reinforcement_learning/README.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* 📝 Update README
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add: controlnet sdxl.
* modifications to controlnet.
* run styling.
* add: __init__.pys
* incorporate https://github.com/huggingface/diffusers/pull/4019 changes.
* run make fix-copies.
* resize the conditioning images.
* remove autocast.
* run styling.
* disable autocast.
* debugging
* device placement.
* back to autocast.
* remove comment.
* save some memory by reusing the vae and unet in the pipeline.
* apply styling.
* Allow low-precision SDXL
* finish
* finish
* changes to accommodate the improved VAE.
* modifications to how we handle vae encoding in the training.
* make style
* make existing controlnet fast tests pass.
* change vae checkpoint cli arg.
* fix: vae pretrained paths.
* fix: steps in get_scheduler().
* debugging.
* debugging.
* fix: weight conversion.
* add: docs.
* add: limited tests.
* add: datasets to the requirements.
* update docstrings and incorporate the usage of watermarking.
* incorporate fix from #4083
* fix watermarking dependency handling.
* run make-fix-copies.
* Empty-Commit
* Update requirements_sdxl.txt
* remove vae upcasting part.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* run make style
* run make fix-copies.
* disable support for multi-controlnet.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* run make fix-copies.
* style.
* fix-copies.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
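Several commits in the SDXL ControlNet block above concern VAE handling (low-precision support, the improved VAE, removing the upcasting path) and resizing the conditioning images. A minimal inference sketch under those assumptions; the trained ControlNet path and the conditioning image URL are placeholders, and the fp16-friendly VAE checkpoint is assumed rather than prescribed:

```python
import torch
from diffusers import AutoencoderKL, ControlNetModel, StableDiffusionXLControlNetPipeline
from diffusers.utils import load_image

# fp16-friendly VAE, assumed here so the pipeline can run entirely in half precision.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
controlnet = ControlNetModel.from_pretrained("path/to/trained-controlnet", torch_dtype=torch.float16)  # placeholder

pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# The conditioning image is resized to the target resolution by the pipeline.
conditioning = load_image("https://example.com/conditioning.png")  # placeholder URL
image = pipe("a photo of a house", image=conditioning, num_inference_steps=30).images[0]
```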
* refactor to support patching LoRA into T5
instantiate the lora linear layer on the same device as the regular linear layer
get lora rank from state dict
tests
fmt
can create lora layer in float32 even when rest of model is float16
fix loading model hook
remove load_lora_weights_ and T5 dispatching
remove Unet#attn_processors_state_dict
docstrings
* text encoder monkeypatch class method
* fix test
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
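The LoRA refactor above reads the rank from the serialized state dict and creates the low-rank layers on the same device as the wrapped linear layer, optionally keeping them in float32 while the rest of the model is float16. A minimal sketch of those two ideas; the state-dict key name is an assumption:

```python
import torch
import torch.nn as nn

def lora_rank_from_state_dict(state_dict, down_key="lora.down.weight"):
    # The rank is the output dimension of the "down" projection; the key name is assumed.
    return state_dict[down_key].shape[0]

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int, dtype=torch.float32):
        super().__init__()
        device = base.weight.device
        # Create the LoRA matrices on the same device as the base layer; they can stay
        # in float32 even when the base model runs in float16.
        self.down = nn.Linear(base.in_features, rank, bias=False, device=device, dtype=dtype)
        self.up = nn.Linear(rank, base.out_features, bias=False, device=device, dtype=dtype)
        nn.init.normal_(self.down.weight, std=1.0 / rank)
        nn.init.zeros_(self.up.weight)
        self.base = base

    def forward(self, x):
        lora_out = self.up(self.down(x.to(self.down.weight.dtype)))
        return self.base(x) + lora_out.to(x.dtype)
```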
* - Added validation parameters
- Changed some parameter descriptions to better explain their use.
- Fixed a few typos.
- Added concept_list parameter for better management of multiple subjects
- Changed logic for image validation
* - Fixed bad logic for class data root directories
* Defaulted validation_steps to None for simpler logic
* Fixed multiple validation prompts
* Fixed bug on validation negative prompt
* Changed validation logic for tracker.
* Added uuid for validation image labeling
* Fix error when comparing validation prompts and validation negative prompts
* Improved error message when there are more validation negative prompts than validation prompts
* - Changed image tracking number from epoch to global_step
- Added Typing for functions
* Added more validations when using the concept_list parameter together with the regular parameters.
* Fixed error message
* Added more validations for validation parameters
* Improved messaging for errors
* Fixed validation error for parameters with default values
* - Added train step to image name for validation
- reformatted code
* - Added train step to image's name for validation
- reformatted code
* Updated README.md file.
* reverted back original script of train_dreambooth.py
* reverted back original script of train_dreambooth.py
* left one blank line at the eof
* reverted back setup.py
* reverted back setup.py
* Added the same logic for when prior-preservation parameters are used without enabling the flag while using the concept_list parameter.
* Ran black formatter.
* fixed a few strings
* fixed import sorting with isort and removed f-strings without placeholders
* fixed import order with ruff (since sorting with isort alone wasn't enough)
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
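The concept_list parameter referenced throughout this block points the multi-subject DreamBooth training at a JSON file describing each subject, including the class (prior-preservation) data that the new validations check for. A minimal sketch of building such a file; the field names and the CLI flag are assumptions based on the example scripts, not the exact schema:

```python
import json

# Hypothetical concepts list: each entry pairs an instance prompt/data dir with
# optional class (prior-preservation) prompt/data dir. Field names are assumed.
concepts = [
    {
        "instance_prompt": "a photo of sks dog",
        "class_prompt": "a photo of a dog",
        "instance_data_dir": "./data/dog",
        "class_data_dir": "./class_data/dog",
    },
    {
        "instance_prompt": "a photo of xyz cat",
        "class_prompt": "a photo of a cat",
        "instance_data_dir": "./data/cat",
        "class_data_dir": "./class_data/cat",
    },
]

with open("concepts_list.json", "w") as f:
    json.dump(concepts, f, indent=2)

# The training script would then be launched with a flag along the lines of
# --concepts_list concepts_list.json (name assumed), and should validate that the
# class_* fields are present whenever prior preservation is enabled.
```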
* refactor: readme serialized from the example when push_to_hub is True.
* fix: batch size arg.
* a bit better formatting
* minor fixes.
* add note on env.
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* condition wandb info better
* make mixed_precision assignment in cli args explicit.
* separate inference block for sample images.
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* address more comments.
* autocast mode.
* correct None image type problem.
* fix: list assignment.
* minor fix.
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
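The README-serialization commits above generate a model card from the training arguments and push it to the Hub along with the weights when push_to_hub is enabled. A minimal sketch; the helper name, card fields, and argument names are illustrative, not the script's exact code:

```python
from huggingface_hub import upload_folder

def save_model_card(repo_id: str, base_model: str, output_dir: str, mixed_precision: str = "fp16"):
    # Write a README.md with YAML front matter so the Hub renders the metadata.
    card = f"""---
license: creativeml-openrail-m
base_model: {base_model}
tags:
  - diffusers
---
# {repo_id}

These weights were trained on top of `{base_model}` (mixed precision: {mixed_precision}).
"""
    with open(f"{output_dir}/README.md", "w") as f:
        f.write(card)

# After training, when --push_to_hub is set (argument names assumed):
# save_model_card(repo_id, args.pretrained_model_name_or_path, args.output_dir, args.mixed_precision)
# upload_folder(repo_id=repo_id, folder_path=args.output_dir, commit_message="End of training")
```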
* added the StableDiffusionCanvasPipeline
* Added utility code to the pipe_utils file.
* make style
* delete mixture.py and Text2ImageRegion class
* make style
* Added the code to the README.md file.
* Moved functions from pipeline_utils to mix_canvas