* EMA: fix `state_dict()` & add `cur_decay_value`
* EMA: fix a bug in `load_state_dict()`
`state_dict["power"]` is a float, so calling `.get` on it raised `AttributeError: 'float' object has no attribute 'get'`.
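A minimal sketch of the bug class described above, using a simplified stand-in for `EMAModel` (attribute names follow the commit description; the actual diffusers implementation differs): `.get` has to be called on the state dict itself, not on the float stored under `"power"`.

```python
class EMAModel:
    """Simplified stand-in used only to illustrate the `load_state_dict` fix."""

    def __init__(self, power=2 / 3, decay=0.9999):
        self.power = power
        self.decay = decay

    def load_state_dict(self, state_dict):
        # Buggy version: state_dict["power"] is a float, so calling .get on it
        # raised AttributeError: 'float' object has no attribute 'get'.
        #   self.power = state_dict["power"].get("power", self.power)
        # Fixed version: call .get on the dict itself and keep the old value
        # as the default when the key is missing.
        self.power = state_dict.get("power", self.power)
        self.decay = state_dict.get("decay", self.decay)


ema = EMAModel()
ema.load_state_dict({"power": 0.75, "decay": 0.999})
```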
* del train_unconditional_ort.py
* Quality check and adding tokenizer
* Adapted stable diffusion to mixed precision and finished up style fixes
* Fixed based on Patrick's review
* Fixed oom from number of validation images
* Removed unnecessary np.array conversion
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* better accelerated saving
* up
* finish
* finish
* up
* up
* up
* fix
* Apply suggestions from code review
* correct ema
* Remove @
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/training/dreambooth.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fix torchvision.transforms and transforms function naming clash
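The clash fixed here is the usual shadowing pattern; a hedged sketch (variable names are illustrative, not the actual script's):

```python
from torchvision import transforms

# The clash: a local object also named `transforms` would shadow the imported
# module, so later calls such as transforms.Compose(...) stop working.
# Renaming the local object (here `train_transforms`) avoids the collision.
train_transforms = transforms.Compose(
    [
        transforms.Resize(512),
        transforms.CenterCrop(512),
        transforms.ToTensor(),
    ]
)
```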
* Update unconditional script for onnx
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Add center crop and horizontal flip to args
* Update command to use center crop and random flip
* Add center crop and horizontal flip to args
* Update command to use center crop and random flip
* make scaling factor a config arg of the VAE
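With the scaling factor exposed as a config arg, training and pipeline code can read it from the VAE instead of hard-coding 0.18215. A sketch assuming an `AutoencoderKL` loaded from a Stable Diffusion checkpoint (the model id is just an example):

```python
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="vae")

image = torch.randn(1, 3, 512, 512)  # placeholder batch of pixel-space images
latents = vae.encode(image).latent_dist.sample()
# Read the factor from the model config instead of a hard-coded 0.18215.
latents = latents * vae.config.scaling_factor
```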
* fix
* make flake happy
* fix ldm
* fix upscaler
* quality
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* solve conflicts, address some comments
* examples
* examples min version
* doc
* fix type
* typo
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* remove duplicate line
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add: a doc on LoRA support in diffusers.
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* apply PR suggestions.
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* remove visually incoherent elements.
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
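For readers following the LoRA doc referenced above, loading attention-processor LoRA weights into a pipeline looked roughly like this at the time (the weight path is a placeholder; the model id is just an example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load LoRA attention-processor weights produced by the LoRA fine-tuning example.
# "path/to/lora-weights" is a placeholder for a local directory or Hub repo id.
pipe.unet.load_attn_procs("path/to/lora-weights")

image = pipe("a photograph of an astronaut riding a horse", num_inference_steps=25).images[0]
```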
* allow a local model directory to be used for merging
* moved checkpoint merge bugfix into main for testing
* possibly fix local variable "config_dict" referenced before assignment
* fix deprecation warning
* debugging...
* debugging
* allow safetensors
* safetensors try again
* fix syntax error
* further debugging
* fix logic error when checkpoint 2 is none
* more debugging...
* more debugging...
* more debugging...
* more debugging...
* debugging
* clean up status reporting
* skip the requires_safety_checker boolean
* moved checkpoint merge bugfix into main for testing
* possibly fix local variable "config_dict" referenced before assignment
* fix deprecation warning
* allow safetensors
* fix logic error when checkpoint 2 is none
* clean up status reporting
* undo hack to use private repo for community pipelines
* allow a local model directory to be used for merging
* allow safetensors
* clean up status reporting
* reformatted with black
* sort imported modules correctly
* Update examples/community/checkpoint_merger.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update examples/community/checkpoint_merger.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update examples/community/checkpoint_merger.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix import style error
Co-authored-by: Lincoln Stein <lstein@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
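The checkpoint merger changes above land in the community pipeline; a hedged usage sketch that roughly follows the community example's documented interface (argument names and defaults may have changed since):

```python
from diffusers import DiffusionPipeline

# Load the community pipeline; the entries passed to merge() can be Hub ids or,
# after this change, local model directories, and safetensors checkpoints are allowed.
pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", custom_pipeline="checkpoint_merger"
)
merged_pipe = pipe.merge(
    ["CompVis/stable-diffusion-v1-4", "runwayml/stable-diffusion-v1-5"],
    interp="sigmoid",
    alpha=0.4,
)
merged_pipe.to("cuda")
```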
* Fix resuming state when using gradient checkpointing.
Also, allow --resume_from_checkpoint to be used when the checkpoint does not yet exist (a normal training run will be started).
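The resume behavior described here follows the common pattern in the training scripts: if the requested checkpoint directory is missing, fall back to a fresh run instead of erroring. A simplified sketch (function, argument, and variable names are illustrative, not the script's):

```python
import os


def maybe_resume(args, accelerator):
    """Resume training state if the checkpoint exists; otherwise start fresh."""
    if args.resume_from_checkpoint is None:
        return 0  # start a normal run at global step 0

    if args.resume_from_checkpoint != "latest":
        path = args.resume_from_checkpoint
    else:
        # Pick the newest "checkpoint-<step>" directory, if any exist.
        dirs = sorted(
            (d for d in os.listdir(args.output_dir) if d.startswith("checkpoint")),
            key=lambda d: int(d.split("-")[1]),
        )
        path = os.path.join(args.output_dir, dirs[-1]) if dirs else None

    if path is None or not os.path.exists(path):
        print(f"Checkpoint '{args.resume_from_checkpoint}' not found; starting a new training run.")
        return 0

    accelerator.load_state(path)
    return int(os.path.basename(path).split("-")[1])
```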
* style
* Dreambooth: use `optimizer.zero_grad(set_to_none=True)` to reduce VRAM usage
* Allow the user to control `optimizer.zero_grad(set_to_none=True)` with --set_grads_to_none
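The --set_grads_to_none flag maps onto PyTorch's `optimizer.zero_grad(set_to_none=True)`, which frees gradient tensors instead of zeroing them in place and can reduce peak VRAM. A minimal sketch of how such a flag is typically wired in (the flag plumbing here is illustrative):

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(768, 768).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

set_grads_to_none = True  # in the script this would come from --set_grads_to_none

for step in range(10):
    loss = model(torch.randn(4, 768, device=device)).pow(2).mean()
    loss.backward()
    optimizer.step()
    # set_to_none=True frees the .grad tensors instead of filling them with zeros,
    # trading a reallocation on the next backward pass for lower peak memory.
    optimizer.zero_grad(set_to_none=set_grads_to_none)
```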
* Update Dreambooth readme
* Fix link in readme
* Fix header size in readme
* example on fine-tuning with LoRA.
* apply make quality.
* fix: pipeline loading.
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* apply suggestions for PR review.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* apply make style and make quality.
* chore: remove mention of dreambooth from text2image.
* add: weight path and wandb run link.
* Apply suggestions from code review
* apply make style.
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
* improve EMA
* style
* one EMA model
* quality
* fix tests
* fix test
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* reorganise the unconditional script
* backwards compatibility
* default to init values for some args
* fix ort script
* issubclass => isinstance
* update state_dict
* docstr
* doc
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* use .to if device is passed
* deprecate device
* make flake happy
* fix typo
Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>