* Support training SD V2 with Flax
Mostly involves supporting a v_prediction scheduler.
The implementation in #1777 doesn't take into account a recent refactor of `scheduling_utils_flax`, so this should be used instead.
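For reference, a minimal sketch of the v_prediction training target under the usual formulation (function and argument names are illustrative, not the exact diffusers internals):

```python
import jax.numpy as jnp

def get_velocity(x0, noise, alphas_cumprod, timesteps):
    # v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x0
    alpha_bar = alphas_cumprod[timesteps]                            # (batch,)
    sqrt_alpha = jnp.sqrt(alpha_bar)[:, None, None, None]            # broadcast over CHW
    sqrt_one_minus = jnp.sqrt(1.0 - alpha_bar)[:, None, None, None]
    return sqrt_alpha * noise - sqrt_one_minus * x0
```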
* Add to other top-level files.
* [Deterministic torch randn] Allow tensors to be generated on CPU
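The gist, as a sketch: draw the noise on the CPU with a seeded generator so results match across devices, then move it to where it's needed (shape and device are illustrative):

```python
import torch

# Sampling on CPU makes the sequence independent of the CUDA device/driver.
generator = torch.Generator(device="cpu").manual_seed(0)
latents = torch.randn((1, 4, 64, 64), generator=generator)
latents = latents.to("cuda")
```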
* fix more
* up
* fix more
* up
* Update src/diffusers/utils/torch_utils.py
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* Apply suggestions from code review
* up
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* misc fixes
* more comments
* Update examples/textual_inversion/textual_inversion.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* set transformers verbosity to warning
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
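For reference, the one-liner this refers to (a real transformers API):

```python
import transformers

# Keep transformers' logs at warning level so training output stays readable.
transformers.utils.logging.set_verbosity_warning()
```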
* allow using non-ema weights for training
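A hedged sketch of what this enables: initialize the trainable UNet from non-EMA weights while keeping an EMA copy for evaluation (the `non_ema_revision` argument and `non-ema` branch name are assumptions based on the official SD checkpoints):

```python
from diffusers import UNet2DConditionModel

# Trainable weights come from the non-EMA revision...
unet = UNet2DConditionModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="unet", revision=args.non_ema_revision
)
# ...while the EMA copy starts from the default (EMA) weights.
ema_unet = UNet2DConditionModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
)
```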
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* address more review comments
* reorganise a few lines
* always pad text to max_length to match original training
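In sketch form (assuming a CLIP-style tokenizer, as in the training scripts):

```python
# Padding every caption to model_max_length matches the original SD training
# setup; truncation keeps over-long prompts from erroring out.
inputs = tokenizer(
    captions,
    padding="max_length",
    max_length=tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
input_ids = inputs.input_ids
```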
* fix collate_fn
* remove unused code
* don't prepare ema_unet, don't register lr scheduler
* style
* assert => ValueError
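The pattern, on a made-up check:

```python
# Before: assert images is not None, "images must not be None"
# After: a ValueError survives `python -O` and gives callers something to catch.
if images is None:
    raise ValueError("`images` must not be None.")
```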
* add allow_tf32
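The corresponding PyTorch flag (real API; typically gated behind a CLI argument in the scripts):

```python
import torch

# TF32 speeds up float32 matmuls on Ampere and newer GPUs at a small
# precision cost; off by default since PyTorch 1.12.
torch.backends.cuda.matmul.allow_tf32 = True
```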
* set log level
* fix comment
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* move files a bit
* more refactors
* fix more
* more fixes
* fix more ONNX issues
* make style
* upload
* fix
* up
* fix more
* up again
* up
* small fix
* Update src/diffusers/__init__.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* correct
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* initial
* type hints
* update scheduler type hint
* add to README
* add example generation to README
* v -> mix_factor
* load scheduler from pretrained
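That is, instead of constructing the scheduler by hand:

```python
from diffusers import DDPMScheduler

# Pull the scheduler configuration from the checkpoint's `scheduler` subfolder.
noise_scheduler = DDPMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)
```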
* Make xformers optional even if it is available
* Raise exception if xformers is used but not available
* Rename use_xformers to enable_xformers_memory_efficient_attention
* Add a note about xformers in README
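After the rename, the opt-in looks like this (real pipeline method; it raises if xformers isn't importable):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Explicit opt-in, even when xformers is installed; raises otherwise.
pipe.enable_xformers_memory_efficient_attention()
```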
* Reformat code style
* Section header for in-painting, inference from checkpoint.
* Inference: link to section to perform inference from checkpoint.
* Move Dreambooth in-painting instructions to the proper place.
* [Flax] Stateless schedulers, fixes and refactors
* Remove scheduling_common_flax and some renames
* Update src/diffusers/schedulers/scheduling_pndm_flax.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* expose polynomial:power and cosine_with_restarts:num_cycles through the get_scheduler func, and add them to train_dreambooth.py
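The new knobs, as exposed by `get_scheduler` after this change (kwarg names per the commit subject; `optimizer` is assumed to exist):

```python
from diffusers.optimization import get_scheduler

lr_scheduler = get_scheduler(
    "cosine_with_restarts",
    optimizer=optimizer,
    num_warmup_steps=500,
    num_training_steps=10_000,
    num_cycles=3,          # cosine_with_restarts:num_cycles
)
# For "polynomial", the analogous knob is power=..., e.g. power=2.0.
```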
* fix formatting
* fix style
* Update src/diffusers/optimization.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Fail if there are fewer images than the effective batch size.
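A sketch of the guard (variable names assumed):

```python
effective_batch_size = (
    args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps
)
if len(train_dataset) < effective_batch_size:
    raise ValueError(
        f"Got {len(train_dataset)} training images, but the effective batch size "
        f"is {effective_batch_size}; add images or lower the batch size."
    )
```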
* Remove lr-scheduler arg as it's currently ignored.
* Make guidance_scale work for batch_size > 1.
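For context, the classifier-free-guidance step this touches; `chunk(2)` splits the concatenated unconditional/text halves along the batch dimension, so the same code handles any batch size:

```python
# noise_pred has shape (2 * batch_size, C, H, W): uncond half first, then text.
noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
```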
* Add examples with Intel optimizations (BF16 fine-tuning and inference)
* Remove unused package
* Add README for intel_opts and refine the description for research projects
* Add notes of intel opts for diffusers
* Add state checkpointing to other training scripts
* Fix first_epoch
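Resuming, in sketch form (mirrors the pattern in the training scripts; names assumed):

```python
import os

# `path` is the checkpoint directory name, e.g. "checkpoint-1500".
accelerator.load_state(os.path.join(args.output_dir, path))
global_step = int(path.split("-")[1])

# The first_epoch fix: derive epoch/step offsets from the restored global step.
first_epoch = global_step // num_update_steps_per_epoch
resume_step = global_step % num_update_steps_per_epoch
```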
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update Dreambooth checkpoint help message.
* Dreambooth docs: checkpoints, inference from a checkpoint.
* make style
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Dreambooth: save / restore training state.
* make style
* Rename vars for clarity.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Remove unused import
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Added Community pipeline for comparing Stable Diffusion v1.1-4
Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
* Made changes to support the current iteration of from_pretrained and added an example
Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
* fixed a small spelling error
Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
* added pipeline entry to table
Signed-off-by: Suvaditya Mukherjee <suvadityamuk@gmail.com>
* dreambooth: fix #1566: maintain the fp32 wrapper when saving a checkpoint to avoid a crash when running in fp16
* dreambooth: guard against passing the keep_fp32_wrapper arg to older versions of accelerate. part of the fix for #1566
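The guard, roughly (the signature check is the hedge against older accelerate versions; names assumed):

```python
import inspect

# Only pass keep_fp32_wrapper when the installed accelerate knows about it.
accepts_keep_fp32_wrapper = "keep_fp32_wrapper" in set(
    inspect.signature(accelerator.unwrap_model).parameters.keys()
)
extra_args = {"keep_fp32_wrapper": True} if accepts_keep_fp32_wrapper else {}
unet = accelerator.unwrap_model(unet, **extra_args)
```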
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update examples/dreambooth/train_dreambooth.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* easy fix for undefined name in train_dreambooth.py
import_model_class_from_model_name_or_path loads a pretrained model and refers to args.revision in a context where args is undefined. I modified the function to take revision as an argument and changed the invocation to pass in args.revision. Seems like this was caused by a copy and paste.
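The fix in sketch form: thread `revision` through the helper instead of reading the global `args` inside it:

```python
from transformers import PretrainedConfig

def import_model_class_from_model_name_or_path(pretrained_model_name_or_path: str, revision: str):
    text_encoder_config = PretrainedConfig.from_pretrained(
        pretrained_model_name_or_path, subfolder="text_encoder", revision=revision
    )
    ...  # dispatch on text_encoder_config.architectures as before

# Call site now passes the revision explicitly:
text_encoder_cls = import_model_class_from_model_name_or_path(
    args.pretrained_model_name_or_path, args.revision
)
```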
* add check_min_version for examples
* move __version__ to the top
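Usage at the top of each example script (the version string is illustrative):

```python
from diffusers.utils import check_min_version

# Errors with an install hint if the local diffusers is older than the
# examples were written against.
check_min_version("0.10.0.dev0")
```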
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* fix comment
* fix error_message
* adapt the install message
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>