* Support training SD V2 with Flax
Mostly involves supporting a v_prediction scheduler.
The implementation in #1777 doesn't take into account a recent refactor of `scheduling_utils_flax`, so this should be used instead.
* Add to other top-level files.
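For context, with v_prediction the model regresses the "velocity" rather than the noise. A minimal sketch of the training target under standard DDPM notation (the helper name and NCHW latent layout are illustrative, not the code from the PR):

```python
import jax.numpy as jnp

def v_prediction_target(alphas_cumprod, timesteps, latents, noise):
    # v-prediction target: v = sqrt(alpha_bar_t) * eps - sqrt(1 - alpha_bar_t) * x_0
    alpha_bar = alphas_cumprod[timesteps].reshape(-1, 1, 1, 1)
    return jnp.sqrt(alpha_bar) * noise - jnp.sqrt(1.0 - alpha_bar) * latents
```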
* [Deterministic torch randn] Allow tensors to be generated on CPU
* fix more
* up
* fix more
* up
* Update src/diffusers/utils/torch_utils.py
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
* Apply suggestions from code review
* up
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
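The pattern behind the deterministic randn change, as a hedged sketch (the helper name is hypothetical; diffusers wraps this logic in its own utility in src/diffusers/utils/torch_utils.py):

```python
import torch

def randn_on_cpu(shape, generator, device, dtype=torch.float32):
    # Sample on the CPU with a seeded generator, then move to the target
    # device. CPU RNG streams are reproducible across machines, whereas
    # per-GPU generator state is not.
    return torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device)

gen = torch.Generator("cpu").manual_seed(0)
noise = randn_on_cpu((1, 4, 64, 64), gen, "cuda" if torch.cuda.is_available() else "cpu")
```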
* misc fixes
* more comments
* Update examples/textual_inversion/textual_inversion.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* set transformers verbosity to warning
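The verbosity change boils down to a single call; warnings and errors still surface:

```python
import transformers

# Keep training logs readable: only warnings and errors from transformers.
transformers.utils.logging.set_verbosity_warning()
```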
* allow using non-ema weights for training
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* address more review comments
* reorganise a few lines
* always pad text to max_length to match original training
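Sketch of the padding behavior this commit enforces (the tokenizer checkpoint is shown for illustration; it is the standard SD v1 text encoder's tokenizer):

```python
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
# Pad every caption to model_max_length (77 for CLIP) instead of the batch
# maximum, matching the original Stable Diffusion training setup.
inputs = tokenizer(
    ["a photo of an astronaut riding a horse"],
    max_length=tokenizer.model_max_length,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)
```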
* fix collate_fn
* remove unused code
* don't prepare ema_unet, don't register lr scheduler
* style
* assert => ValueError
* add allow_tf32
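What the allow_tf32 option amounts to:

```python
import torch

# TF32 matmuls are disabled by default since PyTorch 1.12; opting in trades
# a small amount of precision for a large speedup on Ampere and newer GPUs.
torch.backends.cuda.matmul.allow_tf32 = True
```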
* set log level
* fix comment
* [Unclip] Make sure latents can be reused
* allow one to directly pass embeddings
* up
* make unclip work for text
* finish allowing to pass embeddings
* correct more
* make style
* move files a bit
* more refactors
* fix more
* more fixes
* fix more onnx
* make style
* upload
* fix
* up
* fix more
* up again
* up
* small fix
* Update src/diffusers/__init__.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* correct
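A hedged sketch of the latents-reuse idea using the generic pipeline API (shown with Stable Diffusion for brevity; the UnCLIP-specific arguments differ):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
generator = torch.Generator("cpu").manual_seed(0)
# Draw the initial noise once and hand the same tensor to several calls,
# so only the prompt changes between generations.
latents = torch.randn((1, pipe.unet.config.in_channels, 64, 64), generator=generator)
image_a = pipe("a photo of a cat", latents=latents).images[0]
image_b = pipe("an oil painting of a cat", latents=latents).images[0]
```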
* initial
* type hints
* update scheduler type hint
* add to README
* add example generation to README
* v -> mix_factor
* load scheduler from pretrained
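Loading only the scheduler from a pipeline repository, for illustration:

```python
from diffusers import DDIMScheduler

# Pull just the scheduler configuration out of a full pipeline repo.
scheduler = DDIMScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)
```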
* Make xformers optional even if it is available
* Raise exception if xformers is used but not available
* Rename use_xformers to enable_xformers_memory_efficient_attention
* Add a note about xformers in README
* Reformat code style
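Usage after the rename (now opt-in, with an explicit failure when xformers is missing):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to("cuda")
# Opt-in only; raises a clear error if xformers is not actually installed.
pipe.enable_xformers_memory_efficient_attention()
```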
* Make safety_checker optional in more pipelines.
* Remove inappropriate comment in inpaint pipeline.
* inpaint test: set feature_extractor to None.
* Remove import
* img2img test: set feature_extractor to None.
* inpaint sd2 test: set feature_extractor to None.
Co-authored-by: Suraj Patil <surajp815@gmail.com>
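Illustration of the now-optional components (the inpainting checkpoint is shown as one example):

```python
from diffusers import StableDiffusionInpaintPipeline

# Both components can be skipped explicitly; diffusers logs a warning
# reminding you of the usage conditions in the model license.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    safety_checker=None,
    feature_extractor=None,
)
```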
* first proposal
* rename
* up
* Apply suggestions from code review
* better
* up
* finish
* up
* rename
* correct versatile
* up
* up
* up
* up
* fix
* Apply suggestions from code review
* make style
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add error message
* use repeat_interleave
* fix repeat
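Why repeat_interleave matters here, in miniature:

```python
import torch

emb = torch.randn(2, 77, 768)          # (prompts, tokens, dim)
# repeat_interleave keeps copies of each prompt adjacent ([a, a, b, b]),
# unlike Tensor.repeat's interleaved order ([a, b, a, b]), which matters
# when generating several images per prompt.
emb = emb.repeat_interleave(2, dim=0)  # -> (4, 77, 768)
```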
* Trigger Build
* don't install accelerate from main
* install released accelerate for mps test
* Remove additional accelerate installation from main.
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Section header for in-painting, inference from checkpoint.
* Inference: link to section to perform inference from checkpoint.
* Move Dreambooth in-painting instructions to the proper place.
* [Flax] Stateless schedulers, fixes and refactors
* Remove scheduling_common_flax and apply some renames
* Update src/diffusers/schedulers/scheduling_pndm_flax.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
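Rough shape of the stateless API after the refactor (a sketch; exact arguments vary per scheduler):

```python
from diffusers import FlaxDDIMScheduler

scheduler = FlaxDDIMScheduler()
# All mutable values (timesteps, counters) live in an explicit state object;
# the scheduler itself holds only static configuration, which keeps the
# step functions pure and jit-friendly.
state = scheduler.create_state()
state = scheduler.set_timesteps(state, num_inference_steps=50, shape=(1, 4, 64, 64))
```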
* expose polynomial:power and cosine_with_restarts:num_cycles via the get_scheduler function, and add them to train_dreambooth.py
* fix formatting
* fix style
* Update src/diffusers/optimization.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
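Example of the newly exposed knobs (the model and optimizer are stand-ins for illustration):

```python
import torch
from diffusers.optimization import get_scheduler

model = torch.nn.Linear(4, 4)  # stand-in model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# num_cycles (cosine_with_restarts) and power (polynomial) are now plain
# keyword arguments of get_scheduler instead of hard-coded defaults.
lr_scheduler = get_scheduler(
    "cosine_with_restarts",
    optimizer=optimizer,
    num_warmup_steps=500,
    num_training_steps=10_000,
    num_cycles=3,
)
```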