* Increase min accelerate version to avoid OOM when using mixed precision
* Rm re-instantiation of VAE
* Rm casting to float32
* Del unused models and free GPU
* Fix style
* Update textual_inversion.py
fixed safe_path bug in textual inversion training
* Update test_examples.py
update test_textual_inversion to account for the updated saved file name
* Update textual_inversion.py
fixed some formatting issues
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* empty PR
* init
* changes
* starting with the pipeline
* stable diff
* prev
* more things, getting started
* more functions
* making it more readable
* almost done testing
* var changes
* testing
* device
* device support
* maybe
* device malfunctions
* new new
* register
* testing
* exec does not work
* float
* change info
* change of architecture
* might work
* testing with colab
* more attn stuff
* stupid additions
* documenting and testing
* writing tests
* more docs
* tests and docs
* remove test
* change cross attention
* revert back
* tests
* reverting back to orig
* changes
* test passing
* pipeline changes
* before quality
* quality checks pass
* remove print statements
* doc fixes
* __init__ error something
* update docs, working on dim
* working on encoding
* doc fix
* more fixes
* no longer dependent on 512x512
* update docs
* fixes
* test passing
* remove comment
* fixes and migration
* simpler tests
* doc changes
* green CI
* changes
* more docs
* changes
* new images
* to community examples
* delete
* more fixes
* changes
* fix
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update loaders.py
Fixes an error sometimes raised while iterating over state_dict.keys(), caused by calling the .pop() method on the dict inside the loop (see the sketch below).
* Update loaders.py
* debugging
* better logic for filtering.
* Update src/diffusers/loaders.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
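A minimal sketch of the bug pattern and the fix, assuming illustrative key names rather than the actual loaders.py contents: popping entries while iterating over the live keys() view raises a RuntimeError, so iterating over a snapshot of the keys avoids it.

```python
# Hypothetical state_dict; the key names are illustrative, not real diffusers keys.
state_dict = {"unet.attn.lora_up.weight": 1.0, "unet.attn.lora_down.weight": 2.0}

# Buggy pattern: mutating the dict while iterating its live keys() view.
# for key in state_dict.keys():
#     if "lora" in key:
#         state_dict.pop(key)  # RuntimeError: dictionary changed size during iteration

# Fixed pattern: snapshot the keys first, then pop freely.
for key in list(state_dict.keys()):
    if "lora" in key:
        state_dict.pop(key)
```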
* dreambooth training
* train_dreambooth validation scheduler
* set a particular scheduler via a string
* modify readme after setting a particular scheduler via a string
* modify readme after setting a particular scheduler
* use importlib to set a particular scheduler (see the sketch below)
* import with correct sort
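A hedged sketch of resolving a scheduler class from a string with importlib; the helper name, scheduler choice, and model id below are assumptions for illustration, not the exact train_dreambooth.py code.

```python
import importlib

from diffusers import SchedulerMixin


def resolve_scheduler_class(name: str) -> type[SchedulerMixin]:
    # Look up the class by name in the diffusers namespace,
    # e.g. "DDPMScheduler" or "DPMSolverMultistepScheduler".
    return getattr(importlib.import_module("diffusers"), name)


# Illustrative usage: load the chosen scheduler's config from a pretrained model repo.
scheduler_cls = resolve_scheduler_class("DPMSolverMultistepScheduler")
scheduler = scheduler_cls.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="scheduler"
)
```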
* Fix AutoencoderTiny encoder scaling convention
* Add [-1, 1] -> [0, 1] rescaling to EncoderTiny
* Move [0, 1] -> [-1, 1] rescaling from AutoencoderTiny.decode to DecoderTiny
(i.e. immediately after the final conv, as early as possible)
* Fix missing [0, 255] -> [0, 1] rescaling in AutoencoderTiny.forward
* Update AutoencoderTinyIntegrationTests to protect against scaling issues.
The new test constructs a simple image, round-trips it through AutoencoderTiny,
and confirms the decoded result is approximately equal to the source image.
This test checks behavior with and without tiling enabled.
This test will fail if new AutoencoderTiny scaling issues are introduced.
* Context: Raw TAESD weights expect images in [0, 1], but diffusers'
convention represents images with zero-centered values in [-1, 1],
so AutoencoderTiny needs to scale / unscale images at the start of
encoding and at the end of decoding in order to work with diffusers (see the sketch below).
* Re-add existing AutoencoderTiny test, update golden values
* Add comments to AutoencoderTiny.forward
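An illustrative round-trip sketch of the convention described above, not the actual AutoencoderTinyIntegrationTests code; the test image, size, and printed metric are assumptions.

```python
import torch
from diffusers import AutoencoderTiny

vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float32)

# Simple smooth test image in the diffusers convention, i.e. zero-centered values in [-1, 1].
ramp = torch.linspace(-1.0, 1.0, 256)
image = ramp.view(1, 1, 1, 256).expand(1, 3, 256, 256).contiguous()

with torch.no_grad():
    latents = vae.encode(image).latents   # EncoderTiny maps [-1, 1] -> [0, 1] internally
    decoded = vae.decode(latents).sample  # DecoderTiny maps [0, 1] -> [-1, 1] internally

# The round trip is lossy, but for a simple image the decoded output should stay close
# to the source; a large systematic offset would indicate a scaling regression.
print((decoded.clamp(-1, 1) - image).abs().mean())
```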