* update IF stage I pipelines
add fixed-variance schedulers and LoRA loading
* added kv lora attn processor
* allow loading into alternative lora attn processor
* make vae optional
* throw away predicted variance
* allow loading into added kv lora layer
* allow loading T5
* allow pre-computing text embeddings (see the sketch below)
* set new variance type in schedulers
* fix copies
* refactor all prompt embedding code
class prompts are now included in pre-encoding code
max tokenizer length is now configurable
embedding attention mask is now configurable
* fix for when variance type is not defined on scheduler
* do not pre-compute validation prompt embeddings if no validation prompt is present
* add example test for if lora dreambooth
* add check that --train_text_encoder is not combined with pre-computed text embeddings
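A minimal sketch of the pre-computed-embedding flow this enables, assuming the DeepFloyd IF stage I checkpoint (the prompt is illustrative):

```python
import torch
from diffusers import IFPipeline

# Encode the prompt once with the (large) T5 encoder.
pipe = IFPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
prompt_embeds, negative_embeds = pipe.encode_prompt("a photo of sks dog")

# Free T5 and reload the pipeline without it (the text encoder is now optional).
del pipe
torch.cuda.empty_cache()
pipe = IFPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds).images[0]
```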
* Batched load of textual inversions
- Only call resize_token_embeddings once per batch as it is the most expensive operation
- Allow pretrained_model_name_or_path and token to each be an optional list (usage sketch below)
- Remove Dict from the type annotation of pretrained_model_name_or_path, as it was never supported by this function
- Add comment that single files (e.g. .pt/.safetensors) are supported
- Add comment for token parameter
- Convert token override log message from warning to info
* Update src/diffusers/loaders.py
Check for duplicate tokens
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update condition for None tokens
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
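A usage sketch of the batched call described above; the repo id, file path, and tokens are placeholders:

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# One batched call; resize_token_embeddings runs only once for all embeddings.
# Hub repos and single files (.pt/.safetensors) can be mixed in the list.
pipe.load_textual_inversion(
    ["sd-concepts-library/cat-toy", "./learned_embeds.safetensors"],
    token=["<cat-toy>", "<my-style>"],
)
```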
* fix multistep dpmsolver for cosine schedule (deepfloyd-if)
* fix a typo
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update all DPM-Solver variants (singlestep, multistep, DPM, DPM++) for the cosine noise schedule
* add test, fix style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
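A sketch of swapping the fixed solver onto a cosine-schedule model; DeepFloyd IF (squaredcos_cap_v2 betas) is the motivating case:

```python
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
# from_config() picks up the cosine beta schedule the solvers now support.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```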
* Fix more torch compile breaks
* add tests
* Fix all
* fix controlnet
* fix more
* Add Horace He as co-author.
Co-authored-by: Horace He <horacehe2007@yahoo.com>
---------
Co-authored-by: Horace He <horacehe2007@yahoo.com>
* fix more
* Fix more
* fix more
* Apply suggestions from code review
* fix
* make style
* make fix-copies
* fix
* make sure torch.compile works
* Clean
* fix test
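The pattern these fixes target, roughly as documented; model id and prompt are illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# Compile the UNet; the fixes above remove the remaining graph breaks.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```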
* add constant LR schedule with rules
* add constant-with-rules entry to TYPE_TO_SCHEDULER_FUNCTION
* add constant LR rate with rules
* hotfix code quality
* fix doc style
* rename constant_with_rules to piecewise_constant
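Not the diffusers implementation, just a plain-PyTorch illustration of what a piecewise constant LR rule means (boundaries and multipliers are arbitrary):

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

def piecewise_constant(step: int) -> float:
    # LR multiplier: 1.0 for the first 10 steps, 0.1 until step 20, then 0.01.
    if step < 10:
        return 1.0
    if step < 20:
        return 0.1
    return 0.01

model = torch.nn.Linear(4, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scheduler = LambdaLR(optimizer, lr_lambda=piecewise_constant)
```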
* Update Pix2PixZero Auto-correlation Loss
* Add Stable Diffusion DiffEdit pipeline
* Add draft documentation and import code
* Bugfixes and refactoring
* Add option to not decode latents in the inversion process
* Harmonize preprocessing
* Revert "Update Pix2PixZero Auto-correlation Loss"
This reverts commit b218062fed.
* Update annotations
* rename `compute_mask` to `generate_mask`
* Update documentation
* Update docs
* Update Docs
* Fix copy
* Change shape of output latents to batch first
* Update docs
* Add first draft for tests
* Bugfix and update tests
* Add `cross_attention_kwargs` support for all pipeline methods
* Fix Copies
* Add support for PIL image latents
Add support for mask broadcasting
Update docs and tests
Align `mask` argument to `mask_image`
Remove height and width arguments
* Enable MPS Tests
* Move example docstrings
* Fix test
* Fix test
* fix pipeline inheritance
* Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline
* Register modules set to `None` in config for `test_save_load_optional_components`
* Move fixed logic to specific test class
* Clean changes to other pipelines
* Update new tests to coordinate with #2953
* Update slow tests for better results
* Add a safeguard to avoid potential problems with torch.inference_mode
* Add reference in SD Pipeline Overview
* Fix tests again
* Enforce determinism in noise for generate_mask
* Fix copies
* Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16`
* Add LoraLoaderMixin and update `prepare_image_latents`
* clean up repeat and reg
* bugfix
* Remove invalid args from docs
Suppress a spurious warning by repeating the image before latent computation in mask generation
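A condensed sketch of the three-step DiffEdit flow (mask generation, inversion, masked generation); the image URL and prompts are placeholders:

```python
import torch
from diffusers import StableDiffusionDiffEditPipeline, DDIMScheduler, DDIMInverseScheduler
from diffusers.utils import load_image

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

init_image = load_image("https://example.com/fruit-bowl.png").resize((768, 768))
mask = pipe.generate_mask(image=init_image, source_prompt="a bowl of fruits", target_prompt="a basket of pears")
inv_latents = pipe.invert(prompt="a bowl of fruits", image=init_image).latents
image = pipe(prompt="a basket of pears", mask_image=mask, image_latents=inv_latents).images[0]
```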
* 👽 qol improvements for LoRA.
* better function name?
* fix: LoRA weight loading with the new format.
* address Patrick's comments.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* change wording around encouraging the use of load_lora_weights().
* fix: function name.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
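The encouraged entry point after these changes, sketched; the checkpoint path is a placeholder:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
# load_lora_weights() handles UNet-only checkpoints as well as the new format
# that also carries text encoder LoRA layers.
pipe.load_lora_weights("path/to/lora")
```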
* remove unnecessary parameters from the get_up_block and get_down_block functions
* add resnet_skip_time_act, resnet_out_scale_factor and cross_attention_norm to the get_up_block and get_down_block functions
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* add
* clean
* up
* clean up more
* fix more tests
* Improve docs further
* improve
* more docs fixes
* Improve docs more
* Update src/diffusers/models/unet_2d_condition.py
* fix
* up
* update doc links
* make fix-copies
* add safety checker and watermarker to stage 3 doc page code snippets
* speed optimizations docs
* memory optimization docs
* make style
* add watermarking snippets to doc string examples
* make style
* use pt_to_pil helper functions in doc strings
* skip mps tests
* Improve safety
* make style
* new logic
* fix
* fix bad onnx design
* make new stable diffusion upscale pipeline model arguments optional
* define has_nsfw_concept for non-PIL output types
* lowercase the linked-to notebook name
---------
Co-authored-by: William Berman <WLBberman@gmail.com>
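A stage I snippet in the style of the updated docs, showing encode_prompt and the pt_to_pil helper (prompt is illustrative):

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import pt_to_pil

stage_1 = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16)
stage_1.enable_model_cpu_offload()

prompt_embeds, negative_embeds = stage_1.encode_prompt("a photo of a kangaroo wearing an orange hoodie")
image = stage_1(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt").images
pt_to_pil(image)[0].save("if_stage_I.png")
```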
When the token used for textual inversion does not have any special symbols (e.g. it is not surrounded by <>), the tokenizer does not properly split the replacement tokens. Adding a space for the padding tokens fixes this.
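A hypothetical reconstruction of the fix, to make the failure mode concrete (names are illustrative, not the loader's actual code):

```python
# For a plain token like "cat" loaded with 3 vectors, the replacement text
# must be "cat cat_1 cat_2"; without the added spaces the tokenizer sees
# "catcat_1cat_2" and cannot split it back into the placeholder tokens.
token, num_vectors = "cat", 3
replacement = token + "".join(f" {token}_{i}" for i in range(1, num_vectors))
assert replacement == "cat cat_1 cat_2"
```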
* Add karras pattern to discrete heun scheduler
* Add integration test
* Fix failing PyTorch test CI on M1 (mps)
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
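Opting in, sketched against the scheduler's config flag:

```python
from diffusers import StableDiffusionPipeline, HeunDiscreteScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# use_karras_sigmas switches the Heun scheduler to the Karras sigma spacing.
pipe.scheduler = HeunDiscreteScheduler.from_config(pipe.scheduler.config, use_karras_sigmas=True)
```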
* Update Pix2PixZero Auto-correlation Loss
* Add fast inversion tests
* Clarify purpose and mark as deprecated
Fix inversion prompt broadcasting
* Register modules set to `None` in config for `test_save_load_optional_components`
* Update new tests to coordinate with #2953
* Modified AltDiffusion pipeline to support altdiffusion-m18
---------
Co-authored-by: root <fulong_ye@163.com>