* Update Pix2PixZero Auto-correlation Loss
* Add Stable Diffusion DiffEdit pipeline (usage sketch after this block)
* Add draft documentation and import code
* Bugfixes and refactoring
* Add option to not decode latents in the inversion process
* Harmonize preprocessing
* Revert "Update Pix2PixZero Auto-correlation Loss"
This reverts commit b218062fed.
* Update annotations
* rename `compute_mask` to `generate_mask`
* Update documentation
* Update docs
* Update docs
* Fix copy
* Change shape of output latents to batch first
* Update docs
* Add first draft for tests
* Bugfix and update tests
* Add `cross_attention_kwargs` support for all pipeline methods
* Fix copies
* Add support for PIL image latents
Add support for mask broadcasting
Update docs and tests
Align `mask` argument to `mask_image`
Remove height and width arguments
* Enable MPS Tests
* Move example docstrings
* Fix test
* Fix test
* fix pipeline inheritance
* Harmonize `prepare_image_latents` with StableDiffusionPix2PixZeroPipeline
* Register modules set to `None` in config for `test_save_load_optional_components`
* Move fixed logic to specific test class
* Clean changes to other pipelines
* Update new tests to coordinate with #2953
* Update slow tests for better results
* Safety to avoid potential problems with torch.inference_mode
* Add reference in SD Pipeline Overview
* Fix tests again
* Enforce determinism in noise for generate_mask
* Fix copies
* Widen test tolerance for fp16 based on `test_stable_diffusion_upscale_pipeline_fp16`
* Add LoraLoaderMixin and update `prepare_image_latents`
* clean up repeat and reg
* bugfix
* Remove invalid args from docs
Suppress spurious warning by repeating image before latent to mask gen
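
The commits above add and refine the DiffEdit pipeline (mask generation, latent inversion, and the `mask_image` argument). As a rough orientation, a usage sketch along the lines of the pipeline documentation might look as follows; the base checkpoint, prompts, and image URL are illustrative assumptions, not part of the commit history.

```python
import torch
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline
from diffusers.utils import load_image

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16  # assumed base checkpoint
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

init_image = load_image("https://example.com/fruit_bowl.png").resize((768, 768))  # placeholder image

# generate_mask (renamed from compute_mask) contrasts a source and a target prompt.
mask_image = pipe.generate_mask(
    image=init_image, source_prompt="a bowl of fruits", target_prompt="a basket of fruits"
)
# invert() runs the inversion process and returns latents (decoding can be skipped).
image_latents = pipe.invert(image=init_image, prompt="a bowl of fruits").latents
# The final call takes the mask via mask_image together with the inverted latents.
image = pipe(
    prompt="a basket of fruits", mask_image=mask_image, image_latents=image_latents
).images[0]
```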
* 👽 QoL improvements for LoRA.
* better function name?
* fix: LoRA weight loading with the new format.
* address Patrick's comments.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* change wording around encouraging the use of `load_lora_weights()` (illustrated after this block).
* fix: function name.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
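
For reference, the recommended `load_lora_weights()` entry point mentioned above can be used roughly as below; the base model id and the LoRA path are assumptions for illustration.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed base model
).to("cuda")
# load_lora_weights() understands the new serialization format (UNet + text encoder
# layers) as well as older UNet-only attention-processor checkpoints.
pipe.load_lora_weights("path/to/lora_weights")  # placeholder local path or Hub repo id
image = pipe("a photo of a dog in a bucket", num_inference_steps=25).images[0]
```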
* [docs] add notes for stateful model changes (sketch after this block)
* Update docs/source/en/optimization/fp16.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* link to accelerate docs for discarding hooks
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
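
The stateful-model note above concerns offloading: once CPU offload is enabled, accelerate hooks are attached to the pipeline's models, so the pipeline should not simply be moved with `.to()` afterwards. A minimal sketch, assuming the standard offloading API and an illustrative checkpoint:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed checkpoint
)
pipe.enable_model_cpu_offload()  # installs accelerate hooks; the pipeline is now stateful
# To reset the models, discard the hooks first; see the accelerate documentation
# (e.g. remove_hook_from_module) linked from the fp16 docs.
```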
* add
* clean
* up
* clean up more
* fix more tests
* Improve docs further
* improve
* more docs fixes
* Improve docs more
* Update src/diffusers/models/unet_2d_condition.py
* fix
* up
* update doc links
* make fix-copies
* add safety checker and watermarker to stage 3 doc page code snippets
* speed optimizations docs
* memory optimization docs
* make style
* add watermarking snippets to doc string examples
* make style
* use `pt_to_pil` helper functions in doc strings (snippet after this block)
* skip mps tests
* Improve safety
* make style
* new logic
* fix
* fix bad ONNX design
* make new stable diffusion upscale pipeline model arguments optional
* define `has_nsfw_concept` for non-PIL output types
* lowercase the linked notebook name
---------
Co-authored-by: William Berman <WLBberman@gmail.com>
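
As a pointer for the `pt_to_pil` commit above: the helper converts tensors returned with `output_type="pt"` into PIL images. A sketch along the lines of the stage-1 doc strings, with the checkpoint and prompt as illustrative assumptions:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import pt_to_pil

stage_1 = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16  # assumed checkpoint
)
stage_1.enable_model_cpu_offload()

prompt_embeds, negative_embeds = stage_1.encode_prompt("a photo of a corgi")
image = stage_1(
    prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds, output_type="pt"
).images
# pt_to_pil turns the torch tensor batch into a list of PIL images.
pt_to_pil(image)[0].save("if_stage_I.png")
```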
* add: LoRA text encoder support for DreamBooth example.
* fix initialization.
* fix: modification call.
* add: entry in the readme.
* use dog dataset from hub.
* fix: params to clip.
* add entry to the LoRA doc.
* add: tests for LoRA.
* remove unnecessary list comprehension.
* add mixin class for pipeline from original SD ckpt (sketch after this block)
* Improve
* make style
* merge main into
* Improve more
* fix more
* up
* Apply suggestions from code review
* finish docs
* rename
* make style
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
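
The mixin added above loads a pipeline directly from an original Stable Diffusion checkpoint file. A minimal sketch, assuming the mixin exposes a `from_ckpt` classmethod and using a placeholder path:

```python
from diffusers import StableDiffusionPipeline

# Build a diffusers pipeline straight from a single original .ckpt/.safetensors file.
pipe = StableDiffusionPipeline.from_ckpt("path/to/original_sd_checkpoint.ckpt")
```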
* add guess mode (WIP; usage example after this block)
* fix uncond/cond order
* support guidance_scale=1.0 and batch != 1
* remove magic coeff
* add docstring
* add integration test
* add documentation to controlnet.mdx
* made the comments a bit more explanatory
* fix table
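
The guess-mode commits above let ControlNet condition on the control image even without a prompt and work together with `guidance_scale=1.0` and batched inputs. A usage sketch; the checkpoints and control-image URL are illustrative assumptions:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16  # assumed ControlNet
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

canny_image = load_image("https://example.com/canny_edges.png")  # placeholder control image
# guess_mode=True: the ControlNet tries to recognize the content of the control image
# even with an empty prompt; a low guidance scale is typically used alongside it.
image = pipe("", image=canny_image, guess_mode=True, guidance_scale=3.0).images[0]
```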
* add: first draft for a better LoRA enabler.
* make fix-copies.
* feat: backward compatibility.
* add: entry to the docs.
* add: tests.
* fix: docs.
* fix: norm group test for UNet3D.
* feat: add support for flat dicts.
* add deprecation message instead of warning.
* [Config] Fix config prints and save, load
* Only use potential nn.Modules for dtype and device
* Correct vae image processor
* make sure in_channels is not accessed directly
* make sure in_channels is only accessed via config (snippet after this block)
* Make sure schedulers only access config attributes
* Make sure to access config in SAG
* Fix vae processor and make style
* add tests
* up
* make style
* Fix more naming issues
* Final fix with vae config
* change more
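
The config-access commits above enforce that model hyperparameters are read through `.config` rather than as direct attributes. A tiny illustration (the checkpoint id is an assumption):

```python
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")
num_channels = unet.config.in_channels  # preferred: read hyperparameters via the config
# Direct attribute access such as unet.in_channels is what these commits phase out.
```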
* add TextToVideoZeroPipeline and CrossFrameAttnProcessor (usage sketch after this block)
* add docs for text-to-video zero
* add teaser image for text-to-video zero docs
* Address review changes. Add documentation. Add test
* clean up the code in pipeline_text_to_video.py. Add descriptive comments and docstrings
* make style && make quality
* make fix-copies
* make requested changes to docs. Use Hugging Face server links for resources; delete the res folder
* make style && make quality && make fix-copies
* make style && make quality
* Apply suggestions from code review
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
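
Finally, a usage sketch for the zero-shot text-to-video pipeline added above; the base checkpoint and the imageio-based export follow common usage and are assumptions here:

```python
import imageio
import torch
from diffusers import TextToVideoZeroPipeline

pipe = TextToVideoZeroPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed base checkpoint
).to("cuda")

# CrossFrameAttnProcessor is wired in internally to keep the generated frames consistent.
frames = pipe(prompt="A panda is playing guitar on times square").images
frames = [(frame * 255).astype("uint8") for frame in frames]
imageio.mimsave("video.mp4", frames, fps=4)
```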