* enable deterministic pytorch and cuda operations.
* disable manual seeding.
* make style && make quality for unet_2d tests.
* enable determinism for the unet2dconditional model.
* add CUBLAS_WORKSPACE_CONFIG for better reproducibility.
* relax tolerance (very weird issue, though).
* revert to torch manual_seed() where needed.
* relax more tolerance.
* better placement of the cuda variable and relax more tolerance.
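The determinism changes above amount to a small amount of global setup. A minimal sketch of the settings involved (the exact placement inside the test utilities is what these commits iterate on; the calls themselves are plain PyTorch, and the TF32 line corresponds to the later "disallow tf32 matmul" commit):

```python
import os

# Must be exported before the first cuBLAS call for deterministic GEMMs.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

import torch

# Fail on ops that have no deterministic implementation.
torch.use_deterministic_algorithms(True)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# Disallow TF32 matmuls so CUDA results line up with CPU references.
torch.backends.cuda.matmul.allow_tf32 = False

# Seed explicitly where a fixed generator is still needed.
torch.manual_seed(0)
```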
* enable determinism for 3d condition model.
* relax tolerance.
* add: determinism to alt_diffusion.
* relax tolerance for alt diffusion.
* dance diffusion.
* dance diffusion is flaky.
* test_dict_tuple_outputs_equivalent edit.
* fix two more tests.
* fix more ddim tests.
* fix: argument.
* change to diff in place of difference.
* fix: test_save_load call.
* test_save_load_float16 call.
* fix: expected_max_diff
* fix: paint by example.
* relax tolerance.
* add determinism to 1d unet model.
* torch 2.0 regressions seem to be brutal
* determinism to vae.
* add reason to skipping.
* up tolerance.
* determinism to vq.
* determinism to cuda.
* determinism to the generic test pipeline file.
* refactor general pipelines testing a bit.
* determinism to alt diffusion i2i
* up tolerance for alt diff i2i and audio diff
* up tolerance.
* determinism to audioldm
* increase tolerance for audioldm lms.
* increase tolerance for paint by example.
* increase tolerance for repaint.
* determinism to cycle diffusion and sd 1.
* relax tol for cycle diffusion 🚲
* relax tol for sd 1.0
* relax tol for controlnet.
* determinism to img var.
* relax tol for img variation.
* tolerance to i2i sd
* make style
* determinism to inpaint.
* relax tolerance for inpainting.
* determinism for inpainting legacy
* relax tolerance.
* determinism to instruct pix2pix
* determinism to model editing.
* model editing tolerance.
* panorama determinism
* determinism to pix2pix zero.
* determinism to sag.
* sd 2. determinism
* sd. tolerance
* disallow tf32 matmul.
* relax tolerance is all you need.
* make style and determinism to sd 2 depth
* relax tolerance for depth.
* tolerance to diffedit.
* tolerance to sd 2 inpaint.
* up tolerance.
* determinism in upscaling.
* tolerance in upscaler.
* more tolerance relaxation.
* determinism to v pred.
* up tol for v_pred
* unclip determinism
* determinism to unclip img2img
* determinism to text to video.
* determinism to last set of tests
* up tol.
* vq cumsum doesn't have a deterministic kernel
* relax tol
* relax tol
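As the `vq cumsum` commit notes, `cumsum` has no deterministic CUDA kernel, so those tests fall back to relaxed tolerances. For reference, PyTorch can also downgrade the hard error to a warning for such ops; this is an alternative, not what the commits above do:

```python
import torch

# Ops without a deterministic CUDA implementation (e.g. cumsum) emit a
# warning instead of raising a RuntimeError when warn_only=True is set.
torch.use_deterministic_algorithms(True, warn_only=True)
```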
* add inferring_controlnet_cond_batch
* Revert "add inferring_controlnet_cond_batch"
This reverts commit abe8d6311d.
* set guess_mode to True
whenever global_pool_conditions is True
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* nit
* add integration test
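The `guess_mode` change boils down to forcing guess mode whenever the loaded ControlNet was trained with globally pooled conditioning. A rough sketch of the logic (a hypothetical helper; names are taken from the commit message, not copied from the diff):

```python
def resolve_guess_mode(guess_mode: bool, controlnet) -> bool:
    # Hypothetical helper: if the ControlNet was configured with
    # global_pool_conditions=True, guess mode is forced on regardless of
    # what the caller requested.
    return guess_mode or controlnet.config.global_pool_conditions
```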
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* StableDiffusionInpaintingPipeline now resizes input images and masks w.r.t. the passed input height and width (the default is already 512). This addresses the common tensor-mismatch error. Also moved the type check into the relevant function to keep the main pipeline body tidy.
* Fixed StableDiffusionInpaintingPrepareMaskAndMaskedImageTests
Due to the previous commit these tests were failing, as height and width now need to be passed into the prepare_mask_and_masked_image function. I have updated the code and added height/width variables per unit test, as that seemed more appropriate than the previous hard-coded solution.
* Added a resolution test to StableDiffusionInpaintPipelineSlowTests
This unit test simply takes the input and resizes it to a size that would previously fail (e.g. throw a tensor-mismatch error / not a multiple of 8), then passes it through the pipeline and verifies that the output has the correct dimensions w.r.t. the passed height and width.
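A rough sketch of the resizing behaviour described above, showing only the PIL path (the real prepare_mask_and_masked_image also accepts numpy arrays and tensors; the normalisation details here follow the usual Stable Diffusion conventions rather than the exact diff):

```python
import numpy as np
import PIL.Image
import torch


def prepare_mask_and_masked_image(image, mask, height=512, width=512):
    # Resize both inputs to the requested resolution so downstream tensors
    # always have matching shapes (and sides divisible by 8).
    image = image.resize((width, height), resample=PIL.Image.LANCZOS)
    mask = mask.resize((width, height), resample=PIL.Image.NEAREST)

    image = np.array(image.convert("RGB")).astype(np.float32) / 127.5 - 1.0
    image = torch.from_numpy(image).permute(2, 0, 1).unsqueeze(0)

    mask = np.array(mask.convert("L")).astype(np.float32) / 255.0
    mask = torch.from_numpy(mask)[None, None]
    mask = (mask >= 0.5).to(image.dtype)

    # Black out the masked region of the image.
    masked_image = image * (mask < 0.5)
    return mask, masked_image
```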
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add: a warning message when using xformers in a PT 2.0 env.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
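The warning itself is a simple availability check; a minimal sketch (the exact message and where it is emitted are this PR's detail, and detecting PT 2.0 via scaled_dot_product_attention is just one common way to do it):

```python
import torch
from diffusers.utils import logging

logger = logging.get_logger(__name__)

if hasattr(torch.nn.functional, "scaled_dot_product_attention"):
    # PyTorch 2.0 ships fused attention natively, so xformers is usually
    # redundant there and can even be slower.
    logger.warning(
        "You are enabling xformers on PyTorch 2.0+. Consider using the native "
        "torch.nn.functional.scaled_dot_product_attention instead."
    )
```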
* update IF stage I pipelines
add fixed variance schedulers and lora loading
* added kv lora attn processor
* allow loading into alternative lora attn processor
* make vae optional
* throw away predicted variance
* allow loading into added kv lora layer
* allow load T5
* allow pre-computing text embeddings
* set new variance type in schedulers
* fix copies
* refactor all prompt embedding code
class prompts are now included in pre-encoding code
max tokenizer length is now configurable
embedding attention mask is now configurable
* fix for when variance type is not defined on scheduler
* do not pre-compute validation prompt if not present
* add example test for if lora dreambooth
* add check for train text encoder and pre-computed text embeddings
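Pre-computing the text embeddings lets the large T5 encoder be dropped before running the rest of the pipeline. A rough usage sketch following the documented DeepFloyd IF pattern (whether the training script exposes exactly this flow behind a pre-compute flag is this PR's detail):

```python
import torch
from diffusers import IFPipeline

# Encode the prompt once while the (large) T5 encoder is still loaded.
pipe = IFPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()
prompt_embeds, negative_embeds = pipe.encode_prompt("a photo of sks dog")

# Reload without the text encoder and feed the pre-computed embeddings directly.
pipe = IFPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()
image = pipe(prompt_embeds=prompt_embeds, negative_prompt_embeds=negative_embeds).images[0]
```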
* Batched load of textual inversions
- Only call resize_token_embeddings once per batch as it is the most expensive operation
- Allow pretrained_model_name_or_path and token to be an optional list
- Remove Dict from the type annotation of pretrained_model_name_or_path as it was not supported in this function
- Add comment that single files (e.g. .pt/.safetensors) are supported
- Add comment for token parameter
- Convert token override log message from warning to info
* Update src/diffusers/loaders.py
Check for duplicate tokens
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update condition for None tokens
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
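With the batched path described above, multiple embeddings can be loaded in one call so resize_token_embeddings runs only once. A usage sketch (the local .safetensors path and token names are placeholders):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Both arguments may now be lists; tokens must be unique and match one-to-one.
pipe.load_textual_inversion(
    ["sd-concepts-library/cat-toy", "./my-embedding.safetensors"],
    token=["<cat-toy>", "<my-style>"],
)
```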
* Set --only_save_embeds to False by default
Due to how the option is named, it makes more sense to behave like this.
* Refactor only_save_embeds to save_as_full_pipeline
* fix multistep dpmsolver for cosine schedule (deepfloyd-if)
* fix a typo
* Update src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* update all dpmsolver (singlestep, multistep, dpm, dpm++) for cosine noise schedule
* add test, fix style
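A minimal sketch of what the cosine-schedule fix enables (DeepFloyd IF is the cosine-schedule model named in the commit; swapping DPM-Solver into stage I like this is just one way to exercise it):

```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("DeepFloyd/IF-I-XL-v1.0")

# DeepFloyd IF uses a cosine (squaredcos_cap_v2) beta schedule with learned
# variance; after this fix, DPM-Solver can be created from that config directly.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```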
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Fix more torch compile breaks
* add tests
* Fix all
* fix controlnet
* fix more
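The compile fixes are exercised by compiling the UNet; a typical usage sketch (the mode and fullgraph flags are the commonly documented ones, not necessarily what the new tests use):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Graph breaks anywhere in the UNet forward would silently fall back to eager;
# fullgraph=True makes such breaks fail loudly instead.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)

image = pipe("an astronaut riding a horse").images[0]
```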
* Add Horace He as co-author.
Co-authored-by: Horace He <horacehe2007@yahoo.com>
---------
Co-authored-by: Horace He <horacehe2007@yahoo.com>