FlaxStableDiffusionSafetyChecker sets main_input_name to "clip_input".
This makes it consistent with StableDiffusionSafetyChecker.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Added get_velocity function to EulerDiscreteScheduler (see the sketch after this list).
* Fix white space on blank lines
* Added copied from statement
* back to the original.
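A minimal sketch of how the new get_velocity might be called as the v-prediction target in a training step (the scheduler config, shapes, and values here are illustrative):

```python
import torch
from diffusers import EulerDiscreteScheduler

scheduler = EulerDiscreteScheduler()  # default config, for illustration only

latents = torch.randn(2, 4, 64, 64)
noise = torch.randn_like(latents)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (2,))

# v-prediction target, mirroring DDPMScheduler.get_velocity
target = scheduler.get_velocity(latents, noise, timesteps)
```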
---------
Co-authored-by: Ruining Li <ruining@robots.ox.ac.uk>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Swap the order of concat and repeat for do_classifier_free_guidance
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Check for latents before calling prepare_latents - sdxlImg2Img
* Added latents check for all the img2img pipelines
* Fixed silly mistake while checking whether latents is None
A new function, compute_dream_and_update_latents, has been added to the
training utilities to support DREAM rectified training in line
with the paper https://arxiv.org/abs/2312.00210. It can be enabled
with an extra argument in the train_text_to_image.py script.
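A minimal sketch of how this might slot into a training step, assuming the usual variables from a text-to-image loop (unet, noise_scheduler, noise, timesteps, noisy_latents, target, encoder_hidden_states) are already in scope; the detail-preservation value is illustrative:

```python
import torch.nn.functional as F

from diffusers.training_utils import compute_dream_and_update_latents

# Adjust the noisy latents and the target with the DREAM rectification before
# computing the usual diffusion loss.
noisy_latents, target = compute_dream_and_update_latents(
    unet,
    noise_scheduler,
    timesteps,
    noise,
    noisy_latents,
    target,
    encoder_hidden_states,
    dream_detail_preservation=1.0,  # illustrative value
)

model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
loss = F.mse_loss(model_pred.float(), target.float(), reduction="mean")
```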
Co-authored-by: Jimmy <39@🇺🇸.com>
* Convert channel order to BGR for the watermark encoder and convert the watermarked BGR images back to RGB. Fixes #6292
* Revert channel order before stacking images to work around the limitation that negative strides are not currently supported
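A minimal sketch of the channel-order round trip and the stride issue being worked around (the watermark encoder itself is replaced here by a contiguous-copy stand-in):

```python
import numpy as np
import torch

image = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)  # RGB

bgr = image[:, :, ::-1]                      # RGB -> BGR for the watermark encoder
watermarked_bgr = np.ascontiguousarray(bgr)  # stand-in for the encoder's output
rgb = watermarked_bgr[:, :, ::-1]            # BGR -> RGB: a view with negative strides

# torch.from_numpy() rejects arrays with negative strides, so copy first.
tensor = torch.from_numpy(rgb.copy())
```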
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Fixed wrong decorator by modifying it to @classmethod.
* Updated the method and its argument.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* get device <-> component mapping when using multiple gpus.
* condition the device_map bits.
* relax condition
* device_map progress.
* device_map enhancement
* some cleaning up and debugging
* Apply suggestions from code review
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* incorporate suggestions from PR.
* remove multi-gpu condition for now.
* guard check the component -> device mapping
* fix: device_memory variable
* dispatching transformers model to have force_hooks=True
* better guarding for transformers device_map
* introduce support for balanced_low_memory and balanced_ultra_low_memory.
* remove device_map patch.
* fix: intermediate variable scoping.
* fix: condition in cpu offload.
* fix: flax class restrictions.
* remove modifications from cpu_offload and model_offload
* incorporate changes.
* add a simple forward pass test
* add: torch_device in get_inputs()
* add: tests
* remove print
* safe-guard to(), model offloading and cpu offloading when balanced is used as a device_map.
* style
* remove .
* safeguard device_map with more checks and remove invalid device_mapping strategies.
* make a class attribute and adjust tests accordingly.
* fix device_map check
* fix test
* adjust comment
* fix: device_map attribute
* fix: dispatching.
* max_memory test for pipeline
* version guard the tests
* fix guard.
* address review feedback.
* reset_device_map method.
* add: test for reset_hf_device_map
* fix a couple things.
* add reset_device_map() in the error message.
* add tests for checking reset_device_map doesn't have unintended consequences.
* fix reset_device_map and offloading tests.
* create _get_final_device_map utility.
* hf_device_map -> _hf_device_map
* add documentation
* add notes suggested by Marc.
* styling.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* move updates within gpu condition.
* other docs related things
* note on ignoring a device not specified in .
* provide a suggestion if device mapping errors out.
* fix: typo.
* _hf_device_map -> hf_device_map
* Empty-Commit
* add: example hf_device_map.
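A minimal sketch of the resulting user-facing flow (the checkpoint name, prompt, and memory budget are illustrative):

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    device_map="balanced",
    max_memory={0: "8GB", 1: "8GB"},  # optional per-GPU budget
)

# Inspect which component landed on which device.
print(pipeline.hf_device_map)

image = pipeline("a photo of an astronaut riding a horse").images[0]

# Undo the mapping before calling .to(), enable_model_cpu_offload(), etc.
pipeline.reset_device_map()
```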
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Allow safety and feature extractor arguments to be passed to convert_from_ckpt
Allows management of safety checker and feature extractor
from outside of the convert ckpt class.
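A minimal sketch of what the call might look like; the keyword names below (safety_checker, feature_extractor) are assumptions based on the description above, and the checkpoint path is illustrative:

```python
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
    download_from_original_stable_diffusion_ckpt,
)

pipe = download_from_original_stable_diffusion_ckpt(
    "path/to/model.safetensors",
    from_safetensors=True,
    load_safety_checker=False,
    safety_checker=None,      # assumed keyword: pass a custom checker or None
    feature_extractor=None,   # assumed keyword: pass a custom extractor or None
)
```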
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* UniPC Multistep add `rescale_betas_zero_snr` (see the sketch after this list)
Same patch as DPM and Euler, with the patched final alpha cumprod.
BF16 doesn't seem to break down, I think because UniPC already upcasts during some
phases? We could still force an upcast since it only
loses ≈ 0.005 it/s for me, but the difference in output is very small. A
better endeavor might be upcasting in step() and removing all the other
upcasts elsewhere?
* UniPC ZSNR UT
* Re-add `rescale_betas_zero_snr` doc (oops)
* UniPC UTs iterate solvers on FP16
It wasn't catching errs on order==3. Might be excessive?
* UniPC Multistep fix tensor dtype/device on order=3
* UniPC UTs Add v_pred to fp16 test iter
For completeness' sake. Probably overkill?
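A minimal sketch of opting into the flag (the checkpoint name is illustrative; this is typically paired with a v-prediction, zero-terminal-SNR model):

```python
from diffusers import DiffusionPipeline, UniPCMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.scheduler = UniPCMultistepScheduler.from_config(
    pipe.scheduler.config, rescale_betas_zero_snr=True
)
```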
* #7529 do not disable autocast for CUDA devices
* Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
* add autocast fix to other training examples
* disable native_amp for dreambooth (sdxl)
* disable native_amp for pix2pix (sdxl)
* remove tests from remaining files
* disable native_amp on huggingface accelerator for every training example that uses it
* convert more usages of autocast to nullcontext, make style fixes (see the sketch after this list)
* make style fixes
* style.
* Empty-Commit
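A minimal sketch of the autocast -> nullcontext pattern, assuming an Accelerate `accelerator` already handles mixed precision for the training step and `pipeline`/`prompt` come from the surrounding validation code:

```python
import torch
from contextlib import nullcontext

# Keep autocast only where it is actually needed; on MPS fall back to a no-op.
if torch.backends.mps.is_available():
    autocast_ctx = nullcontext()
else:
    autocast_ctx = torch.autocast(accelerator.device.type)

with autocast_ctx:
    images = pipeline(prompt, num_inference_steps=25).images
```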
---------
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Add `final_sigma_zero` to UniPCMultistep
Effectively the same trick as DDIM's `set_alpha_to_one` and
DPM's `final_sigmas_type='zero'` (see the sketch after this list).
Currently False by default, but maybe this should be True?
* `final_sigma_zero: bool` -> `final_sigmas_type: str`
Should 1:1 match DPM Multistep now.
* Set `final_sigmas_type='sigma_min'` in UniPC UTs
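A minimal sketch of selecting the new option (the bare constructor call is for illustration only; in practice the value would usually be set via from_config on an existing scheduler config):

```python
from diffusers import UniPCMultistepScheduler

# "zero" pushes the final sigma to 0, matching DPM Multistep's behaviour;
# "sigma_min" keeps the previous behaviour and is what the UTs now set.
scheduler = UniPCMultistepScheduler(final_sigmas_type="zero")
```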