# Diffusers examples with Intel optimizations
This research project is not actively maintained by the diffusers team. For any questions or comments, please make sure to tag @hshen14.
This project provides diffusers examples with Intel optimizations, such as Bfloat16 for training/fine-tuning acceleration and 8-bit integer (INT8) quantization for inference acceleration, on Intel platforms.
## Accelerating the fine-tuning for textual inversion
We accelerate the fine-tuning for textual inversion with Intel Extension for PyTorch (IPEX). The examples enable both single-node and multi-node distributed training with Bfloat16 support on Intel Xeon Scalable processors; a minimal sketch of the underlying pattern follows.
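The sketch below is illustrative, not the actual example script: it shows the generic IPEX Bfloat16 training pattern these examples rely on, with a toy linear model standing in for the text encoder that textual inversion actually fine-tunes, and placeholder hyperparameters.

```python
# Minimal sketch of Bfloat16 training with Intel Extension for PyTorch.
# The linear model, learning rate, and data are placeholders.
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Linear(768, 768)  # stand-in for the module being fine-tuned
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

# ipex.optimize prepacks weights for Xeon and, when given an optimizer,
# returns an optimized (model, optimizer) pair for Bfloat16 training.
model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)

for _ in range(3):
    x = torch.randn(8, 768)  # placeholder batch
    with torch.cpu.amp.autocast(dtype=torch.bfloat16):
        loss = model(x).pow(2).mean()  # placeholder loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

For multi-node runs, the same pattern is typically wrapped in a standard PyTorch distributed launcher; the examples themselves document the exact launch commands.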
## Accelerating the inference for Stable Diffusion using Bfloat16
We start the inference acceleration with Bfloat16 using Intel Extension for PyTorch. The script is designed to run standard Stable Diffusion models in Bfloat16; a sketch of the approach is shown below.
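A hedged sketch of CPU Bfloat16 inference with IPEX, assuming a standard diffusers pipeline; the model id and prompt are illustrative and the actual script may structure this differently.

```python
# Sketch of Bfloat16 Stable Diffusion inference with IPEX on CPU.
# "runwayml/stable-diffusion-v1-5" and the prompt are placeholders.
import torch
import intel_extension_for_pytorch as ipex
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Optimize the compute-heavy submodules for Bfloat16 execution on Xeon.
pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True)
pipe.vae = ipex.optimize(pipe.vae.eval(), dtype=torch.bfloat16, inplace=True)

# Run the denoising loop under CPU Bfloat16 autocast.
with torch.cpu.amp.autocast(dtype=torch.bfloat16), torch.no_grad():
    image = pipe("a photo of an astronaut riding a horse").images[0]
image.save("astronaut.png")
```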
## Accelerating the inference for Stable Diffusion using INT8
Coming soon ...