<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Attention Processor

An attention processor is a class for applying different types of attention mechanisms.
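As a minimal sketch of how a processor is typically swapped in, the snippet below replaces the default attention processors on a UNet with [`AttnProcessor2_0`] via `set_attn_processor`. The checkpoint id is only an example and assumes the weights are available locally or on the Hub.

```py
import torch
from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import AttnProcessor2_0

# Example checkpoint id (assumed to be available); any Stable Diffusion checkpoint works.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet", torch_dtype=torch.float16
)

# Apply the PyTorch 2.0 scaled dot-product attention processor to every attention layer.
unet.set_attn_processor(AttnProcessor2_0())
```
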
## AttnProcessor

[[autodoc]] models.attention_processor.AttnProcessor

## AttnProcessor2_0

[[autodoc]] models.attention_processor.AttnProcessor2_0

## AttnAddedKVProcessor

[[autodoc]] models.attention_processor.AttnAddedKVProcessor

## AttnAddedKVProcessor2_0

[[autodoc]] models.attention_processor.AttnAddedKVProcessor2_0

## CrossFrameAttnProcessor

[[autodoc]] pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor

## CustomDiffusionAttnProcessor

[[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor

## CustomDiffusionAttnProcessor2_0

[[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor2_0

## CustomDiffusionXFormersAttnProcessor

[[autodoc]] models.attention_processor.CustomDiffusionXFormersAttnProcessor

## FusedAttnProcessor2_0

[[autodoc]] models.attention_processor.FusedAttnProcessor2_0

## LoRAAttnAddedKVProcessor

[[autodoc]] models.attention_processor.LoRAAttnAddedKVProcessor

## LoRAXFormersAttnProcessor

[[autodoc]] models.attention_processor.LoRAXFormersAttnProcessor

## SlicedAttnProcessor

[[autodoc]] models.attention_processor.SlicedAttnProcessor

## SlicedAttnAddedKVProcessor

[[autodoc]] models.attention_processor.SlicedAttnAddedKVProcessor

## XFormersAttnProcessor

[[autodoc]] models.attention_processor.XFormersAttnProcessor

## AttnProcessorNPU

[[autodoc]] models.attention_processor.AttnProcessorNPU