# Attention Processor

An attention processor is a class that applies a specific attention mechanism inside a model's attention blocks, such as standard, memory-efficient (xFormers or PyTorch 2.0 scaled dot-product), or sliced attention.
## AttnProcessor

[[autodoc]] models.attention_processor.AttnProcessor
## AttnProcessor2_0

[[autodoc]] models.attention_processor.AttnProcessor2_0
## LoRAAttnProcessor

[[autodoc]] models.attention_processor.LoRAAttnProcessor
## LoRAAttnProcessor2_0

[[autodoc]] models.attention_processor.LoRAAttnProcessor2_0
## CustomDiffusionAttnProcessor

[[autodoc]] models.attention_processor.CustomDiffusionAttnProcessor
## AttnAddedKVProcessor

[[autodoc]] models.attention_processor.AttnAddedKVProcessor
## AttnAddedKVProcessor2_0

[[autodoc]] models.attention_processor.AttnAddedKVProcessor2_0
## LoRAAttnAddedKVProcessor

[[autodoc]] models.attention_processor.LoRAAttnAddedKVProcessor
## XFormersAttnProcessor

[[autodoc]] models.attention_processor.XFormersAttnProcessor
## LoRAXFormersAttnProcessor

[[autodoc]] models.attention_processor.LoRAXFormersAttnProcessor
## CustomDiffusionXFormersAttnProcessor

[[autodoc]] models.attention_processor.CustomDiffusionXFormersAttnProcessor
## SlicedAttnProcessor

[[autodoc]] models.attention_processor.SlicedAttnProcessor
## SlicedAttnAddedKVProcessor

[[autodoc]] models.attention_processor.SlicedAttnAddedKVProcessor