# Transformer2DModel

A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. The [`Transformer2DModel`] accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs.
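
Which branch the model uses is inferred from its configuration: passing `in_channels` (with no `patch_size`) selects the continuous pathway, while passing `num_vector_embeds` selects the discrete one. Below is a minimal sketch, assuming toy hyperparameter values rather than any real checkpoint:

```py
from diffusers import Transformer2DModel

# Toy configurations for illustration only; real checkpoints define their own sizes.
# `in_channels` (with no `patch_size`) configures the continuous branch.
continuous_model = Transformer2DModel(
    num_attention_heads=8,
    attention_head_dim=32,
    in_channels=64,
)

# `num_vector_embeds` configures the discrete branch; `sample_size` is the latent grid size.
discrete_model = Transformer2DModel(
    num_attention_heads=8,
    attention_head_dim=32,
    num_vector_embeds=10,
    sample_size=8,
)
```

In both cases, `num_attention_heads * attention_head_dim` is the inner dimension the Transformer blocks operate on.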

When the input is **continuous**:

1. Project the input and reshape it to `(batch_size, sequence_length, feature_dimension)`.
2. Apply the Transformer blocks in the standard way.
3. Reshape to image (see the sketch after this list).
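
These three steps can be exercised end to end with a randomly initialized model. This is a minimal sketch, assuming the toy sizes from above (`in_channels=64`, a 16x16 latent); the variable names are illustrative:

```py
import torch
from diffusers import Transformer2DModel

# Assumed toy configuration, not taken from a real checkpoint.
model = Transformer2DModel(
    num_attention_heads=8,
    attention_head_dim=32,
    in_channels=64,  # channels of the continuous latent feature map
)

# Continuous input: an image-like latent of shape (batch_size, channels, height, width).
latents = torch.randn(1, 64, 16, 16)

# Internally the model projects and reshapes to (batch_size, sequence_length, feature_dimension),
# applies the Transformer blocks, and reshapes back to an image-like map.
output = model(latents).sample
print(output.shape)  # torch.Size([1, 64, 16, 16]), the same shape as the input
```

Because `out_channels` defaults to `in_channels`, the output feature map has the same shape as the input latent.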

When the input is **discrete**:

<Tip>

It is assumed one of the input classes is the masked latent pixel. The predicted classes of the unnoised image don't contain a prediction for the masked pixel because the unnoised image cannot be masked.

</Tip>

1. Convert input (classes of latent pixels) to embeddings and apply positional embeddings.
2. Apply the Transformer blocks in the standard way.
3. Predict classes of unnoised image (see the sketch after this list).
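
A matching sketch for the discrete case, again assuming toy sizes (`num_vector_embeds=10`, an 8x8 latent grid). The input is a `LongTensor` of class indices, one per latent pixel:

```py
import torch
from diffusers import Transformer2DModel

# Assumed toy configuration, not taken from a real checkpoint.
model = Transformer2DModel(
    num_attention_heads=8,
    attention_head_dim=32,
    num_vector_embeds=10,  # 9 latent-pixel classes plus 1 masked class
    sample_size=8,         # the latent grid is 8 x 8 = 64 latent pixels
)

# Discrete input: one class index per latent pixel, shape (batch_size, num_latent_pixels).
latent_classes = torch.randint(0, 10, (1, 8 * 8))

# Internally the model embeds the classes, adds positional embeddings, applies the
# Transformer blocks, and predicts per-pixel classes (the masked class is excluded).
output = model(latent_classes).sample
print(output.shape)  # torch.Size([1, 9, 64]): log-probabilities over the 9 unmasked classes
```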

## Transformer2DModel

[[autodoc]] Transformer2DModel

## Transformer2DModelOutput

[[autodoc]] models.modeling_outputs.Transformer2DModelOutput