add group norm type to attention processor cross attention norm
This lets the cross attention norm use both a group norm block and a
layer norm block.
The group norm operates along the channels dimension and requires an input shape of (batch size, channels, *), whereas a layer norm with a single
`normalized_shape` dimension only operates over the least significant
dimension, i.e. (*, channels).
The channels we want to normalize are the hidden dimension of the encoder hidden states.
By convention, the encoder hidden states are always passed as (batch size, sequence
length, hidden states).
This means the layer norm can operate on the tensor without modification, but the group
norm requires flipping the last two dimensions to operate on (batch size, hidden states, sequence length).
All existing attention processors share the same logic, so we can
consolidate it in a helper function `prepare_encoder_hidden_states` (a sketch follows below).
prepare_encoder_hidden_states -> norm_encoder_hidden_states re: @patrickvonplaten
move norm_cross defined check to outside norm_encoder_hidden_states
add missing attn.norm_cross check
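A minimal sketch of that helper, assuming the norm is stored as either an `nn.LayerNorm` or an `nn.GroupNorm` (written as a standalone function for illustration; the real helper lives on the `Attention` class):

```python
import torch
from torch import nn


def norm_encoder_hidden_states(norm_cross: nn.Module, encoder_hidden_states: torch.Tensor) -> torch.Tensor:
    """Apply the cross-attention norm to (batch, seq_len, hidden_dim) encoder hidden states."""
    if isinstance(norm_cross, nn.LayerNorm):
        # LayerNorm normalizes the last dimension, so (batch, seq_len, hidden_dim) works as-is.
        return norm_cross(encoder_hidden_states)
    if isinstance(norm_cross, nn.GroupNorm):
        # GroupNorm expects (batch, channels, *): move the hidden dim into the channel
        # position, normalize, then move it back.
        encoder_hidden_states = encoder_hidden_states.transpose(1, 2)
        encoder_hidden_states = norm_cross(encoder_hidden_states)
        return encoder_hidden_states.transpose(1, 2)
    raise ValueError("norm_cross must be an nn.LayerNorm or nn.GroupNorm")


# Example: both norm types keep the (batch, seq_len, hidden_dim) layout.
hidden = torch.randn(2, 77, 768)
layer_norm = nn.LayerNorm(768)
group_norm = nn.GroupNorm(num_groups=32, num_channels=768)
assert norm_encoder_hidden_states(layer_norm, hidden).shape == hidden.shape
assert norm_encoder_hidden_states(group_norm, hidden).shape == hidden.shape
```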
* ⚙️chore(train_controlnet) fix typo in logger message
* ⚙️chore(models) refactor modules order; make them the same as calling order
When printing the BasicTransformerBlock to stdout, I think it's crucial that the attributes are shown in their calling order. Previously, the "3. Feed Forward" comment was also misplaced: it should have been next to self.ff, but it sat next to self.norm3 instead.
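Purely for illustration (this is a toy block built on `nn.MultiheadAttention`, not the actual diffusers class), the intent is that the declaration order in `__init__` mirrors the calling order in `forward`:

```python
import torch
from torch import nn


class BasicTransformerBlockSketch(nn.Module):
    """Illustrative sketch only: submodules declared in the same order they are called."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        # 1. Self-Attention
        self.norm1 = nn.LayerNorm(dim)
        self.attn1 = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # 2. Cross-Attention
        self.norm2 = nn.LayerNorm(dim)
        self.attn2 = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # 3. Feed Forward
        self.norm3 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, hidden_states, encoder_hidden_states):
        # 1. Self-Attention
        norm_hidden = self.norm1(hidden_states)
        hidden_states = hidden_states + self.attn1(norm_hidden, norm_hidden, norm_hidden)[0]
        # 2. Cross-Attention (encoder hidden states assumed to share the same dim here)
        norm_hidden = self.norm2(hidden_states)
        hidden_states = hidden_states + self.attn2(norm_hidden, encoder_hidden_states, encoder_hidden_states)[0]
        # 3. Feed Forward
        return hidden_states + self.ff(self.norm3(hidden_states))


block = BasicTransformerBlockSketch(dim=64, num_heads=8)
print(block)  # submodules print in calling order: norm1/attn1, norm2/attn2, norm3/ff
out = block(torch.randn(2, 16, 64), torch.randn(2, 77, 64))
```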
* correct many tests
* remove bogus file
* make style
* correct more tests
* finish tests
* fix one more
* make style
* make unclip deterministic
* ⚙️chore(models/attention) reorganize comments in BasicTransformerBlock class
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add only cross attention to simple attention blocks
* add test for only_cross_attention re: @patrickvonplaten
* mid_block_only_cross_attention better default
allow mid_block_only_cross_attention to default to
`only_cross_attention` when `only_cross_attention` is given
as a single boolean
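A sketch of the intended default handling, pulled out into a hypothetical standalone helper (the actual resolution happens inside the UNet constructor):

```python
def resolve_mid_block_only_cross_attention(only_cross_attention, mid_block_only_cross_attention=None):
    # If the mid-block flag is unset and `only_cross_attention` is a single bool,
    # inherit that bool; otherwise an unset mid-block flag defaults to False.
    if mid_block_only_cross_attention is None and isinstance(only_cross_attention, bool):
        return only_cross_attention
    if mid_block_only_cross_attention is None:
        return False
    return mid_block_only_cross_attention


assert resolve_mid_block_only_cross_attention(True) is True
assert resolve_mid_block_only_cross_attention([True, False, False]) is False
assert resolve_mid_block_only_cross_attention(False, mid_block_only_cross_attention=True) is True
```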
* Fix invocation of some slow tests.
We use `__call__` rather than pmapping the generation function ourselves
because the number of static arguments is different now.
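For illustration only, a generic JAX sketch (not the pipeline's actual test code) of why manual pmapping couples the test to the function's static-argument layout, whereas calling a wrapper that pmaps internally hides that detail:

```python
import jax
import jax.numpy as jnp


def generate(params, latents, num_inference_steps):
    # Toy stand-in for a generation function; `num_inference_steps` must be static.
    return latents * params + num_inference_steps


# Manual pmapping: static_broadcasted_argnums must be kept in sync with the signature.
p_generate = jax.pmap(generate, static_broadcasted_argnums=(2,))

devices = jax.local_device_count()
params = jnp.ones((devices, 1))
latents = jnp.ones((devices, 4))
out = p_generate(params, latents, 50)  # breaks if the static argument positions drift
print(out.shape)
```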
* style
* `AttentionProcessor.group_norm` num_channels should be `query_dim`
The group_norm on the attention processor should really normalize the number
of channels in the query, _not_ the inner dim. This wasn't caught before
because the group_norm is only used by the added-KV attention processors,
those processors are only used by the Karlo models, and Karlo is configured
such that the inner dim is the same as the query dim.
* add_{k,v}_proj should be projecting to inner_dim
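A simplified fragment showing the corrected wiring described in the two items above (the dimension values here are made up for the example):

```python
from torch import nn

# GroupNorm is applied to the query's hidden states, so it must use `query_dim`
# channels, while the added key/value projections map `added_kv_proj_dim` to
# `inner_dim` so their outputs line up with the regular key/value projections.
query_dim, inner_dim, added_kv_proj_dim, norm_num_groups = 768, 1024, 512, 32

group_norm = nn.GroupNorm(num_groups=norm_num_groups, num_channels=query_dim, eps=1e-5, affine=True)
add_k_proj = nn.Linear(added_kv_proj_dim, inner_dim)
add_v_proj = nn.Linear(added_kv_proj_dim, inner_dim)
```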
* [Config] Fix config prints and save, load
* Only use potential nn.Modules for dtype and device
* Correct vae image processor
* make sure in_channels is not accessed directly
* make sure in_channels is only accessed via config
* Make sure schedulers only access config attributes
* Make sure to access config in SAG
* Fix vae processor and make style
* add tests
* uP
* make style
* Fix more naming issues
* Final fix with vae config
* change more
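As an illustration of the convention these changes enforce (the checkpoint name is just an example): architecture hyperparameters are read through `.config`, while only genuine `nn.Module` state such as dtype and device is read off the module itself.

```python
from diffusers import UNet2DConditionModel

# Example checkpoint; any UNet2DConditionModel behaves the same way.
unet = UNet2DConditionModel.from_pretrained("runwayml/stable-diffusion-v1-5", subfolder="unet")

# Read architecture hyperparameters via the frozen config, not as direct attributes.
num_channels_latents = unet.config.in_channels
sample_size = unet.config.sample_size

# dtype and device are real nn.Module state, so they stay on the module.
dtype, device = unet.dtype, unet.device
```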
* add TextToVideoZeroPipeline and CrossFrameAttnProcessor
* add docs for text-to-video zero
* add teaser image for text-to-video zero docs
* Address review comments. Add documentation. Add test
* clean up the code in pipeline_text_to_video.py. Add descriptive comments and docstrings
* make style && make quality
* make fix-copies
* make requested changes to docs. use huggingface server links for resources, delete res folder
* make style && make quality && make fix-copies
* make style && make quality
* Apply suggestions from code review
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
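For reference, a minimal usage sketch of the new pipeline, mirroring the documentation example added here (model id, call signature, and post-processing follow those docs; treat the snippet as illustrative):

```python
import torch
import imageio
from diffusers import TextToVideoZeroPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A panda is playing guitar on times square"
# The pipeline swaps the UNet's self-attention for CrossFrameAttnProcessor so that
# frames attend to the first frame and stay temporally consistent.
result = pipe(prompt=prompt).images
result = [(frame * 255).astype("uint8") for frame in result]
imageio.mimsave("video.mp4", result, fps=4)
```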