
Generating the documentation

To generate the documentation, you first have to build it. Several packages are necessary to build the docs; you can install them with the following command, run at the root of the code repository:

pip install -e ".[docs]"

Then you need to install our open source documentation builder tool:

pip install git+https://github.com/huggingface/doc-builder
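
Once both are installed, you can build the documentation locally. A minimal sketch, assuming the standard doc-builder build command (the --build_dir path is just an example location):

doc-builder build diffusers docs/source/en --build_dir ~/tmp/test-build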

NOTE

You only need to generate the documentation to inspect it locally (for instance, if you're planning changes and want to check how they look before committing). You don't have to commit the built documentation.


Previewing the documentation

To preview the docs, first install the watchdog module with:

pip install watchdog

Then run the following command:

doc-builder preview {package_name} {path_to_docs}

For example:

doc-builder preview diffusers docs/source/en

The docs will be viewable at http://localhost:3000. You can also preview the docs once you have opened a PR: a bot will add a comment with a link to the documentation built with your changes.


NOTE

The preview command only works with existing doc files. When you add a completely new file, you need to update _toctree.yml and restart the preview command (Ctrl+C to stop it, then run doc-builder preview ... again).


Adding a new element to the navigation bar

Accepted files are Markdown (.md).

Create a file with the .md extension and put it in the source directory. You can then link it in the toc-tree by adding the filename without the extension to the _toctree.yml file.
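
For example, a minimal _toctree.yml entry could look like this sketch (my-new-doc is a hypothetical filename):

    - local: my-new-doc
      title: My New Doc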

Renaming section headers and moving sections

It helps to keep the old links working when renaming a section header and/or moving sections from one document to another. This is because the old links are likely to be used in issues, forums, and social media, and it makes for a much better user experience if users reading those months later can still easily navigate to the originally intended information.

Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.

So if you renamed a section from "Section A" to "Section B", you can add this at the end of the file:

Sections that were moved:

[ <a href="#section-b">Section A</a><a id="section-a"></a> ]

and of course, if you moved it to another file, then:

Sections that were moved:

[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]

Use the relative style to link to the new file so that the versioned docs continue to work.

For an example of a rich moved-sections set, please see the very end of the transformers Trainer doc.

Writing Documentation - Specification

The huggingface/diffusers documentation follows the Google documentation style for docstrings, although we can write them directly in Markdown.

Adding a new tutorial

Adding a new tutorial or section is done in two steps:

  • Add a new Markdown (.md) file under docs/source/<languageCode>.
  • Link that file in docs/source/<languageCode>/_toctree.yml on the correct toc-tree.

Make sure to put your new file under the proper section. It's unlikely to go in the first section (Get Started), so depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or four.

Adding a new pipeline/scheduler

When adding a new pipeline:

  • Create a file xxx.md under docs/source/<languageCode>/api/pipelines (don't hesitate to copy an existing file as a template).
  • Link that file in the (Diffusers Summary) section in docs/source/api/pipelines/overview.md, along with a link to the paper and a Colab notebook (if available).
  • Write a short overview of the diffusion model:
    • Overview with paper & authors
    • Paper abstract
    • Tips and tricks and how to use it best
    • Possibly an end-to-end example of how to use it
  • Add all the pipeline classes that should be linked in the diffusion model. These classes should be added using our Markdown syntax. By default, as follows:
[[autodoc]] XXXPipeline
    - all
    - __call__

This will include every public method of the pipeline that is documented, as well as the __call__ method, which is not documented by default. If you want to add other methods that are not documented by default, you can list them together with all:

[[autodoc]] XXXPipeline
    - all
    - __call__
    - enable_attention_slicing
    - disable_attention_slicing
    - enable_xformers_memory_efficient_attention
    - disable_xformers_memory_efficient_attention

You can follow the same process to create a new scheduler under the docs/source/<languageCode>/api/schedulers folder.
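
For example, a scheduler page would typically contain an autodoc directive like this sketch (using the existing DDIMScheduler class for illustration):

[[autodoc]] DDIMScheduler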

Writing source documentation

Values that should be put in code should be surrounded by backticks: `like so`. Note that argument names and objects like `True`, `None`, or any strings should usually be put in code.

When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool adds a link to its documentation with this syntax: [`XXXClass`] or [`function`]. This requires the class or function to be in the main package.

If you want to create a link to some internal class or function, you need to provide its path. For instance: [`pipelines.ImagePipelineOutput`]. This will be converted into a link with pipelines.ImagePipelineOutput in the description. To get rid of the path and only keep the name of the object you are linking to in the description, add a ~: [`~pipelines.ImagePipelineOutput`] will generate a link with ImagePipelineOutput in the description.

The same works for methods so you can either use [`XXXClass.method`] or [`~XXXClass.method`].
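
Putting this together, a docstring sentence using these link styles might read (a sketch; the wording is illustrative):

    Images are generated by calling the pipeline created with [`DiffusionPipeline.from_pretrained`], and the
    result is returned as an [`~pipelines.ImagePipelineOutput`].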

Defining arguments in a method

Arguments should be defined with the Args: (or Arguments: or Parameters:) prefix, followed by a line return and an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its description:

    Args:
        n_layers (`int`): The number of layers of the model.

If the description is too long to fit in one line, another indentation is necessary before writing the description after the argument.

Here's an example showcasing everything so far:

    Args:
        input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary.

            Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and
            [`~PreTrainedTokenizer.__call__`] for details.

            [What are input IDs?](../glossary#input-ids)

For optional arguments or arguments with defaults, we use the following syntax. Imagine we have a function with this signature:

def my_function(x: str = None, a: float = 3.14):

then its documentation should look like this:

    Args:
        x (`str`, *optional*):
            This argument controls ...
        a (`float`, *optional*, defaults to `3.14`):
            This argument is used to ...

Note that we always omit the "defaults to `None`" when `None` is the default for any argument. Also note that even if the first line describing your argument type and its default gets long, you can't break it into several lines. You can, however, write as many lines as you want in the indented description (see the example above with input_ids).

Writing a multi-line code block

Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:

```
# first line of code
# second line
# etc
```
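
In practice, example blocks in docstrings are usually Python and open the fence with ```python so the example gets syntax highlighting. A minimal sketch (the checkpoint name is only an illustrative placeholder, not a real model id):

```python
from diffusers import DiffusionPipeline

# Load a pipeline from the Hub; swap in a real checkpoint id.
pipe = DiffusionPipeline.from_pretrained("some-org/some-checkpoint")
# Text-to-image pipelines return generated images on the `images` attribute.
image = pipe("An astronaut riding a horse").images[0]
```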

Writing a return block

The return block should be introduced with the Returns: prefix, followed by a line return and an indentation. The first line should be the type of the return, followed by a line return. No need to indent further for the elements building the return.

Here's an example of a single value return:

    Returns:
        `List[int]`: A list of integers in the range [0, 1] -- 1 for a special token, 0 for a sequence token.

Here's an example of a tuple return, comprising several objects:

    Returns:
        `tuple(torch.Tensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
        - **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.Tensor` of shape `(1,)` --
          Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
        - **prediction_scores** (`torch.Tensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
          Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

Adding an image

Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to place these files in a hf.co hosted dataset, like the ones hosted on hf-internal-testing, and reference them by URL. We recommend putting them in the following dataset: huggingface/documentation-images. If you are an external contributor, feel free to add the images to your PR and ask a Hugging Face member to migrate your images to this dataset.
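
For example, once an image has been uploaded to that dataset, it can be referenced by URL like this (a sketch; the filename is hypothetical):

![hypothetical example](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/my-example.png)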

Styling the docstring

We have an automatic script running with the make style command that will make sure that:

  • the docstrings fully take advantage of the line width
  • all code examples are formatted using black, like the code of the Transformers library

This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's recommended to commit your changes before running make style, so you can revert the changes done by that script easily.
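
A minimal sketch of that flow, using standard git commands:

git commit -am "WIP docstring changes"
make style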