Generating the documentation

To generate the documentation, you first have to build it. Several packages are necessary to build the docs; you can install them with the following command, run at the root of the code repository:

```
pip install -e ".[docs]"
```

Then you need to install our open-source documentation builder tool:

```
pip install git+https://github.com/huggingface/doc-builder
```
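Once everything is installed, you can build the docs with doc-builder. As a sketch, a command along the following lines builds the English docs into a local folder (the --build_dir path here is purely illustrative):

```
doc-builder build diffusers docs/source/en --build_dir ~/tmp/test-build
```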

NOTE

You only need to generate the documentation to inspect it locally (for instance, if you're planning changes and want to check how they look before committing). You don't have to commit the built documentation.


Previewing the documentation

To preview the docs, first install the watchdog module with:

```
pip install watchdog
```

Then run the following command:

```
doc-builder preview {package_name} {path_to_docs}
```

For example:

```
doc-builder preview diffusers docs/source/en
```

The docs will be viewable at http://localhost:3000. You can also preview the docs once you have opened a PR: a bot will add a comment with a link to the documentation that includes your changes.


NOTE

The preview command only works with existing doc files. When you add a completely new file, you need to update _toctree.yml and restart the preview command (Ctrl+C to stop it, then call `doc-builder preview ...` again).


Adding a new element to the navigation bar

Accepted files are Markdown (.md or .mdx).

Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting the filename without the extension in the _toctree.yml file, as in the sketch below.
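For instance, for a hypothetical new page saved as docs/source/en/my-new-page.md, the _toctree.yml entry might look like the following sketch (the exact nesting depends on which section you target, and the title is free-form):

```
- sections:
  - local: my-new-page
    title: My New Page
```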

Renaming section headers and moving sections

It helps to keep the old links working when renaming a section header and/or moving sections from one document to another. This is because the old links are likely to be used in issues, forums, and social media, and it makes for a much better user experience if users reading those months later can still easily navigate to the originally intended information.

Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.

So if you renamed a section from "Section A" to "Section B", then you can add at the end of the file:

```
Sections that were moved:

[ <a href="#section-b">Section A</a><a id="section-a"></a> ]
```

and of course, if you moved it to another file, then:

```
Sections that were moved:

[ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
```

Use the relative style to link to the new file so that the versioned docs continue to work.

For an example of a rich moved-sections set, please see the very end of the Transformers Trainer doc.

Writing Documentation - Specification

The huggingface/diffusers documentation follows the Google documentation style for docstrings, although we can write them directly in Markdown.

Adding a new tutorial

Adding a new tutorial or section is done in two steps:

  • Add a new file under docs/source. This file can either be Markdown (.md) or MDX (.mdx).
  • Link that file in docs/source/_toctree.yml on the correct toc-tree.

Make sure to put your new file under the proper section. It's unlikely to go in the first section (Get Started), so depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or four.

Adding a new pipeline/scheduler

When adding a new pipeline:

  • Create a file xxx.mdx under docs/source/api/pipelines (don't hesitate to copy an existing file as a template).
  • Link that file in the Diffusers Summary section of docs/source/api/pipelines/overview.mdx, along with a link to the paper and a Colab notebook (if available).
  • Write a short overview of the diffusion model:
    • Overview with paper & authors
    • Paper abstract
    • Tips and tricks and how to use it best
    • Possibly an end-to-end example of how to use it
  • Add all the pipeline classes that should be linked in the diffusion model's doc. These classes should be added using our Markdown syntax, by default as follows:
```
## XXXPipeline

[[autodoc]] XXXPipeline
    - all
    - __call__
```

This will include every public method of the pipeline that is documented, as well as the `__call__` method, which is not documented by default. If you want to add additional methods that are not documented by default, you can list them after `all`:

```
[[autodoc]] XXXPipeline
    - all
    - __call__
    - enable_attention_slicing
    - disable_attention_slicing
    - enable_xformers_memory_efficient_attention
    - disable_xformers_memory_efficient_attention
```

You can follow the same process to create a new scheduler under the docs/source/api/schedulers folder.

Writing source documentation

Values that should be put in code should be surrounded by backticks: `like so`. Note that argument names and objects like True, None, or any strings should usually be put in code.

When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool adds a link to its documentation: [`XXXClass`] or [`function`]. This requires the class or function to be in the main package.

If you want to create a link to some internal class or function, you need to provide its path. For instance: [`pipelines.ImagePipelineOutput`]. This will be converted into a link with pipelines.ImagePipelineOutput in the description. To get rid of the path and only keep the name of the object you are linking to in the description, add a ~: [`~pipelines.ImagePipelineOutput`] will generate a link with ImagePipelineOutput in the description.

The same works for methods, so you can use either [`XXXClass.method`] or [`~XXXClass.method`].
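Putting this together, a hypothetical docstring sentence using these link forms might read:

```
Returns an [`~pipelines.ImagePipelineOutput`] instead of a plain `tuple` when `return_dict=True`.
```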

Defining arguments in a method

Arguments should be defined with the Args: (or Arguments: or Parameters:) prefix, followed by a line return and an indentation. The argument should be followed by its type (with its shape if it is a tensor), a colon, and its description:

    Args:
        n_layers (`int`): The number of layers of the model.

If the description is too long to fit on one line, add another indentation before writing the description on the line below the argument.

Here's an example showcasing everything so far:

    Args:
        input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
            Indices of input sequence tokens in the vocabulary.

            Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and
            [`~PreTrainedTokenizer.__call__`] for details.

            [What are input IDs?](../glossary#input-ids)

For optional arguments or arguments with defaults, we follow this syntax: imagine we have a function with the following signature:

```
def my_function(x: str = None, a: float = 1):
```

then its documentation should look like this:

    Args:
        x (`str`, *optional*):
            This argument controls ...
        a (`float`, *optional*, defaults to 1):
            This argument is used to ...

Note that we always omit the "defaults to `None`" when None is the default for any argument. Also note that even if the first line describing your argument type and its default gets long, you can't break it into several lines. You can, however, write as many lines as you want in the indented description (see the example above with input_ids).

Writing a multi-line code block

Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:

```
# first line of code
# second line
# etc
```

Writing a return block

The return block should be introduced with the Returns: prefix, followed by a line return and an indentation. The first line should be the type of the return, followed by a line return. No need to indent further for the elements building the return.

Here's an example of a single value return:

    Returns:
        `List[int]`: A list of integers in the range [0, 1] -- 1 for a special token, 0 for a sequence token.

Here's an example of a tuple return, comprising several objects:

    Returns:
        `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
        - **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
          Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
        - **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
          Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).

Adding an image

Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to place these files in a hf.co hosted dataset, like the ones hosted on hf-internal-testing, and reference them by URL. We recommend putting them in the following dataset: huggingface/documentation-images. If you are an external contributor, feel free to add the images to your PR and ask a Hugging Face member to migrate your images to this dataset. An example reference is sketched below.
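Once an image has been uploaded to that dataset, you can reference it by URL in the docs. A minimal sketch (the filename is hypothetical):

```
![Example output](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/example.png)
```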

Styling the docstring

We have an automatic script, run with the `make style` command, that will make sure that:

  • the docstrings fully take advantage of the line width
  • all code examples are formatted using black, like the code of the Transformers library

This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's recommended to commit your changes before running `make style`, so you can easily revert the changes done by that script. A typical workflow is sketched below.
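As a sketch, a safe workflow might look like the following (the file name and commit message are illustrative):

```
git add docs/source/en/my-new-page.md
git commit -m "Draft new doc page"   # checkpoint so the script's changes are easy to revert
make style                           # wraps docstrings and formats code examples with black
git diff                             # review what the script changed
```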