Mirror of https://github.com/huggingface/diffusers.git (synced 2026-01-27 17:22:53 +03:00)
Use HF Papers (#11567)
* Use HF Papers

* Apply style fixes

---------

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
committed by GitHub

parent 799adf4a10
commit c8bb1ff53e
@@ -65,14 +65,14 @@ For convenience, we provide a table to denote which methods are inference-only a
| [Fabric](#fabric) | ✅ | ❌ | |

## InstructPix2Pix

-[Paper](https://arxiv.org/abs/2211.09800)
+[Paper](https://huggingface.co/papers/2211.09800)

[InstructPix2Pix](../api/pipelines/pix2pix) is fine-tuned from Stable Diffusion to support editing input images. It takes as inputs an image and a prompt describing an edit, and it outputs the edited image.
InstructPix2Pix has been explicitly trained to work well with [InstructGPT](https://openai.com/blog/instruction-following/)-like prompts.
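As a quick reference, a minimal sketch of calling this pipeline with Diffusers; the input image path and the generation settings below are illustrative:

```python
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

# Any RGB image can serve as the editing target; the path is a placeholder.
image = load_image("input.png")

# The pipeline takes the image plus a natural-language edit instruction.
edited = pipe(
    "make the sky look like a starry night",
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,
).images[0]
edited.save("edited.png")
```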

## Pix2Pix Zero

-[Paper](https://arxiv.org/abs/2302.03027)
+[Paper](https://huggingface.co/papers/2302.03027)

[Pix2Pix Zero](../api/pipelines/pix2pix_zero) allows modifying an image so that one concept or subject is translated to another one while preserving general image semantics.
@@ -104,7 +104,7 @@ apply Pix2Pix Zero to any of the available Stable Diffusion models.
## Attend and Excite

-[Paper](https://arxiv.org/abs/2301.13826)
+[Paper](https://huggingface.co/papers/2301.13826)

[Attend and Excite](../api/pipelines/attend_and_excite) allows subjects in the prompt to be faithfully represented in the final image.
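A minimal sketch of the pipeline described above; the base checkpoint and the token indices are illustrative, and the tokenization can be inspected to pick the indices for a given prompt:

```python
import torch
from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat and a frog"
print(pipe.get_indices(prompt))  # look up which indices correspond to "cat" and "frog"

# Maximize attention on the "cat" and "frog" tokens so both subjects appear.
image = pipe(
    prompt,
    token_indices=[2, 5],
    guidance_scale=7.5,
    num_inference_steps=50,
).images[0]
```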
@@ -114,7 +114,7 @@ Like Pix2Pix Zero, Attend and Excite also involves a mini optimization loop (lea
## Semantic Guidance (SEGA)

-[Paper](https://arxiv.org/abs/2301.12247)
+[Paper](https://huggingface.co/papers/2301.12247)

[SEGA](../api/pipelines/semantic_stable_diffusion) allows applying or removing one or more concepts from an image. The strength of the concept can also be controlled. For example, the smile concept can be used to incrementally increase or decrease the smile of a portrait.
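A minimal sketch of applying a single concept with semantic guidance; the checkpoint id and the editing parameters are illustrative:

```python
import torch
from diffusers import SemanticStableDiffusionPipeline

pipe = SemanticStableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

out = pipe(
    prompt="a photo of the face of a woman",
    editing_prompt=["smiling, smile"],     # concept to apply
    reverse_editing_direction=[False],     # True would remove the concept instead
    edit_guidance_scale=[5.0],             # how strongly the concept is applied
    edit_warmup_steps=[10],                # steps before the guidance kicks in
)
image = out.images[0]
```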
@@ -124,7 +124,7 @@ Unlike Pix2Pix Zero or Attend and Excite, SEGA directly interacts with the diffu
## Self-attention Guidance (SAG)

-[Paper](https://arxiv.org/abs/2210.00939)
+[Paper](https://huggingface.co/papers/2210.00939)

[Self-attention Guidance](../api/pipelines/self_attention_guidance) improves the general quality of images.
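A minimal sketch; `sag_scale` is the only SAG-specific argument, and the values shown are illustrative:

```python
import torch
from diffusers import StableDiffusionSAGPipeline

pipe = StableDiffusionSAGPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# sag_scale > 0 enables self-attention guidance on top of classifier-free guidance.
image = pipe(
    "a photo of an astronaut riding a horse on mars",
    guidance_scale=7.5,
    sag_scale=0.75,
).images[0]
```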
@@ -140,7 +140,7 @@ It conditions on a monocular depth estimate of the original image.
## MultiDiffusion Panorama

-[Paper](https://arxiv.org/abs/2302.08113)
+[Paper](https://huggingface.co/papers/2302.08113)

[MultiDiffusion Panorama](../api/pipelines/panorama) defines a new generation process over a pre-trained diffusion model. This process binds together multiple diffusion generation methods that can be readily applied to generate high quality and diverse images. Results adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.
MultiDiffusion Panorama makes it possible to generate high-quality images at arbitrary aspect ratios (e.g., panoramas).
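A minimal sketch of generating a panorama-shaped image; the base checkpoint, scheduler choice, and the 512x2048 output size are illustrative:

```python
import torch
from diffusers import DDIMScheduler, StableDiffusionPanoramaPipeline

model_id = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_id, scheduler=scheduler, torch_dtype=torch.float16
).to("cuda")

# The wide aspect ratio is requested simply through height/width.
image = pipe("a photo of the dolomites", height=512, width=2048).images[0]
```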
@@ -157,13 +157,13 @@ In addition to pre-trained models, Diffusers has training scripts for fine-tunin
## Textual Inversion

-[Paper](https://arxiv.org/abs/2208.01618)
+[Paper](https://huggingface.co/papers/2208.01618)

[Textual Inversion](../training/text_inversion) fine-tunes a model to teach it about a new concept. For example, a few pictures of a style of artwork can be used to generate images in that style.
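A minimal sketch of using a learned concept at inference time; the embedding repo is illustrative, and `<cat-toy>` is the placeholder token that concept was trained with:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load a textual inversion embedding that teaches the model a new concept.
pipe.load_textual_inversion("sd-concepts-library/cat-toy")

# Refer to the concept through its placeholder token in the prompt.
image = pipe("a <cat-toy> sitting on a beach").images[0]
```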

## ControlNet

-[Paper](https://arxiv.org/abs/2302.05543)
+[Paper](https://huggingface.co/papers/2302.05543)

[ControlNet](../api/pipelines/controlnet) is an auxiliary network which adds an extra condition.
There are 8 canonical pre-trained ControlNets trained on different conditionings such as edge detection, scribbles,
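A minimal sketch of conditioning generation on a Canny edge map; the ControlNet checkpoint and the conditioning image path are illustrative:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# A ControlNet trained on Canny edges; other conditionings use other checkpoints.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

canny_image = load_image("canny_edges.png")  # placeholder conditioning image
image = pipe("a futuristic city at night", image=canny_image).images[0]
```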
@@ -176,7 +176,7 @@ input.
## Custom Diffusion

-[Paper](https://arxiv.org/abs/2212.04488)
+[Paper](https://huggingface.co/papers/2212.04488)

[Custom Diffusion](../training/custom_diffusion) only fine-tunes the cross-attention maps of a pre-trained
text-to-image diffusion model. It also allows for performing Textual Inversion at the same time. It supports
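As an illustration of how weights produced by the Custom Diffusion training script are typically consumed at inference time; the output directory, weight file names, and the `<new1>` modifier token are placeholders tied to a particular training run:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the fine-tuned cross-attention weights and the learned modifier token
# saved by the training script (paths and file names depend on your run).
pipe.unet.load_attn_procs(
    "path-to-saved-model", weight_name="pytorch_custom_diffusion_weights.bin"
)
pipe.load_textual_inversion("path-to-saved-model", weight_name="<new1>.bin")

image = pipe("<new1> cat swimming in a pool").images[0]
```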
@@ -186,7 +186,7 @@ concept(s) of interest.
## Model Editing

-[Paper](https://arxiv.org/abs/2303.08084)
+[Paper](https://huggingface.co/papers/2303.08084)

The [text-to-image model editing pipeline](../api/pipelines/model_editing) helps you mitigate some of the incorrect implicit assumptions a pre-trained text-to-image
diffusion model might make about the subjects present in the input prompt. For example, if you prompt Stable Diffusion to generate images for "A pack of roses", the roses in the generated images
@@ -194,14 +194,14 @@ are more likely to be red. This pipeline helps you change that assumption.
## DiffEdit

-[Paper](https://arxiv.org/abs/2210.11427)
+[Paper](https://huggingface.co/papers/2210.11427)

[DiffEdit](../api/pipelines/diffedit) allows for semantic editing of input images along with
input prompts while preserving the original input images as much as possible.
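A minimal sketch of the three-step DiffEdit flow (mask generation, inversion, masked denoising); the checkpoint, prompts, and input image are illustrative:

```python
import torch
from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline
from diffusers.utils import load_image

pipe = StableDiffusionDiffEditPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)

image = load_image("fruit_bowl.png")  # placeholder input image
source_prompt = "a bowl of fruits"
target_prompt = "a bowl of pears"

# 1) estimate the region to change, 2) invert the image, 3) denoise with the mask.
mask = pipe.generate_mask(image=image, source_prompt=source_prompt, target_prompt=target_prompt)
inv_latents = pipe.invert(prompt=source_prompt, image=image).latents
edited = pipe(prompt=target_prompt, mask_image=mask, image_latents=inv_latents).images[0]
```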

## T2I-Adapter

-[Paper](https://arxiv.org/abs/2302.08453)
+[Paper](https://huggingface.co/papers/2302.08453)

[T2I-Adapter](../api/pipelines/stable_diffusion/adapter) is an auxiliary network which adds an extra condition.
There are 8 canonical pre-trained adapters trained on different conditionings such as edge detection, sketch,
@@ -209,7 +209,7 @@ depth maps, and semantic segmentations.
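For the T2I-Adapter section above, a minimal sketch of plugging an adapter into a Stable Diffusion pipeline; the adapter checkpoint and conditioning image path are illustrative:

```python
import torch
from diffusers import StableDiffusionAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# An adapter trained on Canny edges; other conditionings use other checkpoints.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2iadapter_canny_sd15v2", torch_dtype=torch.float16
)
pipe = StableDiffusionAdapterPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", adapter=adapter, torch_dtype=torch.float16
).to("cuda")

edge_map = load_image("canny_edges.png")  # placeholder conditioning image
image = pipe("a modern house in the forest", image=edge_map).images[0]
```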

## Fabric

-[Paper](https://arxiv.org/abs/2307.10159)
+[Paper](https://huggingface.co/papers/2307.10159)

[Fabric](https://github.com/huggingface/diffusers/tree/442017ccc877279bcf24fbe92f92d3d0def191b6/examples/community#stable-diffusion-fabric-pipeline) is a training-free
approach applicable to a wide range of popular diffusion models, which exploits
@@ -18,7 +18,7 @@ specific language governing permissions and limitations under the License.
> [!TIP]
> Take a look at GitHub Issue [#841](https://github.com/huggingface/diffusers/issues/841) for more context about why we're adding community pipelines to help everyone easily share their work without being slowed down.

-Community pipelines are any [`DiffusionPipeline`] class that is different from the original paper implementation (for example, the [`StableDiffusionControlNetPipeline`] corresponds to the [Text-to-Image Generation with ControlNet Conditioning](https://arxiv.org/abs/2302.05543) paper). They provide additional functionality or extend the original implementation of a pipeline.
+Community pipelines are any [`DiffusionPipeline`] class that is different from the original paper implementation (for example, the [`StableDiffusionControlNetPipeline`] corresponds to the [Text-to-Image Generation with ControlNet Conditioning](https://huggingface.co/papers/2302.05543) paper). They provide additional functionality or extend the original implementation of a pipeline.

There are many cool community pipelines like [Marigold Depth Estimation](https://github.com/huggingface/diffusers/tree/main/examples/community#marigold-depth-estimation) or [InstantID](https://github.com/huggingface/diffusers/tree/main/examples/community#instantid-pipeline), and you can find all the official community pipelines [here](https://github.com/huggingface/diffusers/tree/main/examples/community).
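Loading a community pipeline only requires passing its name through `custom_pipeline`; the pipeline name and base checkpoint below are illustrative:

```python
import torch
from diffusers import DiffusionPipeline

# custom_pipeline refers to a file under examples/community in the diffusers repo.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="lpw_stable_diffusion",
    torch_dtype=torch.float16,
).to("cuda")
```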
@@ -25,7 +25,7 @@ The major advantages of TCD are:
- Freely change detail level: During inference, the level of detail in the image can be adjusted with a single hyperparameter, *gamma*.

> [!TIP]
-> For more technical details of TCD, please refer to the [paper](https://arxiv.org/abs/2402.19159) or official [project page](https://mhh0318.github.io/tcd/)).
+> For more technical details of TCD, please refer to the [paper](https://huggingface.co/papers/2402.19159) or official [project page](https://mhh0318.github.io/tcd/).

For large models like SDXL, TCD is trained with [LoRA](https://huggingface.co/docs/peft/conceptual_guides/adapter#low-rank-adaptation-lora) to reduce memory usage. This is also useful because you can reuse LoRAs between different finetuned models, as long as they share the same base model, without further training.
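A minimal sketch of pairing the TCD LoRA with SDXL; the LoRA repo id and sampling settings are illustrative, and `eta` here plays the role of the *gamma* detail parameter mentioned above:

```python
import torch
from diffusers import StableDiffusionXLPipeline, TCDScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

# Swap in the TCD scheduler and load the TCD LoRA trained for SDXL.
pipe.scheduler = TCDScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("h1t/TCD-SDXL-LoRA")
pipe.fuse_lora()

image = pipe(
    "a photo of a corgi in a field",
    num_inference_steps=4,
    guidance_scale=0,
    eta=0.3,  # controls the level of detail
).images[0]
```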
@@ -15,7 +15,7 @@ OmniGen is an image generation model. Unlike existing text-to-image models, Omni
- Minimalist model architecture, consisting of only a VAE and a transformer module, for joint modeling of text and images.
- Support for multimodal inputs. It can process any text-image mixed data as instructions for image generation, rather than relying solely on text.

-For more information, please refer to the [paper](https://arxiv.org/pdf/2409.11340).
+For more information, please refer to the [paper](https://huggingface.co/papers/2409.11340).
This guide will walk you through using OmniGen for various tasks and use cases.

## Load model checkpoints
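A minimal text-to-image sketch for the section above; the checkpoint id and generation settings are illustrative:

```python
import torch
from diffusers import OmniGenPipeline

pipe = OmniGenPipeline.from_pretrained(
    "Shitao/OmniGen-v1-diffusers", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="A realistic photo of a young woman sitting on a sofa, holding a book",
    height=1024,
    width=1024,
    guidance_scale=3,
).images[0]
```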