- sections:
  - local: index
    title: 🧨 Diffusers
  - local: quicktour
    title: Quicktour
  - local: stable_diffusion
    title: Effective and efficient diffusion
  - local: installation
    title: Installation
  title: Get started
- sections:
  - local: tutorials/tutorial_overview
    title: Overview
  - local: using-diffusers/write_own_pipeline
    title: Understanding models and schedulers
  - local: tutorials/autopipeline
    title: AutoPipeline
  - local: tutorials/basic_training
    title: Train a diffusion model
  title: Tutorials
- sections:
  - sections:
    - local: using-diffusers/loading_overview
      title: Overview
    - local: using-diffusers/loading
      title: Load pipelines, models, and schedulers
    - local: using-diffusers/schedulers
      title: Load and compare different schedulers
    - local: using-diffusers/custom_pipeline_overview
      title: Load community pipelines
    - local: using-diffusers/using_safetensors
      title: Load safetensors
    - local: using-diffusers/other-formats
      title: Load different Stable Diffusion formats
    - local: using-diffusers/push_to_hub
      title: Push files to the Hub
    title: Loading & Hub
  - sections:
    - local: using-diffusers/unconditional_image_generation
      title: Unconditional image generation
    - local: using-diffusers/conditional_image_generation
      title: Text-to-image
    - local: using-diffusers/img2img
      title: Image-to-image
    - local: using-diffusers/inpaint
      title: Inpainting
    - local: using-diffusers/depth2img
      title: Depth-to-image
    title: Tasks
  - sections:
    - local: using-diffusers/textual_inversion_inference
      title: Textual inversion
    - local: training/distributed_inference
      title: Distributed inference with multiple GPUs
    - local: using-diffusers/reusing_seeds
      title: Improve image quality with deterministic generation
    - local: using-diffusers/control_brightness
      title: Control image brightness
    - local: using-diffusers/weighted_prompts
      title: Prompt weighting
    title: Techniques
  - sections:
    - local: using-diffusers/pipeline_overview
      title: Overview
    - local: using-diffusers/sdxl
      title: Stable Diffusion XL
    - local: using-diffusers/controlnet
      title: ControlNet
    - local: using-diffusers/shap-e
      title: Shap-E
    - local: using-diffusers/diffedit
      title: DiffEdit
    - local: using-diffusers/distilled_sd
      title: Distilled Stable Diffusion inference
    - local: using-diffusers/reproducibility
      title: Create reproducible pipelines
    - local: using-diffusers/custom_pipeline_examples
      title: Community pipelines
    - local: using-diffusers/contribute_pipeline
      title: How to contribute a community pipeline
    title: Pipelines for Inference
  - sections:
    - local: training/overview
      title: Overview
    - local: training/create_dataset
      title: Create a dataset for training
    - local: training/adapt_a_model
      title: Adapt a model to a new task
    - local: training/unconditional_training
      title: Unconditional image generation
    - local: training/text_inversion
      title: Textual Inversion
    - local: training/dreambooth
      title: DreamBooth
    - local: training/text2image
      title: Text-to-image
    - local: training/lora
      title: Low-Rank Adaptation of Large Language Models (LoRA)
    - local: training/controlnet
      title: ControlNet
    - local: training/instructpix2pix
      title: InstructPix2Pix Training
    - local: training/custom_diffusion
      title: Custom Diffusion
    - local: training/t2i_adapters
      title: T2I-Adapters
    title: Training
  - sections:
    - local: using-diffusers/other-modalities
      title: Other Modalities
    title: Taking Diffusers Beyond Images
  title: Using Diffusers
- sections:
  - local: optimization/opt_overview
    title: Overview
  - local: optimization/fp16
    title: Memory and Speed
  - local: optimization/torch2.0
    title: Torch2.0 support
  - local: using-diffusers/stable_diffusion_jax_how_to
    title: Stable Diffusion in JAX/Flax
  - local: optimization/xformers
    title: xFormers
  - local: optimization/onnx
    title: ONNX
  - local: optimization/open_vino
    title: OpenVINO
  - local: optimization/coreml
    title: Core ML
  - local: optimization/mps
    title: MPS
  - local: optimization/habana
    title: Habana Gaudi
  - local: optimization/tome
    title: Token Merging
  title: Optimization/Special Hardware
- sections:
  - local: conceptual/philosophy
    title: Philosophy
  - local: using-diffusers/controlling_generation
    title: Controlled generation
  - local: conceptual/contribution
    title: How to contribute?
  - local: conceptual/ethical_guidelines
    title: Diffusers' Ethical Guidelines
  - local: conceptual/evaluation
    title: Evaluating Diffusion Models
  title: Conceptual Guides
- sections:
  - sections:
    - local: api/attnprocessor
      title: Attention Processor
    - local: api/diffusion_pipeline
      title: Diffusion Pipeline
    - local: api/logging
      title: Logging
    - local: api/configuration
      title: Configuration
    - local: api/outputs
      title: Outputs
    - local: api/loaders
      title: Loaders
    - local: api/utilities
      title: Utilities
    - local: api/image_processor
      title: VAE Image Processor
    title: Main Classes
  - sections:
    - local: api/models/overview
      title: Overview
    - local: api/models/unet
      title: UNet1DModel
    - local: api/models/unet2d
      title: UNet2DModel
    - local: api/models/unet2d-cond
      title: UNet2DConditionModel
    - local: api/models/unet3d-cond
      title: UNet3DConditionModel
    - local: api/models/vq
      title: VQModel
    - local: api/models/autoencoderkl
      title: AutoencoderKL
    - local: api/models/asymmetricautoencoderkl
      title: AsymmetricAutoencoderKL
    - local: api/models/autoencoder_tiny
      title: Tiny AutoEncoder
    - local: api/models/transformer2d
      title: Transformer2D
    - local: api/models/transformer_temporal
      title: Transformer Temporal
    - local: api/models/prior_transformer
      title: Prior Transformer
    - local: api/models/controlnet
      title: ControlNet
    title: Models
  - sections:
    - local: api/pipelines/overview
      title: Overview
    - local: api/pipelines/alt_diffusion
      title: AltDiffusion
    - local: api/pipelines/attend_and_excite
      title: Attend-and-Excite
    - local: api/pipelines/audio_diffusion
      title: Audio Diffusion
    - local: api/pipelines/audioldm
      title: AudioLDM
    - local: api/pipelines/audioldm2
      title: AudioLDM 2
    - local: api/pipelines/auto_pipeline
      title: AutoPipeline
    - local: api/pipelines/consistency_models
      title: Consistency Models
    - local: api/pipelines/controlnet
      title: ControlNet
    - local: api/pipelines/controlnet_sdxl
      title: ControlNet with Stable Diffusion XL
    - local: api/pipelines/cycle_diffusion
      title: Cycle Diffusion
    - local: api/pipelines/dance_diffusion
      title: Dance Diffusion
    - local: api/pipelines/ddim
      title: DDIM
    - local: api/pipelines/ddpm
      title: DDPM
    - local: api/pipelines/deepfloyd_if
      title: DeepFloyd IF
    - local: api/pipelines/diffedit
      title: DiffEdit
    - local: api/pipelines/dit
      title: DiT
    - local: api/pipelines/pix2pix
      title: InstructPix2Pix
    - local: api/pipelines/kandinsky
      title: Kandinsky
    - local: api/pipelines/kandinsky_v22
      title: Kandinsky 2.2
    - local: api/pipelines/latent_diffusion
      title: Latent Diffusion
    - local: api/pipelines/panorama
      title: MultiDiffusion
    - local: api/pipelines/musicldm
      title: MusicLDM
    - local: api/pipelines/paint_by_example
      title: PaintByExample
    - local: api/pipelines/paradigms
      title: Parallel Sampling of Diffusion Models
    - local: api/pipelines/pix2pix_zero
      title: Pix2Pix Zero
    - local: api/pipelines/pndm
      title: PNDM
    - local: api/pipelines/repaint
      title: RePaint
    - local: api/pipelines/score_sde_ve
      title: Score SDE VE
    - local: api/pipelines/self_attention_guidance
      title: Self-Attention Guidance
    - local: api/pipelines/semantic_stable_diffusion
      title: Semantic Guidance
    - local: api/pipelines/shap_e
      title: Shap-E
    - local: api/pipelines/spectrogram_diffusion
      title: Spectrogram Diffusion
    - sections:
      - local: api/pipelines/stable_diffusion/overview
        title: Overview
      - local: api/pipelines/stable_diffusion/text2img
        title: Text-to-image
      - local: api/pipelines/stable_diffusion/img2img
        title: Image-to-image
      - local: api/pipelines/stable_diffusion/inpaint
        title: Inpainting
      - local: api/pipelines/stable_diffusion/depth2img
        title: Depth-to-image
      - local: api/pipelines/stable_diffusion/image_variation
        title: Image variation
      - local: api/pipelines/stable_diffusion/stable_diffusion_safe
        title: Safe Stable Diffusion
      - local: api/pipelines/stable_diffusion/stable_diffusion_2
        title: Stable Diffusion 2
      - local: api/pipelines/stable_diffusion/stable_diffusion_xl
        title: Stable Diffusion XL
      - local: api/pipelines/stable_diffusion/latent_upscale
        title: Latent upscaler
      - local: api/pipelines/stable_diffusion/upscale
        title: Super-resolution
      - local: api/pipelines/stable_diffusion/ldm3d_diffusion
        title: LDM3D Text-to-(RGB, Depth)
      - local: api/pipelines/stable_diffusion/adapter
        title: Stable Diffusion T2I-adapter
      - local: api/pipelines/stable_diffusion/gligen
        title: GLIGEN (Grounded Language-to-Image Generation)
      title: Stable Diffusion
    - local: api/pipelines/stable_unclip
      title: Stable unCLIP
    - local: api/pipelines/stochastic_karras_ve
      title: Stochastic Karras VE
    - local: api/pipelines/model_editing
      title: Text-to-image model editing
    - local: api/pipelines/text_to_video
      title: Text-to-video
    - local: api/pipelines/text_to_video_zero
      title: Text2Video-Zero
    - local: api/pipelines/unclip
      title: UnCLIP
    - local: api/pipelines/latent_diffusion_uncond
      title: Unconditional Latent Diffusion
    - local: api/pipelines/unidiffuser
      title: UniDiffuser
    - local: api/pipelines/value_guided_sampling
      title: Value-guided sampling
    - local: api/pipelines/versatile_diffusion
      title: Versatile Diffusion
    - local: api/pipelines/vq_diffusion
      title: VQ Diffusion
    - local: api/pipelines/wuerstchen
      title: Wuerstchen
    title: Pipelines
  - sections:
    - local: api/schedulers/overview
      title: Overview
    - local: api/schedulers/cm_stochastic_iterative
      title: CMStochasticIterativeScheduler
    - local: api/schedulers/ddim_inverse
      title: DDIMInverseScheduler
    - local: api/schedulers/ddim
      title: DDIMScheduler
    - local: api/schedulers/ddpm
      title: DDPMScheduler
    - local: api/schedulers/deis
      title: DEISMultistepScheduler
    - local: api/schedulers/multistep_dpm_solver_inverse
      title: DPMSolverMultistepInverse
    - local: api/schedulers/multistep_dpm_solver
      title: DPMSolverMultistepScheduler
    - local: api/schedulers/dpm_sde
      title: DPMSolverSDEScheduler
    - local: api/schedulers/singlestep_dpm_solver
      title: DPMSolverSinglestepScheduler
    - local: api/schedulers/euler_ancestral
      title: EulerAncestralDiscreteScheduler
    - local: api/schedulers/euler
      title: EulerDiscreteScheduler
    - local: api/schedulers/heun
      title: HeunDiscreteScheduler
    - local: api/schedulers/ipndm
      title: IPNDMScheduler
    - local: api/schedulers/stochastic_karras_ve
      title: KarrasVeScheduler
    - local: api/schedulers/dpm_discrete_ancestral
      title: KDPM2AncestralDiscreteScheduler
    - local: api/schedulers/dpm_discrete
      title: KDPM2DiscreteScheduler
    - local: api/schedulers/lms_discrete
      title: LMSDiscreteScheduler
    - local: api/schedulers/pndm
      title: PNDMScheduler
    - local: api/schedulers/repaint
      title: RePaintScheduler
    - local: api/schedulers/score_sde_ve
      title: ScoreSdeVeScheduler
    - local: api/schedulers/score_sde_vp
      title: ScoreSdeVpScheduler
    - local: api/schedulers/unipc
      title: UniPCMultistepScheduler
    - local: api/schedulers/vq_diffusion
      title: VQDiffusionScheduler
    title: Schedulers
  title: API