yiyixuxu
db4b54cfab
finish the autopipelines section!
2025-06-30 21:05:32 +02:00
yiyixuxu
bbd9340781
up
2025-06-30 11:30:06 +02:00
yiyixuxu
363737ec4b
add loop sequential blocks
2025-06-30 11:09:08 +02:00
yiyixuxu
c5849ba9d5
more
2025-06-30 09:46:34 +02:00
yiyixuxu
f09b1ccfae
start the section on sequential pipelines
2025-06-30 07:48:44 +02:00
yiyixuxu
c75b88f86f
up
2025-06-30 03:23:44 +02:00
YiYi Xu
b43e703fae
Update docs/source/en/modular_diffusers/write_own_pipeline_block.md
2025-06-29 14:49:54 -10:00
YiYi Xu
9fae3828a7
Apply suggestions from code review
2025-06-29 14:49:31 -10:00
yiyixuxu
3a3441cb45
start the write your own pipeline block tutorial
2025-06-30 02:47:38 +02:00
yiyixuxu
fdd2bedae9
2024 -> 2025; fix a circular import
2025-06-29 03:00:46 +02:00
YiYi Xu
fedaa00bd5
Merge branch 'main' into modular-diffusers
2025-06-28 14:50:58 -10:00
yiyixuxu
8c680bc0b4
up
2025-06-28 14:11:17 +02:00
yiyixuxu
92b6b43805
add some visuals
2025-06-28 13:39:45 +02:00
yiyixuxu
58dbe0c29e
finish the quickstart!
2025-06-28 12:46:21 +02:00
Aryan
d7dd924ece
Kontext fixes (#11815)
...
fix
2025-06-26 13:03:44 -10:00
Sayak Paul
00f95b9755
Kontext training (#11813)
...
* support flux kontext
* make fix-copies
* add example
* add tests
* update docs
* update
* add note on integrity checker
* initial commit
* initial commit
* add readme section and fixes in the training script.
* add test
* rectify ckpt_id
* fix ckpt
* fixes
* change id
* update
* Update examples/dreambooth/train_dreambooth_lora_flux_kontext.py
Co-authored-by: Aryan <aryan@huggingface.co>
* Update examples/dreambooth/README_flux.md
---------
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: linoytsaban <linoy@huggingface.co>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-06-26 19:31:42 +03:00
Aryan
eea76892e8
Flux Kontext (#11812)
...
* support flux kontext
* make fix-copies
* add example
* add tests
* update docs
* update
* add note on integrity checker
* make fix-copies issue
* add copied froms
* make style
* update repository ids
* more copied froms
2025-06-26 21:29:59 +05:30
yiyixuxu
b92cda25e2
move quicktour to first page
2025-06-26 12:39:13 +02:00
Sayak Paul
a185e1ab91
[tests] add a test on torch compile for varied resolutions (#11776)
...
* add test for checking compile on different shapes.
* update
* update
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-26 10:07:03 +05:30
Animesh Jain
d93381cd41
[rfc][compile] compile method for DiffusionPipeline (#11705)
...
* [rfc][compile] compile method for DiffusionPipeline
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Apply style fixes
* Update docs/source/en/optimization/fp16.md
* check
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-26 08:41:38 +05:30
yiyixuxu
9530245e17
correct code format
2025-06-25 12:10:35 +02:00
YiYi Xu
174628edf4
Merge branch 'main' into modular-diffusers
2025-06-24 22:01:03 -10:00
yiyixuxu
1c9f0a83c9
update toctree
2025-06-25 09:14:19 +02:00
yiyixuxu
e49413d87d
update doc
2025-06-25 08:52:15 +02:00
yiyixuxu
48e4ff5c05
update overview
2025-06-24 10:17:35 +02:00
yiyixuxu
7c78fb1aad
add an overview doc page
2025-06-24 08:16:34 +02:00
Sayak Paul
92542719ed
[docs] minor cleanups in the lora docs. (#11770)
...
* minor cleanups in the lora docs.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* format docs
* fix copies
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-24 08:10:07 +05:30
yiyixuxu
bb4044362e
up
2025-06-23 18:37:28 +02:00
yiyixuxu
1ae591e817
update code format
2025-06-23 18:08:55 +02:00
yiyixuxu
42c06e90f4
update doc
2025-06-23 17:55:32 +02:00
yiyixuxu
085ade03be
add doc (developer guide)
2025-06-23 16:12:31 +02:00
Steven Liu
0874dd04dc
[docs] LoRA scale scheduling (#11727)
...
draft
2025-06-20 10:15:29 -07:00
Steven Liu
6184d8a433
[docs] device_map (#11711)
...
draft
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-06-20 10:14:48 -07:00
Steven Liu
5a6e386464
[docs] Quantization + torch.compile + offloading (#11703)
...
* draft
* feedback
* update
* feedback
* fix
* feedback
* feedback
* fix
* feedback
2025-06-20 10:11:39 -07:00
Dhruv Nair
195926bbdc
Update Chroma Docs (#11753)
...
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-06-19 19:33:19 +02:00
Sayak Paul
85a916bb8b
make group offloading work with disk/nvme transfers (#11682)
...
* start implementing disk offloading in group.
* delete diff file.
* updates.patch
* offload_to_disk_path
* check if safetensors already exist.
* add test and clarify.
* updates
* update todos.
* update more docs.
* update docs
2025-06-19 18:09:30 +05:30
Aryan
a4df8dbc40
Update more licenses to 2025 (#11746)
...
update
2025-06-19 07:46:01 +05:30
David Berenstein
9b834f8710
Add Pruna optimization framework documentation (#11688)
...
* Add Pruna optimization framework documentation
- Introduced a new section for Pruna in the table of contents.
- Added comprehensive documentation for Pruna, detailing its optimization techniques, installation instructions, and examples for optimizing and evaluating models
* Enhance Pruna documentation with image alt text and code block formatting
- Added alt text to images for better accessibility and context.
- Changed code block syntax from diff to python for improved clarity.
* Add installation section to Pruna documentation
- Introduced a new installation section in the Pruna documentation to guide users on how to install the framework.
- Enhanced the overall clarity and usability of the documentation for new users.
* Update pruna.md
* Update pruna.md
* Update Pruna documentation for model optimization and evaluation
- Changed section titles for consistency and clarity, from "Optimizing models" to "Optimize models" and "Evaluating and benchmarking optimized models" to "Evaluate and benchmark models".
- Enhanced descriptions to clarify the use of `diffusers` models and the evaluation process.
- Added a new example for evaluating standalone `diffusers` models.
- Updated references and links for better navigation within the documentation.
* Refactor Pruna documentation for clarity and consistency
- Removed outdated references to FLUX-juiced and streamlined the explanation of benchmarking.
- Enhanced the description of evaluating standalone `diffusers` models.
- Cleaned up code examples by removing unnecessary imports and comments for better readability.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Enhance Pruna documentation with new examples and clarifications
- Added an image to illustrate the optimization process.
- Updated the explanation for sharing and loading optimized models on the Hugging Face Hub.
- Clarified the evaluation process for optimized models using the EvaluationAgent.
- Improved descriptions for defining metrics and evaluating standalone diffusers models.
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-16 12:25:05 -07:00
Edna
8adc6003ba
Chroma Pipeline (#11698)
...
* working state from hameerabbasi and iddl
* working state from hameerabbasi and iddl (transformer)
* working state (normalization)
* working state (embeddings)
* add chroma loader
* add chroma to mappings
* add chroma to transformer init
* take out variant stuff
* get decently far in changing variant stuff
* add chroma init
* make chroma output class
* add chroma transformer to dummy tp
* add chroma to init
* add chroma to init
* fix single file
* update
* update
* add chroma to auto pipeline
* add chroma to pipeline init
* change to chroma transformer
* take out variant from blocks
* swap embedder location
* remove prompt_2
* work on swapping text encoders
* remove mask function
* dont modify mask (for now)
* wrap attn mask
* no attn mask (can't get it to work)
* remove pooled prompt embeds
* change to my own unpooled embeddeer
* fix load
* take pooled projections out of transformer
* ensure correct dtype for chroma embeddings
* update
* use dn6 attn mask + fix true_cfg_scale
* use chroma pipeline output
* use DN6 embeddings
* remove guidance
* remove guidance embed (pipeline)
* remove guidance from embeddings
* don't return length
* dont change dtype
* remove unused stuff, fix up docs
* add chroma autodoc
* add .md (oops)
* initial chroma docs
* undo don't change dtype
* undo arxiv change
unsure why that happened
* fix hf papers regression in more places
* Update docs/source/en/api/pipelines/chroma.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* do_cfg -> self.do_classifier_free_guidance
* Update docs/source/en/api/models/chroma_transformer.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Update chroma.md
* Move chroma layers into transformer
* Remove pruned AdaLayerNorms
* Add chroma fast tests
* (untested) batch cond and uncond
* Add # Copied from for shift
* Update # Copied from statements
* update norm imports
* Revert cond + uncond batching
* Add transformer tests
* move chroma test (oops)
* chroma init
* fix chroma pipeline fast tests
* Update src/diffusers/models/transformers/transformer_chroma.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Move Approximator and Embeddings
* Fix auto pipeline + make style, quality
* make style
* Apply style fixes
* switch to new input ids
* fix # Copied from error
* remove # Copied from on protected members
* try to fix import
* fix import
* make fix-copies
* revert style fix
* update chroma transformer params
* update chroma transformer approximator init params
* update to pad tokens
* fix batch inference
* Make more pipeline tests work
* Make most transformer tests work
* fix docs
* make style, make quality
* skip batch tests
* fix test skipping
* fix test skipping again
* fix for tests
* Fix all pipeline test
* update
* push local changes, fix docs
* add encoder test, remove pooled dim
* default proj dim
* fix tests
* fix equal size list input
* update
* push local changes, fix docs
* add encoder test, remove pooled dim
* default proj dim
* fix tests
* fix equal size list input
* Revert "fix equal size list input"
This reverts commit 3fe4ad67d5.
* update
* update
* update
* update
* update
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-14 06:52:56 +05:30
Aryan
9f91305f85
Cosmos Predict2 (#11695)
...
* support text-to-image
* update example
* make fix-copies
* support use_flow_sigmas in EDM scheduler instead of maintain cosmos-specific scheduler
* support video-to-world
* update
* rename text2image pipeline
* make fix-copies
* add t2i test
* add test for v2w pipeline
* support edm dpmsolver multistep
* update
* update
* update
* update tests
* fix tests
* safety checker
* make conversion script work without guardrail
2025-06-14 01:51:29 +05:30
Sayak Paul
62cbde8d41
[docs] mention fp8 benefits on supported hardware. (#11699)
...
* mention fp8 benefits on supported hardware.
* Update docs/source/en/quantization/torchao.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-13 07:17:03 +05:30
Sayak Paul
00b179fb1a
[docs] add compilation bits to the bitsandbytes docs. (#11693)
...
* add compilation bits to the bitsandbytes docs.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* finish
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-12 08:49:24 +05:30
Aryan
73a9d5856f
Wan VACE (#11582)
...
* initial support
* make fix-copies
* fix no split modules
* add conversion script
* refactor
* add pipeline test
* refactor
* fix bug with mask
* fix for reference images
* remove print
* update docs
* update slices
* update
* update
* update example
2025-06-06 17:53:10 +05:30
Steven Liu
c934720629
[docs] Model cards (#11112)
...
* initial
* update
* hunyuanvideo
* ltx
* fix
* wan
* gen guide
* feedback
* feedback
* pipeline-level quant config
* feedback
* ltx
2025-06-02 16:55:14 -07:00
Steven Liu
9f48394bf7
[docs] Caching methods (#11625)
...
* cache
* feedback
2025-06-02 10:58:47 -07:00
Sayak Paul
b975bceff3
[docs] update torchao doc link (#11634)
...
update torchao doc link
2025-05-30 08:30:36 -07:00
VLT Media
d0ec6601df
Bug: Fixed Image 2 Image example (#11619)
...
Bug: Fixed Image 2 Image example where a PIL.Image was improperly being asked for an item via index.
2025-05-30 11:30:52 +05:30
Steven Liu
be2fb77dc1
[docs] PyTorch 2.0 (#11618)
...
* combine
* Update docs/source/en/optimization/fp16.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-05-28 09:42:41 -07:00
Linoy Tsaban
28ef0165b9
[Sana Sprint] add image-to-image pipeline (#11602)
...
* sana sprint img2img
* fix import
* fix name
* fix image encoding
* fix image encoding
* fix image encoding
* fix image encoding
* fix image encoding
* fix image encoding
* try w/o strength
* try scaling differently
* try with strength
* revert unnecessary changes to scheduler
* revert unnecessary changes to scheduler
* Apply style fixes
* remove comment
* add copy statements
* add copy statements
* add to doc
* add to doc
* add to doc
* add to doc
* Apply style fixes
* empty commit
* fix copies
* fix copies
* fix copies
* fix copies
* fix copies
* docs
* make fix-copies.
* fix doc building error.
* initial commit - add img2img test
* initial commit - add img2img test
* fix import
* fix imports
* Apply style fixes
* empty commit
* remove
* empty commit
* test vocab size
* fix
* fix prompt missing from last commits
* small changes
* fix image processing when input is tensor
* fix order
* Apply style fixes
* empty commit
* fix shape
* remove comment
* image processing
* remove comment
* skip vae tiling test for now
* Apply style fixes
* empty commit
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
2025-05-27 22:09:51 +03:00
Steven Liu
7ae546f8d1
[docs] Pipeline-level quantization (#11604)
...
refactor
2025-05-26 14:12:57 +05:30