Teriks
b4be42282d
Kolors additional pipelines, community contrib ( #11372 )
...
* Kolors additional pipelines, community contrib
---------
Co-authored-by: Teriks <Teriks@users.noreply.github.com >
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com >
2025-04-23 11:07:27 -10:00
Ishan Modi
a4f9c3cbc3
[Feature] Added Xlab Controlnet support ( #11249 )
...
update
2025-04-23 10:43:50 -10:00
Ishan Dutta
4b60f4b602
[train_dreambooth_flux] Add LANCZOS as the default interpolation mode for image resizing ( #11395 )
2025-04-23 10:47:05 -04:00
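The LANCZOS default in the entry above refers to the Lanczos windowed-sinc resampling filter. As an illustrative sketch only (not code from the PR; libraries such as Pillow implement this internally when you pass `Image.LANCZOS` to `Image.resize`), the kernel with support `a` is:

```python
import math

def lanczos_kernel(x: float, a: int = 3) -> float:
    """Lanczos windowed-sinc kernel: sinc(x) * sinc(x / a) for |x| < a, else 0.

    Illustrative sketch only. Expanding sinc(x) = sin(pi*x) / (pi*x) gives
    L(x) = a * sin(pi*x) * sin(pi*x / a) / (pi*x)^2.
    """
    if x == 0.0:
        return 1.0  # limit of sinc at 0
    if abs(x) >= a:
        return 0.0  # outside the filter's support
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)
```

During resizing, each output pixel is a weighted sum of nearby input pixels with these kernel values as weights, which is why LANCZOS preserves detail better than bilinear interpolation when downscaling training images.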
Aryan
6cef71de3a
Fix group offloading with block_level and use_stream=True ( #11375 )
...
* fix
* add tests
* add message check
2025-04-23 18:17:53 +05:30
Ameer Azam
026507c06c
Update README_hidream.md ( #11386 )
...
Small change: requirements_sana.txt to requirements_hidream.txt
2025-04-22 20:08:26 -04:00
YiYi Xu
448c72a230
[HiDream] move deprecation to 0.35.0 ( #11384 )
...
up
2025-04-22 08:08:08 -10:00
Aryan
f108ad8888
Update modeling imports ( #11129 )
...
update
2025-04-22 06:59:25 -10:00
Linoy Tsaban
e30d3bf544
[LoRA] add LoRA support to HiDream and fine-tuning script ( #11281 )
...
* initial commit
* initial commit
* initial commit
* initial commit
* initial commit
* initial commit
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com >
* move prompt embeds, pooled embeds outside
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: hlky <hlky@hlky.ac >
* fix import
* fix import and tokenizer 4, text encoder 4 loading
* te
* prompt embeds
* fix naming
* shapes
* initial commit to add HiDreamImageLoraLoaderMixin
* fix init
* add tests
* loader
* fix model input
* add code example to readme
* fix default max length of text encoders
* prints
* nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training
* smol fix
* unpatchify
* unpatchify
* fix validation
* flip pred and loss
* fix shift!!!
* revert unpatchify changes (for now)
* smol fix
* Apply style fixes
* workaround moe training
* workaround moe training
* remove prints
* to reduce some memory, keep vae in `weight_dtype` same as we have for flux (as it's the same vae)
bbd0c161b5/examples/dreambooth/train_dreambooth_lora_flux.py (L1207)
* refactor to align with HiDream refactor
* refactor to align with HiDream refactor
* refactor to align with HiDream refactor
* add support for cpu offloading of text encoders
* Apply style fixes
* adjust lr and rank for train example
* fix copies
* Apply style fixes
* update README
* update README
* update README
* fix license
* keep prompt2,3,4 as None in validation
* remove reverse ode comment
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* vae offload change
* fix text encoder offloading
* Apply style fixes
* cleaner to_kwargs
* fix module name in copied from
* add requirements
* fix offloading
* fix offloading
* fix offloading
* update transformers version in reqs
* try AutoTokenizer
* try AutoTokenizer
* Apply style fixes
* empty commit
* Delete tests/lora/test_lora_layers_hidream.py
* change tokenizer_4 to load with AutoTokenizer as well
* make text_encoder_four and tokenizer_four configurable
* save model card
* save model card
* revert T5
* fix test
* remove non diffusers lumina2 conversion
---------
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com >
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-22 11:44:02 +03:00
apolinário
6ab62c7431
Add stochastic sampling to FlowMatchEulerDiscreteScheduler ( #11369 )
...
* Add stochastic sampling to FlowMatchEulerDiscreteScheduler
This PR adds stochastic sampling to FlowMatchEulerDiscreteScheduler based on b1aeddd7cc ltx_video/schedulers/rf.py
* Apply style fixes
* Use config value directly
* Apply style fixes
* Swap order
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-04-21 17:18:30 -10:00
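The stochastic sampling added above follows the LTX-Video-style pattern: instead of only moving along the predicted velocity, the sampler re-noises its x0 prediction with fresh Gaussian noise at the next sigma. The sketch below is schematic (scalar values, illustrative names), not the scheduler's actual implementation, and assumes the convention x_t = (1 - sigma) * x0 + sigma * noise:

```python
import random

def flow_match_step(x, v, sigma, sigma_next, stochastic=False, rng=None):
    """One Euler step of a flow-matching sampler (illustrative sketch).

    Assumed convention: x_t = (1 - sigma) * x0 + sigma * noise, so the
    velocity v points from data to noise and x0 = x - sigma * v.
    Deterministic Euler moves along v; the stochastic variant re-noises
    the x0 prediction with *fresh* Gaussian noise at sigma_next.
    """
    if not stochastic:
        # Plain Euler: follow the velocity for the remaining sigma interval.
        return x + (sigma_next - sigma) * v
    rng = rng or random.Random(0)
    x0_pred = x - sigma * v              # estimate of the clean sample
    fresh_noise = rng.gauss(0.0, 1.0)    # new noise, independent of x
    return (1.0 - sigma_next) * x0_pred + sigma_next * fresh_noise
```

With `stochastic=False` the update reduces to the standard FlowMatchEuler step; with `stochastic=True` and `sigma_next == 0` it returns the x0 prediction exactly, since no noise is mixed back in.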
Ishan Modi
f59df3bb8b
[Refactor] Minor Improvement for import utils ( #11161 )
...
* update
* update
* addressed PR comments
* update
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-04-21 09:56:55 -10:00
josephrocca
a00c73a5e1
Support different-length pos/neg prompts for FLUX.1-schnell variants like Chroma ( #11120 )
...
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-04-21 09:28:19 -10:00
OleehyO
0434db9a99
[cogview4][feat] Support attention mechanism with variable-length support and batch packing ( #11349 )
...
* [cogview4] Enhance attention mechanism with variable-length support and batch packing
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-21 09:27:55 -10:00
Aamir Nazir
aff574fb29
Add Serialized Type Name kwarg in Model Output ( #10502 )
...
* Update outputs.py
2025-04-21 08:45:28 -10:00
Ishan Modi
79ea8eb258
[BUG] fixes in Kandinsky pipeline ( #11080 )
...
* bug fix Kandinsky pipeline
2025-04-21 08:41:09 -10:00
Aryan
e7f3a73786
Fix Wan I2V prepare_latents dtype ( #11371 )
...
update
2025-04-21 08:18:50 -10:00
PromeAI
7a4a126db8
fix issue that training flux controlnet was unstable and validation r… ( #11373 )
...
* fix issue that training flux controlnet was unstable and validation results were unstable
* del unused code pieces, fix grammar
---------
Co-authored-by: Your Name <you@example.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-04-21 08:16:05 -10:00
Kenneth Gerald Hamilton
0dec414d5b
[train_dreambooth_lora_sdxl.py] Fix the LR Schedulers when num_train_epochs is passed in a distributed training env ( #11240 )
...
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com >
2025-04-21 12:51:03 +05:30
Linoy Tsaban
44eeba07b2
[Flux LoRAs] fix lr scheduler bug in distributed scenarios ( #11242 )
...
* add fix
* add fix
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-21 10:08:45 +03:00
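The two scheduler fixes above (#11240, #11242) address the same distributed-training pitfall: when training length is given in epochs, the step count handed to the LR scheduler must account for gradient accumulation and for every process stepping its own scheduler copy, or the schedule finishes too early. A hedged arithmetic sketch (variable names are illustrative, not the training scripts' actual code):

```python
import math

def scheduler_step_budget(steps_per_epoch, grad_accum, epochs, num_processes):
    """Total steps to give the LR scheduler in a distributed run (sketch).

    Assumption: steps_per_epoch is the *per-process* dataloader length, and
    each process keeps its own scheduler stepped once per optimizer update,
    so the schedule length must be scaled by num_processes.
    """
    # Optimizer updates per epoch on one process.
    updates_per_epoch = math.ceil(steps_per_epoch / grad_accum)
    # Total optimizer updates implied by --num_train_epochs.
    max_train_steps = epochs * updates_per_epoch
    # Every process advances the schedule, so scale by the world size.
    return max_train_steps * num_processes
```

Without the final scaling, a 2-GPU run would exhaust its warmup and decay in half the intended number of optimizer updates.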
YiYi Xu
5873377a66
[Wan2.1-FLF2V] update conversion script ( #11365 )
...
update scheduler config in conversion script
2025-04-18 14:08:44 -10:00
YiYi Xu
5a2e0f715c
update output for Hidream transformer ( #11366 )
...
up
2025-04-18 14:07:21 -10:00
Kazuki Yoda
ef47726e2d
Fix: StableDiffusionXLControlNetAdapterInpaintPipeline incorrectly inherited StableDiffusionLoraLoaderMixin ( #11357 )
...
Fix: Inherit `StableDiffusionXLLoraLoaderMixin`.
`StableDiffusionXLControlNetAdapterInpaintPipeline` used to incorrectly inherit `StableDiffusionLoraLoaderMixin` instead of `StableDiffusionXLLoraLoaderMixin`.
2025-04-18 12:46:06 -10:00
YiYi Xu
0021bfa1e1
support Wan-FLF2V ( #11353 )
...
* update transformer
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2025-04-18 10:27:50 -10:00
Marc Sun
bbd0c161b5
[BNB] Fix test_moving_to_cpu_throws_warning ( #11356 )
...
fix
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-04-18 09:44:51 +05:30
Yao Matrix
eef3d65954
enable 2 test cases on XPU ( #11332 )
...
* enable 2 test cases on XPU
Signed-off-by: YAO Matrix <matrix.yao@intel.com >
* Apply style fixes
---------
Signed-off-by: YAO Matrix <matrix.yao@intel.com >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2025-04-17 13:27:41 -10:00
Frank (Haofan) Wang
ee6ad51d96
Update controlnet_flux.py ( #11350 )
2025-04-17 10:05:01 -10:00
Sayak Paul
4397f59a37
[bitsandbytes] improve dtype mismatch handling for bnb + lora. ( #11270 )
...
* improve dtype mismatch handling for bnb + lora.
* add a test
* fix and updates
* update
2025-04-17 19:51:49 +05:30
YiYi Xu
056793295c
[Hi Dream] follow-up ( #11296 )
...
* add
2025-04-17 01:17:44 -10:00
Sayak Paul
29d2afbfe2
[LoRA] Propagate hotswap better ( #11333 )
...
* propagate hotswap to other load_lora_weights() methods.
* simplify documentations.
* updates
* propagate to load_lora_into_text_encoder.
* empty commit
2025-04-17 10:35:38 +05:30
Sayak Paul
b00a564dac
[docs] add note about use_duck_shape in auraflow docs. ( #11348 )
...
add note about use_duck_shape in auraflow docs.
2025-04-17 10:25:39 +05:30
Sayak Paul
efc9d68b15
[chore] fix lora docs utils ( #11338 )
...
fix lora docs utils
2025-04-17 09:25:53 +05:30
nPeppon
3e59d531d1
Fix wrong dtype argument name as torch_dtype ( #11346 )
2025-04-16 16:00:25 -04:00
Ishan Modi
d63e6fccb1
[BUG] fixed _toctree.yml alphabetical ordering ( #11277 )
...
update
2025-04-16 09:04:22 -07:00
Dhruv Nair
59f1b7b1c8
Hunyuan I2V fast tests fix ( #11341 )
...
* update
* update
2025-04-16 18:40:33 +05:30
Sayak Paul
ce1063acfa
[docs] add a snippet for compilation in the auraflow docs. ( #11327 )
...
* add a snippet for compilation in the auraflow docs.
* include speedups.
2025-04-16 11:12:09 +05:30
Sayak Paul
7212f35de2
[single file] enable telemetry for single file loading when using GGUF. ( #11284 )
...
* enable telemetry for single file loading when using GGUF.
* quality
2025-04-16 08:33:52 +05:30
Sayak Paul
3252d7ad11
unpin torch versions for onnx Dockerfile ( #11290 )
...
unpin torch versions for onnx
2025-04-16 08:16:38 +05:30
Dhruv Nair
b316104ddd
Fix Hunyuan I2V for transformers>4.47.1 ( #11293 )
...
* update
* update
2025-04-16 07:53:32 +05:30
Álvaro Somoza
d3b2699a7f
another fix for FlowMatchLCMScheduler forgotten import ( #11330 )
...
fix
2025-04-15 13:53:16 -10:00
Sayak Paul
4b868f14c1
post release 0.33.0 ( #11255 )
...
* post release
* update
* fix deprecations
* remaining
* update
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-04-15 06:50:08 -10:00
AstraliteHeart
b6156aafe9
Rewrite AuraFlowPatchEmbed.pe_selection_index_based_on_dim to be torch.compile compatible ( #11297 )
...
* Update pe_selection_index_based_on_dim
* Make pe_selection_index_based_on_dim work with torch.compile
* Fix AuraFlowTransformer2DModel's docstring default values
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-04-15 19:30:30 +05:30
Sayak Paul
7ecfe29160
[docs] fix hidream docstrings. ( #11325 )
...
* fix hidream docstrings.
* fix
* empty commit
2025-04-15 16:26:21 +05:30
Yao Matrix
7edace9a05
fix CPU offloading related fail cases on XPU ( #11288 )
...
* fix CPU offloading related fail cases on XPU
Signed-off-by: YAO Matrix <matrix.yao@intel.com >
* fix style
Signed-off-by: YAO Matrix <matrix.yao@intel.com >
* Apply style fixes
* trigger tests
* test_pipe_same_device_id_offload
---------
Signed-off-by: YAO Matrix <matrix.yao@intel.com >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac >
2025-04-15 09:06:56 +01:00
hlky
6e80d240d3
Fix vae.Decoder prev_output_channel ( #11280 )
2025-04-15 08:03:50 +01:00
Hameer Abbasi
9352a5ca56
[LoRA] Add LoRA support to AuraFlow ( #10216 )
...
* Add AuraFlowLoraLoaderMixin
* Add comments, remove qkv fusion
* Add Tests
* Add AuraFlowLoraLoaderMixin to documentation
* Add Suggested changes
* Change attention_kwargs->joint_attention_kwargs
* Rebasing derp.
* fix
* fix
* Quality fixes.
* make style
* `make fix-copies`
* `ruff check --fix`
* Attempt 1 to fix tests.
* Attempt 2 to fix tests.
* Attempt 3 to fix tests.
* Address review comments.
* Rebasing derp.
* Get more tests passing by copying from Flux. Address review comments.
* `joint_attention_kwargs`->`attention_kwargs`
* Add `lora_scale` property for te LoRAs.
* Make test better.
* Remove useless property.
* Skip TE-only tests for AuraFlow.
* Support LoRA for non-CLIP TEs.
* Restore LoRA tests.
* Undo adding LoRA support for non-CLIP TEs.
* Undo support for TE in AuraFlow LoRA.
* `make fix-copies`
* Sync with upstream changes.
* Remove unneeded stuff.
* Mirror `Lumina2`.
* Skip for MPS.
* Address review comments.
* Remove duplicated code.
* Remove unnecessary code.
* Remove repeated docs.
* Propagate attention.
* Fix TE target modules.
* MPS fix for LoRA tests.
* Unrelated TE LoRA tests fix.
* Fix AuraFlow LoRA tests by applying to the right denoiser layers.
Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com >
* Apply style fixes
* empty commit
* Fix the repo consistency issues.
* Remove unrelated changes.
* Style.
* Fix `test_lora_fuse_nan`.
* fix quality issues.
* `pytest.xfail` -> `ValueError`.
* Add back `skip_mps`.
* Apply style fixes
* `make fix-copies`
---------
Co-authored-by: Warlord-K <warlordk28@gmail.com >
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: AstraliteHeart <81396681+AstraliteHeart@users.noreply.github.com >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-15 10:41:28 +05:30
Sayak Paul
cefa28f449
[docs] Promote AutoModel usage ( #11300 )
...
* docs: promote the usage of automodel.
* bitsandbytes
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-04-15 09:25:40 +05:30
Beinsezii
8819cda6c0
Add skrample section to community_projects.md ( #11319 )
...
Update community_projects.md
https://github.com/huggingface/diffusers/discussions/11158#discussioncomment-12681691
2025-04-14 12:12:59 -10:00
hlky
dcf836cf47
Use float32 on mps or npu in transformer_hidream_image's rope ( #11316 )
2025-04-14 20:19:21 +01:00
Álvaro Somoza
1cb73cb19f
import for FlowMatchLCMScheduler ( #11318 )
...
* add
* fix-copies
2025-04-14 06:28:57 -10:00
Linoy Tsaban
ba6008abfe
[HiDream] code example ( #11317 )
2025-04-14 16:19:30 +01:00
Sayak Paul
a8f5134c11
[LoRA] support more SDXL loras. ( #11292 )
...
* support more SDXL loras.
* update
---------
Co-authored-by: hlky <hlky@hlky.ac >
2025-04-14 17:09:59 +05:30