Sayak Paul
78bc824729
[Tests] modify the test slices for the failing flax test ( #10630 )
* fixes
* fixes
* fixes
* updates
2025-01-23 12:10:24 +05:30
Aryan
beacaa5528
[core] Layerwise Upcasting ( #10347 )
* update
* update
* make style
* remove dynamo disable
* add coauthor
Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com >
* update
* update
* update
* update mixin
* add some basic tests
* update
* update
* non_blocking
* improvements
* update
* norm.* -> norm
* apply suggestions from review
* add example
* update hook implementation to the latest changes from pyramid attention broadcast
* deinitialize should raise an error
* update doc page
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update docs
* update
* refactor
* fix _always_upcast_modules for asym ae and vq_model
* fix lumina embedding forward to not depend on weight dtype
* refactor tests
* add simple lora inference tests
* _always_upcast_modules -> _precision_sensitive_module_patterns
* remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
* check layer dtypes in lora test
* fix UNet1DModelTests::test_layerwise_upcasting_inference
* _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
* skip test in NCSNppModelTests
* skip tests for AutoencoderTinyTests
* skip tests for AutoencoderOobleckTests
* skip tests for UNet1DModelTests - unsupported pytorch operations
* layerwise_upcasting -> layerwise_casting
* skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
* add layerwise fp8 pipeline test
* use xfail
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
* add note about memory consumption on tesla CI runner for failing test
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-01-22 19:49:37 +05:30
YiYi Xu
a1f9a71238
fix offload gpu tests etc ( #10366 )
* add
* style
2025-01-21 18:52:36 +05:30
Fanli Lin
ec37e20972
[tests] make tests device-agnostic (part 3) ( #10437 )
* initial commit
* fix empty cache
* fix one more
* fix style
* update device functions
* update
* update
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update tests/pipelines/controlnet/test_controlnet.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update tests/pipelines/controlnet/test_controlnet.py
Co-authored-by: hlky <hlky@hlky.ac >
* with gc.collect
* update
* make style
* check_torch_dependencies
* add mps empty cache
* bug fix
* Apply suggestions from code review
---------
Co-authored-by: hlky <hlky@hlky.ac >
2025-01-21 12:15:45 +00:00
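The device-agnostic test work above replaces hard-coded `torch.cuda.*` calls with helpers that dispatch on the active backend. A minimal sketch of the idea, with illustrative function names (not necessarily those in `testing_utils.py`):

```python
import gc
import torch

def backend_empty_cache(device: str):
    # Dispatch cache clearing on the device string so the same test body
    # runs unchanged on CUDA, XPU, MPS, or CPU.
    if device == "cuda":
        torch.cuda.empty_cache()
    elif device == "xpu":
        torch.xpu.empty_cache()
    elif device == "mps":
        torch.mps.empty_cache()
    # cpu: nothing to free

def flush_memory(device: str):
    gc.collect()
    backend_empty_cache(device)

flush_memory("cpu")
```

The "with gc.collect" bullet above reflects the same pattern: collect Python garbage first, then release the backend's cached allocations.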
Shenghai Yuan
23b467c79c
[core] ConsisID ( #10140 )
* Update __init__.py
* add consisid
* update consisid
* update consisid
* make style
* make style
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* add doc
* make style
* Rename consisid .md to consisid.md
* Update geodiff_molecule_conformation.ipynb
* Update geodiff_molecule_conformation.ipynb
* Update geodiff_molecule_conformation.ipynb
* Update demo.ipynb
* Update pipeline_consisid.py
* make fix-copies
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update doc & pipeline code
* fix typo
* make style
* update example
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update example
* update example
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* update
* add test and update
* remove some changes from docs
* refactor
* fix
* undo changes to examples
* remove save/load and fuse methods
* update
* link hf-doc-img & make test extremely small
* update
* add lora
* fix test
* update
* update
* change expected_diff_max to 0.4
* fix typo
* fix link
* fix typo
* update docs
* update
* remove consisid lora tests
---------
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: Aryan <aryan@huggingface.co >
2025-01-19 13:10:08 +05:30
hlky
08e62fe0c2
Scheduling fixes on MPS ( #10549 )
* use np.int32 in scheduling
* test_add_noise_device
* -np.int32, fixes
2025-01-16 07:45:03 -10:00
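The `np.int32` scheduling fix above matters because NumPy defaults to 64-bit values and the MPS backend historically lacks 64-bit support. A sketch of the timestep pattern under that constraint (the linspace bounds and step count are illustrative):

```python
import numpy as np
import torch

# Build a descending timestep schedule; .copy() is needed because
# torch.from_numpy rejects the negative strides produced by [::-1].
timesteps = np.linspace(0, 999, 50)[::-1].copy().astype(np.int32)
timesteps = torch.from_numpy(timesteps)
```

The resulting tensor is `torch.int32`, which converts cleanly on MPS where an int64 schedule can fail.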
hlky
0b065c099a
Move buffers to device ( #10523 )
* Move buffers to device
* add test
* named_buffers
2025-01-16 07:42:56 -10:00
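The "named_buffers" bullet above points at the mechanism: `.to(device)` moves parameters, but registered buffers must be enumerated and moved too. A tiny illustration with a stock module:

```python
import torch.nn as nn

# BatchNorm keeps its running statistics as buffers, not parameters.
model = nn.BatchNorm1d(4)
buffer_names = [name for name, _ in model.named_buffers()]
```

Iterating `named_buffers()` is what lets device-placement logic catch tensors like `running_mean` that would otherwise be left behind.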
Daniel Regado
e8114bd068
IP-Adapter for StableDiffusion3Img2ImgPipeline ( #10589 )
Added support for IP-Adapter
2025-01-16 09:46:22 +00:00
Sayak Paul
bba59fb88b
[Tests] add: test to check 8bit bnb quantized models work with lora loading. ( #10576 )
* add: test to check 8bit bnb quantized models work with lora loading.
* Update tests/quantization/bnb/test_mixed_int8.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2025-01-15 13:05:26 +05:30
Sayak Paul
2432f80ca3
[LoRA] feat: support loading loras into 4bit quantized Flux models. ( #10578 )
* feat: support loading loras into 4bit quantized models.
* updates
* update
* remove weight check.
2025-01-15 12:40:40 +05:30
Aryan
f9e957f011
Fix offload tests for CogVideoX and CogView3 ( #10547 )
* update
* update
2025-01-15 12:24:46 +05:30
Daniel Regado
4dec63c18e
IP-Adapter for StableDiffusion3InpaintPipeline ( #10581 )
* Added support for IP-Adapter
* Added joint_attention_kwargs property
2025-01-15 06:52:23 +00:00
Marc Sun
fbff43acc9
[FEAT] DDUF format ( #10037 )
* load and save dduf archive
* style
* switch to zip uncompressed
* updates
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* first draft
* remove print
* switch to dduf_file for consistency
* switch to huggingface hub api
* fix log
* add a basic test
* Update src/diffusers/configuration_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* fix
* fix variant
* change saving logic
* DDUF - Load transformers components manually (#10171 )
* update hfh version
* Load transformers components manually
* load encoder from_pretrained with state_dict
* working version with transformers and tokenizer !
* add generation_config case
* fix tests
* remove saving for now
* typing
* need next version from transformers
* Update src/diffusers/configuration_utils.py
Co-authored-by: Lucain <lucain@huggingface.co >
* check path correctly
* Apply suggestions from code review
Co-authored-by: Lucain <lucain@huggingface.co >
* update
* typing
* remove check for subfolder
* quality
* revert setup changes
* oups
* more readable condition
* add loading from the hub test
* add basic docs.
* Apply suggestions from code review
Co-authored-by: Lucain <lucain@huggingface.co >
* add example
* add
* make functions private
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* minor.
* fixes
* fix
* change the precedence of parameterized.
* error out when custom pipeline is passed with dduf_file.
* updates
* fix
* updates
* fixes
* updates
* fix xfail condition.
* fix xfail
* fixes
* sharded checkpoint compat
* add test for sharded checkpoint
* add suggestions
* Update src/diffusers/models/model_loading_utils.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* from suggestions
* add class attributes to flag dduf tests
* last one
* fix logic
* remove comment
* revert changes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Lucain <lucain@huggingface.co >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-01-14 13:21:42 +05:30
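The DDUF bullets above ("switch to zip uncompressed") mean a DDUF file is a single uncompressed zip archive bundling the pipeline's files, so it can be inspected with the stdlib. This is an illustrative sketch, not the huggingface_hub DDUF helpers that the real loader uses; the archive contents are made up:

```python
import json
import os
import tempfile
import zipfile

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "pipeline.dduf")
    # ZIP_STORED = no compression, so entries can be memory-mapped/read directly.
    with zipfile.ZipFile(path, "w", compression=zipfile.ZIP_STORED) as zf:
        zf.writestr("model_index.json", json.dumps({"_class_name": "ExamplePipeline"}))
    with zipfile.ZipFile(path) as zf:
        names = zf.namelist()
        stored = zf.getinfo("model_index.json").compress_type == zipfile.ZIP_STORED
```

Keeping entries uncompressed trades disk size for cheap random access to large weight files inside the archive.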
Aryan
aa79d7da46
Test sequential cpu offload for torchao quantization ( #10506 )
test sequential cpu offload
2025-01-14 09:54:06 +05:30
Vinh H. Pham
794f7e49a9
Implement framewise encoding/decoding in LTX Video VAE ( #10488 )
* add framewise decode
* add framewise encode, refactor tiled encode/decode
* add sanity test tiling for ltx
* run make style
* Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com >
---------
Co-authored-by: Pham Hong Vinh <vinhph3@vng.com.vn >
Co-authored-by: Aryan <contact.aryanvs@gmail.com >
2025-01-13 10:58:32 -10:00
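"Framewise" encoding/decoding in the LTX VAE entry above means processing a few frames at a time instead of the whole video, bounding peak memory. A simplified sketch (the real implementation also has to respect the VAE's temporal/causal context between chunks, which is omitted here):

```python
import torch

def framewise_apply(fn, latents, frames_per_chunk=2):
    # latents: (batch, channels, frames, height, width); run fn on a few
    # frames at a time and stitch the results back along the frame axis.
    chunks = latents.split(frames_per_chunk, dim=2)
    return torch.cat([fn(chunk) for chunk in chunks], dim=2)

latents = torch.randn(1, 4, 8, 16, 16)
decoded = framewise_apply(lambda x: x * 2.0, latents)
```

With a stand-in `fn` the chunked result matches the all-at-once result exactly; with a real decoder the chunk boundaries are where the causal-context handling matters.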
Daniel Regado
9fc9c6dd71
Added IP-Adapter for StableDiffusion3ControlNetInpaintingPipeline ( #10561 )
* Added support for IP-Adapter
* Fixed Copied inconsistency
2025-01-13 10:15:36 -10:00
Dhruv Nair
f7cb595428
[Single File] Fix loading Flux Dev finetunes with Comfy Prefix ( #10545 )
* update
* update
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-01-13 21:25:07 +05:30
hlky
c3478a42b9
Fix Nightly AudioLDM2PipelineFastTests ( #10556 )
* Fix Nightly AudioLDM2PipelineFastTests
* add phonemizer to setup extras test
* fix
* make style
2025-01-13 13:54:06 +00:00
hlky
50c81df4e7
Fix StableDiffusionInstructPix2PixPipelineSingleFileSlowTests ( #10557 )
2025-01-13 13:47:10 +00:00
Sayak Paul
edb8c1bce6
[Flux] Improve true cfg condition ( #10539 )
* improve flux true cfg condition
* add test
2025-01-12 18:33:34 +05:30
Sayak Paul
36acdd7517
[Tests] skip tests properly with unittest.skip() ( #10527 )
* skip tests properly.
* more
* more
2025-01-11 08:46:22 +05:30
Junyu Chen
e7db062e10
[DC-AE] support tiling for DC-AE ( #10510 )
* autoencoder_dc tiling
* add tiling and slicing support in SANA pipelines
* create variables for padding length because the line becomes too long
* add tiling and slicing support in pag SANA pipelines
* revert changes to tile size
* make style
* add vae tiling test
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2025-01-11 07:15:26 +05:30
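VAE tiling, as added for DC-AE above, splits the spatial plane into tiles so the encoder/decoder never sees the full-resolution tensor at once. A bare-bones sketch; the real tiling also overlaps tiles and blends their borders to hide seams, which this omits:

```python
import torch

def tiled_apply(fn, x, tile_size=8):
    # Apply fn to each (tile_size x tile_size) spatial tile, then stitch the
    # per-tile outputs back together along width (dim 3) and height (dim 2).
    _, _, height, width = x.shape
    rows = []
    for i in range(0, height, tile_size):
        row = [fn(x[:, :, i:i + tile_size, j:j + tile_size])
               for j in range(0, width, tile_size)]
        rows.append(torch.cat(row, dim=3))
    return torch.cat(rows, dim=2)

image = torch.randn(1, 3, 16, 16)
result = tiled_apply(lambda t: t + 1.0, image)
```

The "create variables for padding length" bullet hints at the extra bookkeeping real tiling needs when the resolution isn't a multiple of the tile size.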
Sayak Paul
9f06a0d1a4
[CI] Match remaining assertions from big runner ( #10521 )
* print
* remove print.
* print
* update slice.
* empty
2025-01-10 16:37:36 +05:30
Sayak Paul
a6f043a80f
[LoRA] allow big CUDA tests to run properly for LoRA (and others) ( #9845 )
* allow big lora tests to run on the CI.
* print
* print.
* print
* print
* print
* print
* more
* print
* remove print.
* remove print
* directly place on cuda.
* remove pipeline.
* remove
* fix
* fix
* spaces
* quality
* updates
* directly place flux controlnet pipeline on cuda.
* torch_device instead of cuda.
* style
* device placement.
* fixes
* add big gpu marker for mochi; rename test correctly
* address feedback
* fix
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2025-01-10 12:50:24 +05:30
Sayak Paul
daf9d0f119
[chore] remove prints from tests. ( #10505 )
remove prints from tests.
2025-01-09 14:19:43 +05:30
hlky
b13cdbb294
UNet2DModel mid_block_type ( #10469 )
2025-01-08 10:50:29 -10:00
AstraliteHeart
cb342b745a
Add AuraFlow GGUF support ( #10463 )
* Add support for loading AuraFlow models from GGUF
https://huggingface.co/city96/AuraFlow-v0.3-gguf
* Update AuraFlow documentation for GGUF, add GGUF tests and model detection.
* Address code review comments.
* Remove unused config.
---------
Co-authored-by: hlky <hlky@hlky.ac >
2025-01-08 13:23:12 +05:30
Aryan
71ad16b463
Add _no_split_modules to some models ( #10308 )
* set supports gradient checkpointing to true where necessary; add missing no split modules
* fix cogvideox tests
* update
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2025-01-08 06:34:19 +05:30
Aryan
811560b1d7
[LoRA] Support original format loras for HunyuanVideo ( #10376 )
* update
* fix make copies
* update
* add relevant markers to the integration test suite.
* add copied.
* fix-copies
* temporarily add print.
* directly place on CUDA as CPU memory isn't that big on the CI.
* fixes to fuse_lora, aryan was right.
* fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-01-07 13:18:57 +05:30
hlky
8f2253c58c
Add torch_xla and from_single_file to instruct-pix2pix ( #10444 )
* Add torch_xla and from_single_file to instruct-pix2pix
* StableDiffusionInstructPix2PixPipelineSingleFileSlowTests
* StableDiffusionInstructPix2PixPipelineSingleFileSlowTests
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-01-06 10:11:16 -10:00
Sayak Paul
d9d94e12f3
[LoRA] fix: lora unloading when using expanded Flux LoRAs. ( #10397 )
* fix: lora unloading when using expanded Flux LoRAs.
* fix argument name.
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com >
* docs.
---------
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com >
2025-01-06 08:35:05 -10:00
Sayak Paul
b5726358cf
[Tests] add slow and nightly markers to sd3 lora integration. ( #10458 )
add slow and nightly markers to sd3 lora integration.
2025-01-06 07:29:04 +05:30
Daniel Regado
68bd6934b1
IP-Adapter support for StableDiffusion3ControlNetPipeline ( #10363 )
* IP-Adapter support for `StableDiffusion3ControlNetPipeline`
* Update src/diffusers/pipelines/controlnet_sd3/pipeline_stable_diffusion_3_controlnet.py
Co-authored-by: hlky <hlky@hlky.ac >
---------
Co-authored-by: hlky <hlky@hlky.ac >
2025-01-02 10:02:32 -10:00
maxs-kan
44640c8358
Fix Flux multiple Lora loading bug ( #10388 )
* check for base_layer key in transformer state dict
* test_lora_expansion_works_for_absent_keys
* check
* Update tests/lora/test_lora_layers_flux.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* check
* test_lora_expansion_works_for_absent_keys/test_lora_expansion_works_for_extra_keys
* absent->extra
---------
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-01-02 08:34:48 -10:00
Sayak Paul
1b202c5730
[LoRA] feat: support unload_lora_weights() for Flux Control. ( #10206 )
* feat: support unload_lora_weights() for Flux Control.
* tighten test
* minor
* updates
* meta device fixes.
2024-12-25 17:27:16 +05:30
Aryan
cd991d1e1a
Fix TorchAO related bugs; revert device_map changes ( #10371 )
* Revert "Add support for sharded models when TorchAO quantization is enabled (#10256 )"
This reverts commit 41ba8c0bf6.
* update tests
* update
* update
* update
* update device map tests
* apply review suggestions
* update
* make style
* fix
* update docs
* update tests
* update workflow
* update
* improve tests
* allclose tolerance
* Update src/diffusers/models/modeling_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update tests/quantization/torchao/test_torchao.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* improve tests
* fix
* update correct slices
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-12-25 15:37:49 +05:30
Fanli Lin
023b0e0d55
[tests] fix AssertionError: Torch not compiled with CUDA enabled ( #10356 )
fix bug on xpu
2024-12-24 15:28:50 +00:00
Aryan
4b557132ce
[core] LTX Video 0.9.1 ( #10330 )
* update
* make style
* update
* update
* update
* make style
* single file related changes
* update
* fix
* update single file urls and docs
* update
* fix
2024-12-23 19:51:33 +05:30
Sayak Paul
851dfa30ae
[Tests] Fix more tests sayak ( #10359 )
* fixes to tests
* fixture
* fixes
2024-12-23 19:11:21 +05:30
Sayak Paul
ea1ba0ba53
[LoRA] test fix ( #10351 )
updates
2024-12-23 15:45:45 +05:30
Sayak Paul
c34fc34563
[Tests] QoL improvements to the LoRA test suite ( #10304 )
* misc lora test improvements.
* updates
* fixes to tests
2024-12-23 13:59:55 +05:30
Sayak Paul
76e2727b5c
[SANA LoRA] sana lora training tests and misc. ( #10296 )
* sana lora training tests and misc.
* remove push to hub
* Update examples/dreambooth/train_dreambooth_lora_sana.py
Co-authored-by: Aryan <aryan@huggingface.co >
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2024-12-23 12:35:13 +05:30
Aryan
02c777c065
[tests] Refactor TorchAO serialization fast tests ( #10271 )
refactor
2024-12-23 11:04:57 +05:30
Aryan
ffc0eaab6d
Bump minimum TorchAO version to 0.7.0 ( #10293 )
* bump min torchao version to 0.7.0
* update
2024-12-23 11:03:04 +05:30
Junsong Chen
b58868e6f4
[Sana bug] bug fix for 2K model config ( #10340 )
* fix the Positional Embedding bug in the 2K model;
* Change the default model to the BF16 one for more stable training and output
* make style
* subtract buffer size
* add compute_module_persistent_sizes
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com >
2024-12-23 08:56:25 +05:30
hlky
be2070991f
Support Flux IP Adapter ( #10261 )
* Flux IP-Adapter
* test cfg
* make style
* temp remove copied from
* fix test
* fix test
* v2
* fix
* make style
* temp remove copied from
* Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Move encoder_hid_proj to inside FluxTransformer2DModel
* merge
* separate encode_prompt, add copied from, image_encoder offload
* make
* fix test
* fix
* Update src/diffusers/pipelines/flux/pipeline_flux.py
* test_flux_prompt_embeds change not needed
* true_cfg -> true_cfg_scale
* fix merge conflict
* test_flux_ip_adapter_inference
* add fast test
* FluxIPAdapterMixin not test mixin
* Update pipeline_flux.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-12-21 17:49:58 +00:00
hlky
bf9a641f1a
Fix EMAModel test_from_pretrained ( #10325 )
2024-12-21 14:10:44 +00:00
Sayak Paul
bf6eaa8aec
[Tests] add integration tests for lora expansion stuff in Flux. ( #10318 )
add integration tests for lora expansion stuff in Flux.
2024-12-20 16:14:58 +05:30
Sayak Paul
17128c42a4
[LoRA] feat: support loading regular Flux LoRAs into Flux Control, and Fill ( #10259 )
* lora expansion with dummy zeros.
* updates
* fix working 🥳
* working.
* use torch.device meta for state dict expansion.
* tests
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com >
* fixes
* fixes
* switch to debug
* fix
* Apply suggestions from code review
Co-authored-by: Aryan <aryan@huggingface.co >
* fix stuff
* docs
---------
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com >
Co-authored-by: Aryan <aryan@huggingface.co >
2024-12-20 14:30:32 +05:30
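The "lora expansion with dummy zeros" bullet above refers to loading a regular Flux LoRA into Flux Control, whose input layer has more input features: the LoRA's down-projection is zero-padded along the new input columns so it simply ignores the extra channels. Shapes here are illustrative, not Flux's real dimensions:

```python
import torch

rank, old_in, new_in = 4, 64, 128
lora_A = torch.randn(rank, old_in)      # original LoRA down-projection
expanded_A = torch.zeros(rank, new_in)  # "dummy zeros" for the widened layer
expanded_A[:, :old_in] = lora_A         # old columns copied, new ones stay zero

x = torch.randn(2, new_in)
# The zero columns contribute nothing, so on the original channels the
# expanded adapter matches the original LoRA:
same = torch.allclose(x[:, :old_in] @ lora_A.T, x @ expanded_A.T)
```

The commits also mention doing this expansion on the `meta` device, which avoids materializing the zero padding until the weights are actually placed.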
Aryan
41ba8c0bf6
Add support for sharded models when TorchAO quantization is enabled ( #10256 )
* add sharded + device_map check
2024-12-19 15:42:20 -10:00