Fanli Lin
ec37e20972
[tests] make tests device-agnostic (part 3) ( #10437 )
* initial commit
* fix empty cache
* fix one more
* fix style
* update device functions
* update
* update
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update tests/pipelines/controlnet/test_controlnet.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update tests/pipelines/controlnet/test_controlnet.py
Co-authored-by: hlky <hlky@hlky.ac>
* with gc.collect
* update
* make style
* check_torch_dependencies
* add mps empty cache
* bug fix
* Apply suggestions from code review
---------
Co-authored-by: hlky <hlky@hlky.ac>
2025-01-21 12:15:45 +00:00
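A minimal sketch of the device-agnostic cache reset the commits above describe ("fix empty cache", "add mps empty cache", "with gc.collect"); the helper name and exact branching in src/diffusers/utils/testing_utils.py are assumptions:

```python
import gc

import torch


def flush_accelerator_cache():
    # Hypothetical name; illustrates the device-agnostic pattern only.
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()
    elif torch.backends.mps.is_available():
        torch.mps.empty_cache()  # MPS cache emptying, per "add mps empty cache"
    elif hasattr(torch, "xpu") and torch.xpu.is_available():
        torch.xpu.empty_cache()
```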
Muyang Li
158a5a87fb
Remove the FP32 Wrapper when evaluating ( #10617 )
Remove the FP32 Wrapper
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-01-21 16:16:54 +05:30
jiqing-feng
012d08b1bc
Enable dreambooth lora finetune example on other devices ( #10602 )
* enable dreambooth_lora on other devices
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* enable xpu
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* check cuda device before empty cache
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* fix comment
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
* import free_memory
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
---------
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
2025-01-21 14:09:45 +05:30
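The "import free_memory" commit suggests the example now calls a shared helper rather than a bare, CUDA-only cache clear; a sketch of that usage, assuming free_memory lives in diffusers.training_utils:

```python
from diffusers.training_utils import free_memory

# Replaces device-specific calls such as torch.cuda.empty_cache() so the
# dreambooth LoRA example also works on XPU and other accelerators.
free_memory()
```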
Sayak Paul
4ace7d0483
[chore] change licensing to 2025 from 2024. ( #10615 )
change licensing to 2025 from 2024.
2025-01-20 16:57:27 -10:00
baymax591
75a636da48
Bugfix for NPU not supporting float64 ( #10123 )
* bugfix for NPU not supporting float64
* is_mps is_npu
---------
Co-authored-by: 白超 <baichao19@huawei.com>
Co-authored-by: hlky <hlky@hlky.ac>
2025-01-20 09:35:24 -10:00
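The is_mps/is_npu check typically falls back to float32 where float64 is unavailable; a hedged sketch of that dtype guard:

```python
import torch


def float_dtype_for(device: torch.device) -> torch.dtype:
    # MPS and Ascend NPU backends do not implement float64, so computations
    # that would default to float64 fall back to float32 there.
    if device.type in ("mps", "npu"):
        return torch.float32
    return torch.float64
```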
sunxunle
4842f5d8de
chore: remove redundant words ( #10609 )
Signed-off-by: sunxunle <sunxunle@ampere.tech>
2025-01-20 08:15:26 -10:00
Sayak Paul
328e0d20a7
[training] set the rest of the blocks with requires_grad=False ( #10607 )
set the rest of the blocks with requires_grad=False.
2025-01-19 19:34:53 +05:30
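A sketch of the freezing pattern the title describes, with hypothetical names; only the blocks selected for training keep gradients:

```python
from torch import nn


def freeze_all_but(blocks: nn.ModuleList, trainable: set) -> None:
    # Hypothetical helper: disable gradients on every block that is not
    # explicitly selected for training.
    for i, block in enumerate(blocks):
        if i not in trainable:
            block.requires_grad_(False)
```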
Shenghai Yuan
23b467c79c
[core] ConsisID ( #10140 )
* Update __init__.py
* add consisid
* update consisid
* update consisid
* make style
* make style
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac>
* add doc
* make style
* Rename consisid .md to consisid.md
* Update geodiff_molecule_conformation.ipynb
* Update geodiff_molecule_conformation.ipynb
* Update geodiff_molecule_conformation.ipynb
* Update demo.ipynb
* Update pipeline_consisid.py
* make fix-copies
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* update doc & pipeline code
* fix typo
* make style
* update example
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* update example
* update example
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac>
* update
* add test and update
* remove some changes from docs
* refactor
* fix
* undo changes to examples
* remove save/load and fuse methods
* update
* link hf-doc-img & make test extremely small
* update
* add lora
* fix test
* update
* update
* change expected_diff_max to 0.4
* fix typo
* fix link
* fix typo
* update docs
* update
* remove consisid lora tests
---------
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
2025-01-19 13:10:08 +05:30
Juan Acevedo
aeac0a00f8
implementing flux on TPUs with ptxla ( #10515 )
* implementing flux on TPUs with ptxla
* add xla flux attention class
* run make style/quality
* Update src/diffusers/models/attention_processor.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/attention_processor.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* run style and quality
---------
Co-authored-by: Juan Acevedo <jfacevedo@google.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2025-01-16 08:46:02 -10:00
Leo Jiang
cecada5280
NPU adaptation for RMSNorm ( #10534 )
* NPU adaptation for RMSNorm
* NPU adaptation for RMSNorm
---------
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
2025-01-16 08:45:29 -10:00
C
17d99c4d22
[Docs] Add documentation about using ParaAttention to optimize FLUX and HunyuanVideo ( #10544 )
* add para_attn_flux.md and para_attn_hunyuan_video.md
* add enable_sequential_cpu_offload in para_attn_hunyuan_video.md
* add comment
* refactor
* fix
* fix
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* fix
* update links
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* fix
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-01-16 10:05:13 -08:00
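One of the commits adds enable_sequential_cpu_offload to the HunyuanVideo page; a hedged sketch of that knob (the checkpoint id is an assumption):

```python
import torch
from diffusers import HunyuanVideoPipeline

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
)
# Streams submodules to the GPU one at a time: slow, but fits tight VRAM.
pipe.enable_sequential_cpu_offload()
```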
hlky
08e62fe0c2
Scheduling fixes on MPS ( #10549 )
* use np.int32 in scheduling
* test_add_noise_device
* -np.int32, fixes
2025-01-16 07:45:03 -10:00
Daniel Regado
9e1b8a0017
[Docs] Update SD3 ip_adapter model_id to diffusers checkpoint ( #10597 )
Update to diffusers ip_adapter ckpt
2025-01-16 07:43:29 -10:00
hlky
0b065c099a
Move buffers to device ( #10523 )
* Move buffers to device
* add test
* named_buffers
2025-01-16 07:42:56 -10:00
Junyu Chen
b785ddb654
[DC-AE, SANA] fix SanaMultiscaleLinearAttention apply_quadratic_attention bf16 ( #10595 )
* autoencoder_dc tiling
* add tiling and slicing support in SANA pipelines
* create variables for padding length because the line becomes too long
* add tiling and slicing support in pag SANA pipelines
* revert changes to tile size
* make style
* add vae tiling test
* fix SanaMultiscaleLinearAttention apply_quadratic_attention bf16
---------
Co-authored-by: Aryan <aryan@huggingface.co>
2025-01-16 16:49:02 +05:30
Daniel Regado
e8114bd068
IP-Adapter for StableDiffusion3Img2ImgPipeline ( #10589 )
Added support for IP-Adapter
2025-01-16 09:46:22 +00:00
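A hedged sketch of what the new support enables; the base-model and adapter repo ids are assumptions, not taken from the PR:

```python
import torch
from diffusers import StableDiffusion3Img2ImgPipeline

pipe = StableDiffusion3Img2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",  # assumed base model
    torch_dtype=torch.float16,
)
pipe.load_ip_adapter("InstantX/SD3.5-Large-IP-Adapter")  # assumed adapter repo
pipe.set_ip_adapter_scale(0.6)  # blend strength of the image prompt
```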
Leo Jiang
b0c8973834
[Sana 4K] Add vae tiling option to avoid OOM ( #10583 )
Co-authored-by: J石页 <jiangshuo9@h-partners.com>
2025-01-16 02:06:07 +05:30
Sayak Paul
c944f0651f
[Chore] fix vae annotation in mochi pipeline ( #10585 )
fix vae annotation in mochi pipeline
2025-01-15 15:19:51 +05:30
Sayak Paul
bba59fb88b
[Tests] add: test to check 8bit bnb quantized models work with lora loading. ( #10576 )
* add: test to check 8bit bnb quantized models work with lora loading.
* Update tests/quantization/bnb/test_mixed_int8.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2025-01-15 13:05:26 +05:30
Sayak Paul
2432f80ca3
[LoRA] feat: support loading loras into 4bit quantized Flux models. ( #10578 )
* feat: support loading loras into 4bit quantized models.
* updates
* update
* remove weight check.
2025-01-15 12:40:40 +05:30
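A hedged sketch of the workflow this unlocks: quantize the Flux transformer to 4-bit with bitsandbytes, then load a LoRA on top (the LoRA id is a placeholder):

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

quant_config = BitsAndBytesConfig(
    load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16
)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.load_lora_weights("user/some-flux-lora")  # placeholder LoRA repo id
```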
Aryan
f9e957f011
Fix offload tests for CogVideoX and CogView3 ( #10547 )
* update
* update
2025-01-15 12:24:46 +05:30
Daniel Regado
4dec63c18e
IP-Adapter for StableDiffusion3InpaintPipeline ( #10581 )
* Added support for IP-Adapter
* Added joint_attention_kwargs property
2025-01-15 06:52:23 +00:00
Junsong Chen
3d70777379
[Sana-4K] ( #10537 )
* [Sana 4K]
add 4K support for Sana
* [Sana-4K] fix SanaPAGPipeline
* add automatic VAE tiling function;
* set clean_caption to False;
* add warnings for VAE OOM.
* style
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2025-01-14 11:48:56 -10:00
Teriks
6b727842d7
allow passing hf_token to load_textual_inversion ( #10546 )
Co-authored-by: Teriks <Teriks@users.noreply.github.com>
2025-01-14 11:48:34 -10:00
Dhruv Nair
be62c85cd9
[CI] Update HF Token on Fast GPU Model Tests ( #10570 )
update
2025-01-14 17:00:32 +05:30
Marc Sun
fbff43acc9
[FEAT] DDUF format ( #10037 )
* load and save dduf archive
* style
* switch to zip uncompressed
* updates
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* first draft
* remove print
* switch to dduf_file for consistency
* switch to huggingface hub api
* fix log
* add a basic test
* Update src/diffusers/configuration_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix
* fix variant
* change saving logic
* DDUF - Load transformers components manually (#10171)
* update hfh version
* Load transformers components manually
* load encoder from_pretrained with state_dict
* working version with transformers and tokenizer !
* add generation_config case
* fix tests
* remove saving for now
* typing
* need next version from transformers
* Update src/diffusers/configuration_utils.py
Co-authored-by: Lucain <lucain@huggingface.co>
* check path correctly
* Apply suggestions from code review
Co-authored-by: Lucain <lucain@huggingface.co>
* update
* typing
* remove check for subfolder
* quality
* revert setup changes
* oups
* more readable condition
* add loading from the hub test
* add basic docs.
* Apply suggestions from code review
Co-authored-by: Lucain <lucain@huggingface.co>
* add example
* add
* make functions private
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* minor.
* fixes
* fix
* change the precedence of parameterized.
* error out when custom pipeline is passed with dduf_file.
* updates
* fix
* updates
* fixes
* updates
* fix xfail condition.
* fix xfail
* fixes
* sharded checkpoint compat
* add test for sharded checkpoint
* add suggestions
* Update src/diffusers/models/model_loading_utils.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* from suggestions
* add class attributes to flag dduf tests
* last one
* fix logic
* remove comment
* revert changes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Lucain <lucain@huggingface.co>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2025-01-14 13:21:42 +05:30
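A hedged sketch of loading from the new single-archive format via the dduf_file argument this PR introduces; the repo and file names are assumptions:

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "DDUF/FLUX.1-dev-DDUF",       # assumed repo hosting a .dduf archive
    dduf_file="FLUX.1-dev.dduf",  # the whole pipeline ships as one file
    torch_dtype=torch.bfloat16,
)
```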
Dhruv Nair
3279751bf9
[CI] Update HF Token in Fast GPU Tests ( #10568 )
update
2025-01-14 13:04:26 +05:30
hlky
4a4afd5ece
Fix batch > 1 in HunyuanVideo ( #10548 )
2025-01-14 10:25:06 +05:30
Aryan
aa79d7da46
Test sequential cpu offload for torchao quantization ( #10506 )
test sequential cpu offload
2025-01-14 09:54:06 +05:30
Sayak Paul
74b67524b5
[Docs] Update hunyuan_video.md to rectify the checkpoint id ( #10524 )
* Update hunyuan_video.md to rectify the checkpoint id
* bfloat16
* more fixes
* don't update the checkpoint ids.
* update
* t -> T
* Apply suggestions from code review
* fix
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2025-01-13 10:59:13 -10:00
Vinh H. Pham
794f7e49a9
Implement framewise encoding/decoding in LTX Video VAE ( #10488 )
* add framewise decode
* add framewise encode, refactor tiled encode/decode
* add sanity test tiling for ltx
* run make style
* Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
---------
Co-authored-by: Pham Hong Vinh <vinhph3@vng.com.vn>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
2025-01-13 10:58:32 -10:00
Daniel Regado
9fc9c6dd71
Added IP-Adapter for StableDiffusion3ControlNetInpaintingPipeline ( #10561 )
* Added support for IP-Adapter
* Fixed Copied inconsistency
2025-01-13 10:15:36 -10:00
Omar Awile
df355ea2c6
Fix documentation for FluxPipeline ( #10563 )
Fix argument name in 8bit quantized example
Found a tiny mistake in the documentation where the text encoder model was passed to the wrong argument in the FluxPipeline.from_pretrained function.
2025-01-13 11:56:32 -08:00
Junsong Chen
ae019da9e3
[Sana] add Sana to auto-text2image-pipeline; ( #10538 )
add Sana to auto-text2image-pipeline;
2025-01-13 09:54:37 -10:00
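With Sana registered in the auto mapping, the generic text-to-image entry point resolves to the Sana pipeline; a sketch with an assumed checkpoint id:

```python
from diffusers import AutoPipelineForText2Image

# Resolves to SanaPipeline now that Sana is in the auto-text2image mapping.
pipe = AutoPipelineForText2Image.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers"  # assumed checkpoint id
)
```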
Sayak Paul
329771e542
[LoRA] improve failure handling for peft. ( #10551 )
* improve failure handling for peft.
* empty
* Update src/diffusers/loaders/peft.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2025-01-13 09:20:49 -10:00
Dhruv Nair
f7cb595428
[Single File] Fix loading Flux Dev finetunes with Comfy Prefix ( #10545 )
* update
* update
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-01-13 21:25:07 +05:30
hlky
c3478a42b9
Fix Nightly AudioLDM2PipelineFastTests ( #10556 )
* Fix Nightly AudioLDM2PipelineFastTests
* add phonemizer to setup extras test
* fix
* make style
2025-01-13 13:54:06 +00:00
hlky
980736b792
Fix train_dreambooth_lora_sd3_miniature ( #10554 )
2025-01-13 13:47:27 +00:00
hlky
50c81df4e7
Fix StableDiffusionInstructPix2PixPipelineSingleFileSlowTests ( #10557 )
2025-01-13 13:47:10 +00:00
Aryan
e1c7269720
Fix Latte output_type ( #10558 )
update
2025-01-13 19:15:59 +05:30
Sayak Paul
edb8c1bce6
[Flux] Improve true cfg condition ( #10539 )
* improve flux true cfg condition
* add test
2025-01-12 18:33:34 +05:30
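A hedged sketch of the condition being improved: a real negative prompt plus true_cfg_scale > 1 sends Flux down the true classifier-free-guidance path (two forward passes per step):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
image = pipe(
    prompt="a photo of a corgi wearing sunglasses",
    negative_prompt="blurry, low quality",
    true_cfg_scale=4.0,  # > 1 is the condition that enables true CFG
).images[0]
```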
Sayak Paul
0785dba4df
[Docs] Add negative prompt docs to FluxPipeline ( #10531 )
* add negative_prompt documentation.
* add proper docs for negative prompts
* fix-copies
* remove comment.
* Apply suggestions from code review
Co-authored-by: hlky <hlky@hlky.ac>
* fix-copies
---------
Co-authored-by: hlky <hlky@hlky.ac>
2025-01-12 18:02:46 +05:30
Muyang Li
5cda8ea521
Use randn_tensor to replace torch.randn ( #10535 )
`torch.randn` requires `generator` and `latents` on the same device, while the wrapped function `randn_tensor` does not have this issue.
2025-01-12 11:41:41 +05:30
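A sketch of the replacement described above: randn_tensor accepts a CPU-seeded generator while placing the result on the target device.

```python
import torch
from diffusers.utils.torch_utils import randn_tensor

generator = torch.Generator("cpu").manual_seed(0)
# torch.randn would require generator and output on the same device;
# randn_tensor samples with the CPU generator, then moves to the target.
latents = randn_tensor(
    (1, 4, 64, 64),
    generator=generator,
    device=torch.device("cuda"),
    dtype=torch.float16,
)
```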
Sayak Paul
36acdd7517
[Tests] skip tests properly with unittest.skip() ( #10527 )
* skip tests properly.
* more
* more
2025-01-11 08:46:22 +05:30
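The pattern the PR moves toward: skipping through unittest so tests are reported as SKIPPED instead of silently passing; a minimal example:

```python
import unittest


class ExampleTests(unittest.TestCase):
    @unittest.skip("Not supported on this backend.")
    def test_feature(self):
        ...
```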
Junyu Chen
e7db062e10
[DC-AE] support tiling for DC-AE ( #10510 )
* autoencoder_dc tiling
* add tiling and slicing support in SANA pipelines
* create variables for padding length because the line becomes too long
* add tiling and slicing support in pag SANA pipelines
* revert changes to tile size
* make style
* add vae tiling test
---------
Co-authored-by: Aryan <aryan@huggingface.co>
2025-01-11 07:15:26 +05:30
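A hedged sketch of the new switches on the DC-AE autoencoder used by SANA; the checkpoint id is an assumption:

```python
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",  # assumed checkpoint id
    torch_dtype=torch.bfloat16,
)
pipe.vae.enable_tiling()   # decode in tiles to cap peak memory
pipe.vae.enable_slicing()  # decode batch elements one at a time
```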
andreabosisio
1b0fe63656
Typo fix in the table number of a referenced paper ( #10528 )
Corrects a typo in the table number of a referenced paper in scheduling_ddim_inverse.py: a comment in the set_timesteps() method of the DDIMInverseScheduler class now cites Table 2 instead of Table 1, matching the description of the 'timestep_spacing' attribute in its __init__ method.
2025-01-10 17:15:25 -08:00
chaowenguo
d6c030fd37
add the xm.mark_step for the first denoising loop ( #10530 )
* Update rerender_a_video.py
* Update rerender_a_video.py
* Update examples/community/rerender_a_video.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update rerender_a_video.py
* make style
---------
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2025-01-10 21:03:41 +00:00
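A sketch of the XLA pattern the commit adds, assuming torch_xla is installed: xm.mark_step() materializes each denoising iteration instead of letting the lazy graph grow across the whole loop:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
latents = torch.randn(1, 4, 64, 64, device=device)

for _ in range(50):  # stand-in for the denoising schedule
    latents = latents * 0.99  # placeholder for the scheduler/UNet update
    xm.mark_step()  # cut and execute the graph for this step
```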
Sayak Paul
9f06a0d1a4
[CI] Match remaining assertions from big runner ( #10521 )
* print
* remove print.
* print
* update slice.
* empty
2025-01-10 16:37:36 +05:30
Daniel Hipke
52c05bd4cd
Add a disable_mmap option to the from_single_file loader to improve load performance on network mounts ( #10305 )
* Add no_mmap arg.
* Fix arg parsing.
* Update another method to force no mmap.
* logging
* logging2
* propagate no_mmap
* logging3
* propagate no_mmap
* logging4
* fix open call
* clean up logging
* cleanup
* fix missing arg
* update logging and comments
* Rename to disable_mmap and update other references.
* [Docs] Update ltx_video.md to remove generator from `from_pretrained()` (#10316)
Update ltx_video.md to remove generator from `from_pretrained()`
* docs: fix a mistake in docstring (#10319)
Update pipeline_hunyuan_video.py
docs: fix a mistake
* [BUG FIX] [Stable Audio Pipeline] Resolve torch.Tensor.new_zeros() TypeError in prepare_latents caused by audio_vae_length (#10306)
TypeError: new_zeros(): argument 'size' failed to unpack the object at pos 3 with error "type must be tuple of ints, but got float".
torch.Tensor.new_zeros() takes a single argument, size (int...) – a list, tuple, or torch.Size of integers defining the shape of the output tensor. In prepare_latents:
audio_vae_length = self.transformer.config.sample_size * self.vae.hop_length
audio_shape = (batch_size // num_waveforms_per_prompt, audio_channels, audio_vae_length)
...
audio = initial_audio_waveforms.new_zeros(audio_shape)
audio_vae_length evaluates to a float because self.transformer.config.sample_size returns a float, so the shape tuple contains a non-integer and new_zeros() raises.
Co-authored-by: hlky <hlky@hlky.ac>
* [docs] Fix quantization links (#10323)
Update overview.md
* [Sana] add 2K related model for Sana (#10322)
add 2K related model for Sana
* Update src/diffusers/loaders/single_file_model.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Update src/diffusers/loaders/single_file.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* make style
---------
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Leojc <liao_junchao@outlook.com>
Co-authored-by: Aditya Raj <syntaxticsugr@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Junsong Chen <cjs1020440147@icloud.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2025-01-10 15:41:04 +05:30
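A hedged sketch of the new loader option: skip memory-mapping when the checkpoint sits on a network mount (the path is a placeholder):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_single_file(
    "/mnt/nfs/checkpoints/model.safetensors",  # placeholder network path
    disable_mmap=True,  # read the file up front instead of mmap-ing it
)
```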
Sayak Paul
a6f043a80f
[LoRA] allow big CUDA tests to run properly for LoRA (and others) ( #9845 )
* allow big lora tests to run on the CI.
* print
* print.
* print
* print
* print
* print
* more
* print
* remove print.
* remove print
* directly place on cuda.
* remove pipeline.
* remove
* fix
* fix
* spaces
* quality
* updates
* directly place flux controlnet pipeline on cuda.
* torch_device instead of cuda.
* style
* device placement.
* fixes
* add big gpu marker for mochi; rename test correctly
* address feedback
* fix
---------
Co-authored-by: Aryan <aryan@huggingface.co>
2025-01-10 12:50:24 +05:30