mirror of https://github.com/huggingface/diffusers.git synced 2026-01-27 17:22:53 +03:00
Commit Graph

4832 Commits

Author SHA1 Message Date
Sayak Paul
3bf5400a64 Update sana.md with minor corrections (#10232) 2024-12-16 10:26:06 +05:30
Sayak Paul
02cbe972c3 [Tests] update always test pipelines list. (#10143)
update always test pipelines list.
2024-12-16 08:51:55 +05:30
Junsong Chen
5a196e3d46 [Sana] Add Sana, including SanaPipeline, SanaPAGPipeline, LinearAttentionProcessor, Flow-based DPM-solver and so on. (#9982)
* first add a script for DC-AE;

* DC-AE init

* replace triton with custom implementation

* 1. rename file and remove unused code;

* no longer rely on omegaconf and dataclass

* replace custom activation with diffusers activation

* remove dc_ae attention in attention_processor.py

* inherit from ModelMixin

* inherit from ConfigMixin

* dc-ae reduce to one file

* update downsample and upsample

* clean code

* support DecoderOutput

* remove get_same_padding and val2tuple

* remove autocast and some assert

* update ResBlock

* remove contents within super().__init__

* Update src/diffusers/models/autoencoders/dc_ae.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* remove opsequential

* update other blocks to support the removal of build_norm

* remove build encoder/decoder project in/out

* remove inheritance of RMSNorm2d from LayerNorm

* remove reset_parameters for RMSNorm2d

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* remove device and dtype in RMSNorm2d __init__

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/autoencoders/dc_ae.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/autoencoders/dc_ae.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/autoencoders/dc_ae.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* remove op_list & build_block

* remove build_stage_main

* change file name to autoencoder_dc

* move LiteMLA to attention.py

* align with other vae decode output;

* add DC-AE into init files;

* update

* make quality && make style;

* quick push before dgx disappears again

* update

* make style

* update

* update

* fix

* refactor

* refactor

* refactor

* update

* possibly change to nn.Linear

* refactor

* make fix-copies

* replace vae with ae

* replace get_block_from_block_type to get_block

* replace downsample_block_type from Conv to conv for consistency

* add scaling factors

* incorporate changes for all checkpoints

* make style

* move mla to attention processor file; split qkv conv to linears

* refactor

* add tests

* from original file loader

* add docs

* add standard autoencoder methods

* combine attention processor

* fix tests

* update

* minor fix

* minor fix

* minor fix & in/out shortcut rename

* minor fix

* make style

* fix paper link

* update docs

* update single file loading

* make style

* remove single file loading support; todo for DN6

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* add abstract

* 1. add DCAE into diffusers;
2. make style and make quality;

* add DCAE_HF into diffusers;

* bug fixed;

* add SanaPipeline, SanaTransformer2D into diffusers;

* add sanaLinearAttnProcessor2_0;

* first update for SanaTransformer;

* first update for SanaPipeline;

* first success run SanaPipeline;

* model output finally matches the original model with the same input;

* code update;

* code update;

* add a flow dpm-solver scripts

* 🎉[important update]
1. Integrate flow-dpm-solver into diffusers;
2. finally run successfully on both `FlowMatchEulerDiscreteScheduler` and `FlowDPMSolverMultistepScheduler`;

* 🎉🔧[important update & fix huge bugs!!]
1. add SanaPAGPipeline & several related Sana linear attention operators;
2. `SanaTransformer2DModel` now supports multi-resolution input;
3. fix the multi-scale HW bugs in SanaPipeline and SanaPAGPipeline;
4. fix the flow-dpm-solver set_timestep() init `model_output` and `lower_order_nums` bugs;

* remove prints;

* add a script to convert the official Sana checkpoint to diffusers-format safetensors.

* Update src/diffusers/models/transformers/sana_transformer_2d.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/models/transformers/sana_transformer_2d.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/models/transformers/sana_transformer_2d.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/pipelines/pag/pipeline_pag_sana.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/models/transformers/sana_transformer_2d.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/models/transformers/sana_transformer_2d.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/pipelines/sana/pipeline_sana.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update src/diffusers/pipelines/sana/pipeline_sana.py

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* update Sana for DC-AE's recent commit;

* make style && make quality

* Add StableDiffusion3PAGImg2Img Pipeline + Fix SD3 Unconditional PAG (#9932)

* fix progress bar updates in SD 1.5 PAG Img2Img pipeline

---------

Co-authored-by: Vinh H. Pham <phamvinh257@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* allow the vae to be None in `__init__` of `SanaPipeline`

* Update src/diffusers/models/transformers/sana_transformer_2d.py

Co-authored-by: hlky <hlky@hlky.ac>

* change the ae related code due to the latest update of DCAE branch;

* change the ae related code due to the latest update of DCAE branch;

* 1. change code based on AutoencoderDC;
2. fix the bug of new GLUMBConv;
3. run success;

* update to resolve review conversations.

* 1. fix bugs and run convert script successfully;
2. download ckpt from hub automatically;

* make style && make quality;

* 1. remove unused parameters in init;
2. code update;

* remove test file

* refactor; add docs; add tests; update conversion script

* make style

* make fix-copies

* refactor

* update pipelines

* pag tests and refactor

* remove sana pag conversion script

* handle weight casting in conversion script

* update conversion script

* add a processor

* 1. add bf16 pth file path;
2. add complex human instruct in pipeline;

* fix fast tests

* change gemma-2-2b-it ckpt to a non-gated repo;

* fix the pth path bug in conversion script;

* change grad ckpt to original; make style

* fix the complex_human_instruct bug and typo;

* remove dpmsolver flow scheduler

* apply review suggestions

* change the default scheduler from `FlowMatchEulerDiscreteScheduler` to `DPMSolverMultistepScheduler` with flow matching.

* fix the tokenizer.padding_side='right' bug;

* update docs

* make fix-copies

* fix imports

* fix docs

* add integration test

* update docs

* update examples

* fix convert_model_output in schedulers

* fix failing tests

---------

Co-authored-by: Junyu Chen <chenjydl2003@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: chenjy2003 <70215701+chenjy2003@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
2024-12-16 02:16:56 +05:30
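Note: the Sana entry above introduces `SanaPipeline`, defaulting to `DPMSolverMultistepScheduler` in flow-matching mode. A minimal usage sketch (the checkpoint id and call arguments below are assumptions, not taken from the commit):

```python
import torch
from diffusers import SanaPipeline

pipe = SanaPipeline.from_pretrained(
    "Efficient-Large-Model/Sana_1600M_1024px_diffusers",  # assumed repo id
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

image = pipe(
    prompt="a cyberpunk cat holding a neon sign",
    num_inference_steps=20,
    guidance_scale=5.0,
).images[0]
image.save("sana.png")
```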
Aryan
22c4f079b1 Test error raised when loading normal and expanding loras together in Flux (#10188)
* add test for expanding lora and normal lora error

* Update tests/lora/test_lora_layers_flux.py

* fix things.

* Update src/diffusers/loaders/peft.py

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-15 21:46:21 +05:30
Junjie
96a9097445 Add offload option in flux-control training (#10225)
* Add offload option in flux-control training

* Update examples/flux-control/train_control_flux.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* modify help message

* fix format

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-15 20:49:17 +05:30
Juan Acevedo
a5f35ee473 add reshape to fix use_memory_efficient_attention in flax (#7918)
Co-authored-by: Juan Acevedo <jfacevedo@google.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
2024-12-14 17:45:45 +01:00
hlky
63243406ba Use torch in get_2d_sincos_pos_embed and get_3d_sincos_pos_embed (#10156)
* Use torch in get_2d_sincos_pos_embed

* Use torch in get_3d_sincos_pos_embed

* get_1d_sincos_pos_embed_from_grid in LatteTransformer3DModel

* deprecate

* move deprecate, make private
2024-12-13 10:13:38 -10:00
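Note: the entry above ports the sincos positional-embedding helpers from numpy to torch. For context, the underlying 1D sincos construction looks roughly like the sketch below (illustrative, not the actual diffusers implementation):

```python
import torch

def sincos_1d(embed_dim: int, pos: torch.Tensor) -> torch.Tensor:
    """Return (M, embed_dim) sin/cos embeddings for M positions (illustrative sketch)."""
    omega = torch.arange(embed_dim // 2, dtype=torch.float64) / (embed_dim / 2.0)
    omega = 1.0 / 10000**omega                                          # (D/2,) frequencies
    out = pos.to(torch.float64).reshape(-1)[:, None] * omega[None, :]   # (M, D/2)
    return torch.cat([torch.sin(out), torch.cos(out)], dim=1)           # (M, D)

emb = sincos_1d(64, torch.arange(16))  # e.g. a 16-position grid, 64-dim embedding
print(emb.shape)  # torch.Size([16, 64])
```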
Miguel Farinha
6bd30ba748 Allow image resolutions multiple of 8 instead of 64 in SVD pipeline (#6646)
allow resolutions that are not a multiple of 64 in SVD

Co-authored-by: Miguel Farinha <mignha@CSL15958.local>
Co-authored-by: hlky <hlky@hlky.ac>
2024-12-13 16:17:15 +00:00
Linoy Tsaban
cef0e3677e [RF inversion community pipeline] add eta_decay (#10199)
* add decay

* add decay

* style
2024-12-13 11:04:26 +02:00
skotapati
ec9bfa9e14 Remove mps workaround for fp16 GELU, which is now supported natively (#10133)
* Remove mps workaround for fp16 GELU, which is now supported natively

---------

Co-authored-by: hlky <hlky@hlky.ac>
2024-12-12 16:05:59 -10:00
Bios
bdbaea8f64 update StableDiffusion3Img2ImgPipeline.add image size validation (#10166)
* update StableDiffusion3Img2ImgPipeline.add image size validation

---------

Co-authored-by: hlky <hlky@hlky.ac>
2024-12-12 12:32:18 -10:00
hlky
e8b65bffa2 refactor StableDiffusionXLControlNetUnion (#10200)
mode
2024-12-12 12:21:27 -10:00
hlky
f2d348d904 Remove negative_* from SDXL callback (#10203)
* Remove `negative_*` from SDXL callback

* Change example and add XL version
2024-12-12 20:58:50 +00:00
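Note: the entry above trims `negative_*` tensors from the SDXL step-end callback inputs. For orientation, a sketch of the callback contract itself (the checkpoint id is an assumption):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

def log_latents(pipeline, step, timestep, callback_kwargs):
    # Inspect whichever tensors were requested via callback_on_step_end_tensor_inputs.
    latents = callback_kwargs["latents"]
    print(step, int(timestep), tuple(latents.shape))
    return callback_kwargs  # must return the (possibly modified) kwargs

image = pipe(
    "a photo of an astronaut riding a horse",
    callback_on_step_end=log_latents,
    callback_on_step_end_tensor_inputs=["latents"],
).images[0]
```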
Pauline Bailly-Masson
c002724dd5 Ci update tpu (#10197)
* Update nightly_tests.yml for TPU CI

* Update push_tests.yml
2024-12-12 23:54:41 +05:30
Aryan
96c376a5ff [core] LTX Video (#10021)
* transformer

* make style & make fix-copies

* transformer

* add transformer tests

* 80% vae

* make style

* make fix-copies

* fix

* undo cogvideox changes

* update

* update

* match vae

* add docs

* t2v pipeline working; scheduler needs to be checked

* docs

* add pipeline test

* update

* update

* make fix-copies

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* update

* copy t2v to i2v pipeline

* update

* apply review suggestions

* update

* make style

* remove framewise encoding/decoding

* pack/unpack latents

* image2video

* update

* make fix-copies

* update

* update

* rope scale fix

* debug layerwise code

* remove debug

* Apply suggestions from code review

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* propagate precision changes to i2v pipeline

* remove downcast

* address review comments

* fix comment

* address review comments

* [Single File] LTX support for loading original weights (#10135)

* from original file mixin for ltx

* undo config mapping fn changes

* update

* add single file to pipelines

* update docs

* Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py

* Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py

* rename classes based on ltx review

* point to original repository for inference

* make style

* resolve conflicts correctly

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-12-12 16:21:28 +05:30
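Note: the LTX Video entry above adds the transformer, VAE, and text-to-video/image-to-video pipelines. A minimal text-to-video sketch (the checkpoint id, frame count, and step count are assumptions):

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)  # assumed repo id
pipe.to("cuda")

video = pipe(
    prompt="a river winding through a pine forest, aerial shot",
    num_frames=65,
    num_inference_steps=30,
).frames[0]
export_to_video(video, "ltx.mp4", fps=24)
```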
Sayak Paul
8170dc368d [WIP][Training] Flux Control LoRA training script (#10130)
* update

* add

* update

* add control-lora conversion script; make flux loader handle norms; fix rank calculation assumption

* control lora updates

* remove copied-from

* create separate pipelines for flux control

* make fix-copies

* update docs

* add tests

* fix

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* remove control lora changes

* apply suggestions from review

* Revert "remove control lora changes"

This reverts commit 73cfc519c9.

* update

* update

* improve log messages

* updates.

* updates

* support register_config.

* fix

* fix

* fix

* updates

* updates

* updates

* fix-copies

* fix

* apply suggestions from review

* add tests

* remove conversion script; enable on-the-fly conversion

* bias -> lora_bias.

* fix-copies

* peft.py

* fix lora conversion

* changes

Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>

* fix-copies

* updates for tests

* fix

* alpha_pattern.

* add a test for varied lora ranks and alphas.

* revert changes in num_channels_latents = self.transformer.config.in_channels // 8

* revert moe

* add a sanity check on unexpected keys when loading norm layers.

* control lora.

* fixes

* fixes

* fixes

* tests

* reviewer feedback

* fix

* proper peft version for lora_bias

* fix-copies

* updates

* updates

* updates

* remove debug code

* update docs

* integration tests

* nits

* fuse and unload.

* fix

* add slices.

* more updates.

* button up readme

* train()

* add full fine-tuning version.

* fixes

* Apply suggestions from code review

Co-authored-by: Aryan <aryan@huggingface.co>

* set_grads_to_none remove.

* readme

---------

Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
2024-12-12 15:34:57 +05:30
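Note: the training script above produces Control LoRAs for Flux. A sketch of loading such a LoRA for inference (repo ids and the conditioning image are placeholders/assumptions):

```python
import torch
from diffusers import FluxControlPipeline
from diffusers.utils import load_image

pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("black-forest-labs/FLUX.1-Canny-dev-lora")  # or a locally trained control LoRA

control_image = load_image("canny_edges.png")  # placeholder conditioning image
image = pipe(
    prompt="a futuristic city at dusk",
    control_image=control_image,
    num_inference_steps=28,
    guidance_scale=30.0,
).images[0]
image.save("flux_control.png")
```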
Sayak Paul
25f3e91c81 [CI] merge peft pr workflow into the main pr workflow. (#10042)
* merge peft pr workflow into the main pr workflow.

* remove latest

---------

Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2024-12-12 13:13:09 +05:30
Sayak Paul
a6a18cff5e [LoRA] add a test to ensure set_adapters() and attn kwargs outs match (#10110)
* add a test to ensure set_adapters() and attn kwargs outs match

* remove print

* fix

* Apply suggestions from code review

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>

* assertFalse.

---------

Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2024-12-12 12:52:50 +05:30
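Note: the test above checks that scaling a LoRA via `set_adapters()` matches scaling via attention kwargs. A rough sketch of the two call paths (the checkpoint and LoRA ids are placeholders):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/some-style-lora", adapter_name="style")  # placeholder LoRA

# Path 1: scale through the adapter API.
pipe.set_adapters(["style"], adapter_weights=[0.7])
image_a = pipe("a castle on a hill").images[0]

# Path 2: full adapter weight, scaled at attention time via kwargs.
pipe.set_adapters(["style"], adapter_weights=[1.0])
image_b = pipe("a castle on a hill", cross_attention_kwargs={"scale": 0.7}).images[0]
```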
Canva
7db9463e52 Add support for XFormers in SD3 (#8583)
* Add support for XFormers in SD3

* sd3 xformers test

* sd3 xformers quality

* sd3 xformers update

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-12-12 12:05:39 +05:30
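Note: with the entry above, xformers memory-efficient attention can be toggled on SD3. A quick sketch (requires the xformers package to be installed):

```python
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers", torch_dtype=torch.float16
).to("cuda")
pipe.enable_xformers_memory_efficient_attention()

image = pipe("a watercolor fox in the snow", num_inference_steps=28).images[0]
```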
Ethan Smith
26e80e0143 fix min-snr implementation (#8466)
* fix min-snr implementation

https://github.com/kohya-ss/sd-scripts/blob/main/library/custom_train_functions.py#L66

* Update train_dreambooth.py

fix variable name mse_loss_weights

* fix divisor

* make style

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-12-12 09:55:59 +05:30
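Note: the min-SNR fix above concerns the loss-weighting divisor in the training scripts. A hedged sketch of min-SNR-gamma weighting (variable names are illustrative; `snr_gamma` is the usual hyperparameter, e.g. 5.0):

```python
import torch

def min_snr_weights(snr: torch.Tensor, snr_gamma: float, prediction_type: str) -> torch.Tensor:
    # Clamp SNR at gamma, then normalize by the prediction target's own scale.
    weights = torch.stack([snr, snr_gamma * torch.ones_like(snr)], dim=1).min(dim=1)[0]
    if prediction_type == "epsilon":
        weights = weights / snr          # the divisor touched by the "fix divisor" commit above
    elif prediction_type == "v_prediction":
        weights = weights / (snr + 1)
    return weights

print(min_snr_weights(torch.tensor([0.5, 2.0, 50.0]), 5.0, "epsilon"))
```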
hlky
914a585be8 Add ControlNetUnion (#10131)
* ControlNetUnion model
2024-12-11 07:07:50 -10:00
Dhruv Nair
ad40e26515 [Single File] Add single file support for AutoencoderDC (#10183)
* update

* update

* update
2024-12-11 16:57:36 +05:30
SahilCarterr
d041dd5040 Added Error when len(gligen_images ) is not equal to len(gligen_phrases) in StableDiffusionGLIGENTextImagePipeline (#10176)
* added check value error

* fix style
2024-12-11 08:59:41 +00:00
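Note: the entry above adds a length check between `gligen_images` and `gligen_phrases`. A small sketch of that kind of validation (illustrative, not the exact error message):

```python
from PIL import Image

gligen_phrases = ["a birthday cake", "a teddy bear"]
gligen_images = [Image.new("RGB", (512, 512))]  # deliberately one image short

if len(gligen_images) != len(gligen_phrases):
    raise ValueError(
        f"`gligen_images` has {len(gligen_images)} items but `gligen_phrases` has "
        f"{len(gligen_phrases)}; both lists must be the same length."
    )
```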
Jonathan Yin
0967593400 Fix NoneType attribute error when loading multiple Flux loras (#10182)
Fix NoneType attribute error
2024-12-11 13:33:33 +05:30
Linoy Tsaban
43534a8d1f [community pipeline rf-inversion] - fix example in doc (#10179)
* fix example in doc

* remove redundancies

* change param
2024-12-11 00:30:05 +02:00
Darshil Jariwala
65b98b5da4 Add PAG Support for Stable Diffusion Inpaint Pipeline (#9386)
* using sd inpaint pipeline and sdxl pag inpaint pipeline to add changes

* using sd inpaint pipeline and sdxl pag inpaint pipeline to add changes

* finished the call function

* added auto pipeline

* merging diffusers

* ready to test

* ready to test

* added copied from and removed unnecessary tests

* make style changes

* doc changes

* updating example doc string

* style fix

* init

* adding imports

* quality

* Update src/diffusers/pipelines/pag/pipeline_pag_sd_inpaint.py

* make

* Update tests/pipelines/pag/test_pag_sd_inpaint.py

* slice and size

* slice

---------

Co-authored-by: Darshil Jariwala <darshiljariwala@Darshils-MacBook-Air.local>
Co-authored-by: Darshil Jariwala <jariwala.darshil2002@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
2024-12-10 21:06:31 +00:00
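Note: the entry above wires PAG into the SD inpaint pipeline and the auto pipeline. A usage sketch (the checkpoint id and image paths are assumptions):

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

pipe = AutoPipelineForInpainting.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-inpainting",  # assumed repo id
    enable_pag=True,
    torch_dtype=torch.float16,
).to("cuda")

init_image = load_image("room.png")   # placeholder inputs
mask_image = load_image("mask.png")
image = pipe(
    prompt="a red velvet sofa",
    image=init_image,
    mask_image=mask_image,
    pag_scale=3.0,
).images[0]
```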
Aryan
49a9143479 Flux Control LoRA (#9999)
* update


---------

Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-10 09:08:13 -10:00
hlky
4c4b323c1f Use torch in get_3d_rotary_pos_embed/_allegro (#10161)
Use torch in get_3d_rotary_pos_embed/_allegro
2024-12-10 08:56:26 -10:00
Soof Golan
22d3a82651 Improve post-processing performance (#10170)
* Use multiplication instead of division
* Add fast path when denormalizing all or none of the images
2024-12-10 08:07:26 -10:00
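Note: the post-processing entry above replaces a division with a multiplication and short-circuits the all/none cases when denormalizing. An illustrative sketch of that idea (not the exact `VaeImageProcessor` code):

```python
import torch

def denormalize(images: torch.Tensor, do_denormalize: list) -> torch.Tensor:
    # Map [-1, 1] -> [0, 1]; x * 0.5 + 0.5 avoids the slower division.
    if all(do_denormalize):
        return (images * 0.5 + 0.5).clamp(0, 1)
    if not any(do_denormalize):
        return images
    return torch.stack(
        [(img * 0.5 + 0.5).clamp(0, 1) if d else img for img, d in zip(images, do_denormalize)]
    )

batch = torch.randn(4, 3, 64, 64)
out = denormalize(batch, [True, True, False, True])
```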
Linoy Tsaban
c9e4fab42c [community pipeline] Add RF-inversion Flux pipeline (#9816)
* initial commit

* update denoising loop

* fix scheduling

* style

* fix import

* fixes

* fixes

* style

* fixes

* change invert

* change denoising & check inputs

* shape & timesteps fixes

* timesteps fixes

* style

* remove redundancies

* small changes

* update documentation a bit

* update documentation a bit

* update documentation a bit

* style

* change strength param, remove redundancies

* style

* forward ode loop change

* add inversion progress bar

* fix image_seq_len

* revert to strength but == 1 by default.

* style

* add "copied from..." comments

* credit authors

* make style

* return inversion outputs without self-assigning

* adjust denoising loop to generate regular images if inverted latents are not provided

* adjust denoising loop to generate regular images if inverted latents are not provided

* fix import

* comment

* remove redundant line

* modify comment on ti

* Update examples/community/pipeline_flux_rf_inversion.py

Co-authored-by: hlky <hlky@hlky.ac>

* Update examples/community/pipeline_flux_rf_inversion.py

Co-authored-by: hlky <hlky@hlky.ac>

* Update examples/community/pipeline_flux_rf_inversion.py

Co-authored-by: hlky <hlky@hlky.ac>

* Update examples/community/pipeline_flux_rf_inversion.py

Co-authored-by: hlky <hlky@hlky.ac>

* Update examples/community/pipeline_flux_rf_inversion.py

Co-authored-by: hlky <hlky@hlky.ac>

* Update examples/community/pipeline_flux_rf_inversion.py

Co-authored-by: hlky <hlky@hlky.ac>

* Update examples/community/pipeline_flux_rf_inversion.py

Co-authored-by: hlky <hlky@hlky.ac>

* fix syntax error

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
2024-12-10 12:41:12 +02:00
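Note: the community pipeline above is loaded by file name through `custom_pipeline`; the invert-then-edit workflow itself is documented in the pipeline docstring. A loading sketch (the base checkpoint id is the usual Flux dev repo):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    custom_pipeline="pipeline_flux_rf_inversion",
    torch_dtype=torch.bfloat16,
).to("cuda")
# pipe.invert(...) produces the inverted latents that the edit pass then consumes,
# per the denoising-loop changes described in the commit messages above.
```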
Aryan
0e50401e34 [Single file] Support revision argument when loading single file config (#10168)
update
2024-12-10 14:12:13 +05:30
Yu Zheng
6131a93b96 support sd3.5 for controlnet example (#9860)
* support sd3.5 in controlnet

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-12-06 10:59:27 -10:00
Juan Acevedo
3cb7b8628c Update ptxla training (#9864)
* update ptxla example

---------

Co-authored-by: Juan Acevedo <jfacevedo@google.com>
Co-authored-by: Pei Zhang <zpcore@gmail.com>
Co-authored-by: Pei Zhang <piz@google.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pei Zhang <pei@Peis-MacBook-Pro.local>
Co-authored-by: hlky <hlky@hlky.ac>
2024-12-06 10:50:13 -10:00
Sayak Paul
fa3a9100be [LoRA] deprecate save_attn_procs(). (#10126)
deprecate save_attn_procs().
2024-12-06 10:38:57 -10:00
zhangp365
188bca3084 fixed a dtype bfloat16 bug in torch_utils.py (#10125)
* fixed a dtype bfloat16 bug in torch_utils.py

when generating a 1024*1024 image with bfloat16 dtype, there is an exception:
  File "/opt/conda/lib/python3.10/site-packages/diffusers/utils/torch_utils.py", line 107, in fourier_filter
    x_freq = fftn(x, dim=(-2, -1))
RuntimeError: Unsupported dtype BFloat16

* remove whitespace in torch_utils.py

* Update src/diffusers/utils/torch_utils.py

* Update torch_utils.py

---------

Co-authored-by: hlky <hlky@hlky.ac>
2024-12-06 10:36:39 -10:00
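Note: the traceback above comes from `torch.fft.fftn` lacking a BFloat16 kernel. The usual workaround, and presumably the shape of the fix, is to upcast around the transform (illustrative sketch, not the exact patch):

```python
import torch
from torch.fft import fftn, ifftn

x = torch.randn(1, 4, 128, 128, dtype=torch.bfloat16)

x_freq = fftn(x.to(torch.float32), dim=(-2, -1))       # upcast for the FFT
x_back = ifftn(x_freq, dim=(-2, -1)).real.to(x.dtype)  # cast back to bfloat16
print(x_back.dtype)  # torch.bfloat16
```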
Junsong Chen
cd892041e2 [DC-AE] Add the official Deep Compression Autoencoder code (32x, 64x, 128x compression ratio); (#9708)
* first add a script for DC-AE;

* DC-AE init

* replace triton with custom implementation

* 1. rename file and remove unused code;

* no longer rely on omegaconf and dataclass

* replace custom activation with diffusers activation

* remove dc_ae attention in attention_processor.py

* inherit from ModelMixin

* inherit from ConfigMixin

* dc-ae reduce to one file

* update downsample and upsample

* clean code

* support DecoderOutput

* remove get_same_padding and val2tuple

* remove autocast and some assert

* update ResBlock

* remove contents within super().__init__

* Update src/diffusers/models/autoencoders/dc_ae.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* remove opsequential

* update other blocks to support the removal of build_norm

* remove build encoder/decoder project in/out

* remove inheritance of RMSNorm2d from LayerNorm

* remove reset_parameters for RMSNorm2d

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* remove device and dtype in RMSNorm2d __init__

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/autoencoders/dc_ae.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/autoencoders/dc_ae.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* Update src/diffusers/models/autoencoders/dc_ae.py

Co-authored-by: YiYi Xu <yixu310@gmail.com>

* remove op_list & build_block

* remove build_stage_main

* change file name to autoencoder_dc

* move LiteMLA to attention.py

* align with other vae decode output;

* add DC-AE into init files;

* update

* make quality && make style;

* quick push before dgx disappears again

* update

* make style

* update

* update

* fix

* refactor

* refactor

* refactor

* update

* possibly change to nn.Linear

* refactor

* make fix-copies

* replace vae with ae

* replace get_block_from_block_type to get_block

* replace downsample_block_type from Conv to conv for consistency

* add scaling factors

* incorporate changes for all checkpoints

* make style

* move mla to attention processor file; split qkv conv to linears

* refactor

* add tests

* from original file loader

* add docs

* add standard autoencoder methods

* combine attention processor

* fix tests

* update

* minor fix

* minor fix

* minor fix & in/out shortcut rename

* minor fix

* make style

* fix paper link

* update docs

* update single file loading

* make style

* remove single file loading support; todo for DN6

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* add abstract

---------

Co-authored-by: Junyu Chen <chenjydl2003@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: chenjy2003 <70215701+chenjy2003@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-12-07 01:01:51 +05:30
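Note: the DC-AE entry above adds `AutoencoderDC`. A minimal encode/decode round-trip sketch (the checkpoint id and exact output attributes are assumptions based on the converted checkpoints this PR targets):

```python
import torch
from diffusers import AutoencoderDC

ae = AutoencoderDC.from_pretrained(
    "mit-han-lab/dc-ae-f32c32-sana-1.0-diffusers",  # assumed repo id (f32 = 32x spatial compression)
    torch_dtype=torch.float32,
)

x = torch.randn(1, 3, 512, 512)       # stand-in image tensor in [-1, 1]
latent = ae.encode(x).latent          # expected shape: (1, 32, 16, 16) for the f32c32 variant
recon = ae.decode(latent).sample      # back to (1, 3, 512, 512)
print(latent.shape, recon.shape)
```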
suzukimain
6394d905da [community] Load Models from Sources like Civitai into Existing Pipelines (#9986)
* Added example of model search.

* Combine processing into one file

* Add parameters for base model

* Bug Fixes

* bug fix

* Create README.md

* Update search_for_civitai_and_HF.py

* Create requirements.txt

* bug fix

* Update README.md

* bug fix

* Correction of typos

* Update examples/model_search/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update examples/model_search/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update examples/model_search/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update examples/model_search/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update examples/model_search/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update examples/model_search/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* apply the changes

* Replace search_for_civitai_and_HF.py with pipeline_easy.py

* Update examples/model_search/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update examples/model_search/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update examples/model_search/README.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update README.md

* Organize the table of parameters

* Update README.md

* Update README.md

* Update README.md

* make style

* Fixing the style of pipeline

* Fix pipeline style

* fix

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-12-06 07:48:45 -08:00
Aryan
18f9b99088 Remove duplicate checks for len(generator) != batch_size when generator is a list (#10134)
remove duplicate checks
2024-12-06 11:29:10 +00:00
Aritra Roy Gosthipaty
bf64b32652 [Guide] Quantize your Diffusion Models with bnb (#10012)
* chore: initial draft

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* chore: link in place

* chore: review suggestions

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* chore: review suggestions

* Update docs/source/en/quantization/bitsandbytes.md

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* review suggestions

* chore: review suggestions

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* adding same changes to 4 bit section

* review suggestions

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-12-05 13:54:03 -08:00
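Note: the guide above walks through bitsandbytes quantization. A condensed 4-bit sketch along those lines (checkpoint ids are assumptions; requires the bitsandbytes package):

```python
import torch
from diffusers import BitsAndBytesConfig, SD3Transformer2DModel, StableDiffusion3Pipeline

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = SD3Transformer2DModel.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    subfolder="transformer",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-large",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()
image = pipe("a whimsical treehouse library", num_inference_steps=28).images[0]
```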
SahilCarterr
3335e2262d [FIX] Bug in FluxPosEmbed (#10115)
* Fix get_1d_rotary_pos_embed in embedding.py

* Update embeddings.py

---------

Co-authored-by: hlky <hlky@hlky.ac>
2024-12-05 13:12:48 +00:00
Sayak Paul
65ab1052b8 [Tests] xfail incompatible SD configs. (#10127)
* xfail incompatible SD configs.

* fix
2024-12-05 15:11:52 +05:30
Sayak Paul
40fc389c44 [Tests] fix condition argument in xfail. (#10099)
* fix condition argument in xfail.

* revert init changes.
2024-12-05 10:13:45 +05:30
Aryan
98d0cd5778 Use torch.device instead of current device index for BnB quantizer (#10069)
* update

* apply review suggestion

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-05 08:05:24 +05:30
Steven Liu
0d11ab26c4 [docs] load_lora_adapter (#10119)
* load_lora_adapter

* save

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-05 08:00:03 +05:30
YiYi Xu
243d9a4986 pass attn mask arg for flux (#10122) 2024-12-04 14:22:36 -10:00
linjiapro
96220390a2 Fix a bug for SD35 control net training and improve control net block index (#10065)
* wip

---------

Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-04 14:20:05 -10:00
zhangp365
73dac0c49e Fix a bug in the state dict judgment in ip_adapter.py. (#10095)
* fix a judging state dict bug in ip_adapter.py

* make

---------

Co-authored-by: hlky <hlky@hlky.ac>
2024-12-04 14:03:43 -10:00
Linoy Tsaban
04bba38725 [Flux Redux] add prompt & multiple image input (#10056)
* add multiple prompts to flux redux

---------

Co-authored-by: hlky <hlky@hlky.ac>
2024-12-04 08:48:32 -10:00
hlky
a2d424eb2e Add sigmas to pipelines using FlowMatch (#10116) 2024-12-04 08:42:47 -10:00
Parag Ekbote
25ddc7945b Fix Broken Links in README (#10117)
Update broken links in README.
2024-12-04 09:04:31 -08:00