Yuqian Hong
4fa24591a3
create a script to train autoencoderkl ( #10605 )
...
* create a script to train vae
* update main.py
* update train_autoencoderkl.py
* update train_autoencoderkl.py
* add a check of --pretrained_model_name_or_path and --model_config_name_or_path
* remove the comment, remove diffusers from requirements.txt, add a validation_image note
* update autoencoderkl.py
* quality
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-01-27 16:41:34 +05:30
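The entry above mentions adding a check of `--pretrained_model_name_or_path` and `--model_config_name_or_path`. A minimal sketch of such a guard (hypothetical names aside from the two flags; this is not the training script's actual code) could look like:

```python
import argparse

def parse_args(argv=None):
    # Sketch: the model config must come from exactly one source, so
    # passing both flags at once is rejected up front.
    parser = argparse.ArgumentParser(description="Train an AutoencoderKL (sketch)")
    parser.add_argument("--pretrained_model_name_or_path", type=str, default=None)
    parser.add_argument("--model_config_name_or_path", type=str, default=None)
    args = parser.parse_args(argv)
    if args.pretrained_model_name_or_path is not None and args.model_config_name_or_path is not None:
        raise ValueError(
            "Cannot set both --pretrained_model_name_or_path and --model_config_name_or_path."
        )
    return args

args = parse_args(["--model_config_name_or_path", "config.json"])
print(args.model_config_name_or_path)
```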
Jacob Helwig
4f3ec5364e
Add sigmoid scheduler in scheduling_ddpm.py docs ( #10648 )
...
Sigmoid scheduler in scheduling_ddpm.py docs
2025-01-26 15:37:20 -08:00
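For context on the sigmoid schedule documented above: DDPM's "sigmoid" beta schedule ramps betas between `beta_start` and `beta_end` along a sigmoid over a fixed range. A stdlib-only sketch of the idea (diffusers computes this with torch tensors; the exact endpoints here are assumptions):

```python
import math

# Sketch: betas follow a sigmoid ramp from beta_start to beta_end,
# evaluated over an assumed [-6, 6] input range.
def sigmoid_betas(num_steps, beta_start=0.0001, beta_end=0.02):
    xs = [-6 + 12 * i / (num_steps - 1) for i in range(num_steps)]
    return [beta_start + (beta_end - beta_start) / (1 + math.exp(-x)) for x in xs]

betas = sigmoid_betas(10)
print(betas[0] < betas[-1])  # monotonically increasing ramp -> True
```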
Leo Jiang
07860f9916
NPU Adaptation for Sana ( #10409 )
...
* NPU Adaptation for Sana
---------
Co-authored-by: J石页 <jiangshuo9@h-partners.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-01-24 09:08:52 -10:00
Wenhao Sun
87252d80c3
Add pipeline_stable_diffusion_xl_attentive_eraser ( #10579 )
...
* add pipeline_stable_diffusion_xl_attentive_eraser
* add pipeline_stable_diffusion_xl_attentive_eraser_make_style
* make style and add example output
* update Docs
Co-authored-by: Other Contributor <a457435687@126.com >
* add Oral
Co-authored-by: Other Contributor <a457435687@126.com >
* update_review
Co-authored-by: Other Contributor <a457435687@126.com >
* update_review_ms
Co-authored-by: Other Contributor <a457435687@126.com >
---------
Co-authored-by: Other Contributor <a457435687@126.com >
2025-01-24 13:52:45 +00:00
Sayak Paul
5897137397
[chore] add a script to extract loras from full fine-tuned models ( #10631 )
...
* feat: add a lora extraction script.
* updates
2025-01-24 11:50:36 +05:30
Yaniv Galron
a451c0ed14
removing redundant requires_grad = False ( #10628 )
...
We already set the UNet to requires_grad=False at line 506
Co-authored-by: Aryan <aryan@huggingface.co >
2025-01-24 03:25:33 +05:30
hlky
37c9697f5b
Add IP-Adapter example to Flux docs ( #10633 )
...
* Add IP-Adapter example to Flux docs
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-01-23 22:15:33 +05:30
Raul Ciotescu
9684c52adf
width and height are mixed-up ( #10629 )
...
width and height variables were mixed up
2025-01-23 06:40:22 -10:00
Steven Liu
5483162d12
[docs] uv installation ( #10622 )
...
* uv
* feedback
2025-01-23 08:34:51 -08:00
Sayak Paul
d77c53b6d2
[docs] fix image path in para attention docs ( #10632 )
...
fix image path in para attention docs
2025-01-23 08:22:42 -08:00
Sayak Paul
78bc824729
[Tests] modify the test slices for the failing flax test ( #10630 )
...
* fixes
* fixes
* fixes
* updates
2025-01-23 12:10:24 +05:30
kahmed10
04d40920a7
add onnxruntime-migraphx as part of check for onnxruntime in import_utils.py ( #10624 )
...
add onnxruntime-migraphx to import_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2025-01-23 07:49:51 +05:30
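The onnxruntime change above works because the package ships under several distribution names (onnxruntime, onnxruntime-gpu, onnxruntime-migraphx, ...) that all expose the same importable module. An illustrative, simplified sketch of probing a list of candidates (not diffusers' actual `import_utils` code, which checks distribution metadata):

```python
import importlib.util

# Sketch: report whether any of several candidate module names is importable.
def any_module_available(candidates):
    return any(importlib.util.find_spec(name) is not None for name in candidates)

# "json" stands in for an installed candidate; the other name is deliberately bogus.
print(any_module_available(["json", "zzz_not_a_module"]))   # True
print(any_module_available(["zzz_not_a_module"]))           # False
```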
Dhruv Nair
8d6f6d6b66
[CI] Update HF_TOKEN in all workflows ( #10613 )
...
update
2025-01-22 20:03:41 +05:30
Aryan
ca60ad8e55
Improve TorchAO error message ( #10627 )
...
improve error message
2025-01-22 19:50:02 +05:30
Aryan
beacaa5528
[core] Layerwise Upcasting ( #10347 )
...
* update
* update
* make style
* remove dynamo disable
* add coauthor
Co-Authored-By: Dhruv Nair <dhruv.nair@gmail.com >
* update
* update
* update
* update mixin
* add some basic tests
* update
* update
* non_blocking
* improvements
* update
* norm.* -> norm
* apply suggestions from review
* add example
* update hook implementation to the latest changes from pyramid attention broadcast
* deinitialize should raise an error
* update doc page
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update docs
* update
* refactor
* fix _always_upcast_modules for asym ae and vq_model
* fix lumina embedding forward to not depend on weight dtype
* refactor tests
* add simple lora inference tests
* _always_upcast_modules -> _precision_sensitive_module_patterns
* remove todo comments about review; revert changes to self.dtype in unets because .dtype on ModelMixin should be able to handle fp8 weight case
* check layer dtypes in lora test
* fix UNet1DModelTests::test_layerwise_upcasting_inference
* _precision_sensitive_module_patterns -> _skip_layerwise_casting_patterns based on feedback
* skip test in NCSNppModelTests
* skip tests for AutoencoderTinyTests
* skip tests for AutoencoderOobleckTests
* skip tests for UNet1DModelTests - unsupported pytorch operations
* layerwise_upcasting -> layerwise_casting
* skip tests for UNetRLModelTests; needs next pytorch release for currently unimplemented operation support
* add layerwise fp8 pipeline test
* use xfail
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
* add assertion with fp32 comparison; add tolerance to fp8-fp32 vs fp32-fp32 comparison (required for a few models' test to pass)
* add note about memory consumption on tesla CI runner for failing test
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-01-22 19:49:37 +05:30
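The layerwise-casting entry above renames `_always_upcast_modules` to `_skip_layerwise_casting_patterns` and notes a `norm.* -> norm` pattern change. A hedged sketch of how pattern-based skipping might work (illustrative only; the pattern list and matching here are assumptions, not the library's implementation):

```python
import re

# Sketch: a submodule is kept in high precision (skipped from fp8 casting)
# if its qualified name matches any skip pattern via regex search.
def matches_skip_pattern(module_name, patterns):
    return any(re.search(p, module_name) is not None for p in patterns)

skip_patterns = ["norm", "pos_embed"]  # hypothetical pattern list
for name in ["transformer.norm_out", "blocks.0.attn", "pos_embed"]:
    print(name, matches_skip_pattern(name, skip_patterns))
```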
Lucain
a647682224
Remove cache migration script ( #10619 )
2025-01-21 07:22:59 -10:00
YiYi Xu
a1f9a71238
fix offload gpu tests etc ( #10366 )
...
* add
* style
2025-01-21 18:52:36 +05:30
Fanli Lin
ec37e20972
[tests] make tests device-agnostic (part 3) ( #10437 )
...
* initial commit
* fix empty cache
* fix one more
* fix style
* update device functions
* update
* update
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update tests/pipelines/controlnet/test_controlnet.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update tests/pipelines/controlnet/test_controlnet.py
Co-authored-by: hlky <hlky@hlky.ac >
* with gc.collect
* update
* make style
* check_torch_dependencies
* add mps empty cache
* bug fix
* Apply suggestions from code review
---------
Co-authored-by: hlky <hlky@hlky.ac >
2025-01-21 12:15:45 +00:00
Muyang Li
158a5a87fb
Remove the FP32 Wrapper when evaluating ( #10617 )
...
Remove the FP32 Wrapper
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com >
2025-01-21 16:16:54 +05:30
jiqing-feng
012d08b1bc
Enable dreambooth lora finetune example on other devices ( #10602 )
...
* enable dreambooth_lora on other devices
Signed-off-by: jiqing-feng <jiqing.feng@intel.com >
* enable xpu
Signed-off-by: jiqing-feng <jiqing.feng@intel.com >
* check cuda device before empty cache
Signed-off-by: jiqing-feng <jiqing.feng@intel.com >
* fix comment
Signed-off-by: jiqing-feng <jiqing.feng@intel.com >
* import free_memory
Signed-off-by: jiqing-feng <jiqing.feng@intel.com >
---------
Signed-off-by: jiqing-feng <jiqing.feng@intel.com >
2025-01-21 14:09:45 +05:30
Sayak Paul
4ace7d0483
[chore] change licensing to 2025 from 2024. ( #10615 )
...
change licensing to 2025 from 2024.
2025-01-20 16:57:27 -10:00
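The licensing chore above is the kind of change typically done with a small header-rewrite script. A minimal stdlib sketch of bumping copyright years (hypothetical helper, not the actual tooling used):

```python
import re

# Sketch: bump "Copyright 2024" in a file header to 2025.
def bump_license_year(text, old="2024", new="2025"):
    return re.sub(rf"(Copyright\s+){old}", rf"\g<1>{new}", text)

header = "# Copyright 2024 The HuggingFace Team. All rights reserved."
print(bump_license_year(header))
```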
baymax591
75a636da48
bugfix for npu not support float64 ( #10123 )
...
* bugfix for npu not support float64
* is_mps is_npu
---------
Co-authored-by: 白超 <baichao19@huawei.com >
Co-authored-by: hlky <hlky@hlky.ac >
2025-01-20 09:35:24 -10:00
sunxunle
4842f5d8de
chore: remove redundant words ( #10609 )
...
Signed-off-by: sunxunle <sunxunle@ampere.tech >
2025-01-20 08:15:26 -10:00
Sayak Paul
328e0d20a7
[training] set rest of the blocks with requires_grad False. ( #10607 )
...
set rest of the blocks with requires_grad False.
2025-01-19 19:34:53 +05:30
Shenghai Yuan
23b467c79c
[core] ConsisID ( #10140 )
...
* Update __init__.py
* add consisid
* update consisid
* update consisid
* make style
* make_style
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* add doc
* make style
* Rename consisid .md to consisid.md
* Update geodiff_molecule_conformation.ipynb
* Update geodiff_molecule_conformation.ipynb
* Update geodiff_molecule_conformation.ipynb
* Update demo.ipynb
* Update pipeline_consisid.py
* make fix-copies
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update doc & pipeline code
* fix typo
* make style
* update example
* Update docs/source/en/using-diffusers/consisid.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update example
* update example
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* Update src/diffusers/pipelines/consisid/pipeline_consisid.py
Co-authored-by: hlky <hlky@hlky.ac >
* update
* add test and update
* remove some changes from docs
* refactor
* fix
* undo changes to examples
* remove save/load and fuse methods
* update
* link hf-doc-img & make test extremely small
* update
* add lora
* fix test
* update
* update
* change expected_diff_max to 0.4
* fix typo
* fix link
* fix typo
* update docs
* update
* remove consisid lora tests
---------
Co-authored-by: hlky <hlky@hlky.ac >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: Aryan <aryan@huggingface.co >
2025-01-19 13:10:08 +05:30
Juan Acevedo
aeac0a00f8
implementing flux on TPUs with ptxla ( #10515 )
...
* implementing flux on TPUs with ptxla
* add xla flux attention class
* run make style/quality
* Update src/diffusers/models/attention_processor.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/attention_processor.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* run style and quality
---------
Co-authored-by: Juan Acevedo <jfacevedo@google.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-01-16 08:46:02 -10:00
Leo Jiang
cecada5280
NPU adaptation for RMSNorm ( #10534 )
...
* NPU adaptation for RMSNorm
* NPU adaptation for RMSNorm
---------
Co-authored-by: J石页 <jiangshuo9@h-partners.com >
2025-01-16 08:45:29 -10:00
C
17d99c4d22
[Docs] Add documentation about using ParaAttention to optimize FLUX and HunyuanVideo ( #10544 )
...
* add para_attn_flux.md and para_attn_hunyuan_video.md
* add enable_sequential_cpu_offload in para_attn_hunyuan_video.md
* add comment
* refactor
* fix
* fix
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* fix
* update links
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* fix
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/para_attn.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2025-01-16 10:05:13 -08:00
hlky
08e62fe0c2
Scheduling fixes on MPS ( #10549 )
...
* use np.int32 in scheduling
* test_add_noise_device
* -np.int32, fixes
2025-01-16 07:45:03 -10:00
Daniel Regado
9e1b8a0017
[Docs] Update SD3 ip_adapter model_id to diffusers checkpoint ( #10597 )
...
Update to diffusers ip_adapter ckpt
2025-01-16 07:43:29 -10:00
hlky
0b065c099a
Move buffers to device ( #10523 )
...
* Move buffers to device
* add test
* named_buffers
2025-01-16 07:42:56 -10:00
Junyu Chen
b785ddb654
[DC-AE, SANA] fix SanaMultiscaleLinearAttention apply_quadratic_attention bf16 ( #10595 )
...
* autoencoder_dc tiling
* add tiling and slicing support in SANA pipelines
* create variables for padding length because the line becomes too long
* add tiling and slicing support in pag SANA pipelines
* revert changes to tile size
* make style
* add vae tiling test
* fix SanaMultiscaleLinearAttention apply_quadratic_attention bf16
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2025-01-16 16:49:02 +05:30
Daniel Regado
e8114bd068
IP-Adapter for StableDiffusion3Img2ImgPipeline ( #10589 )
...
Added support for IP-Adapter
2025-01-16 09:46:22 +00:00
Leo Jiang
b0c8973834
[Sana 4K] Add vae tiling option to avoid OOM ( #10583 )
...
Co-authored-by: J石页 <jiangshuo9@h-partners.com >
2025-01-16 02:06:07 +05:30
Sayak Paul
c944f0651f
[Chore] fix vae annotation in mochi pipeline ( #10585 )
...
fix vae annotation in mochi pipeline
2025-01-15 15:19:51 +05:30
Sayak Paul
bba59fb88b
[Tests] add: test to check 8bit bnb quantized models work with lora loading. ( #10576 )
...
* add: test to check 8bit bnb quantized models work with lora loading.
* Update tests/quantization/bnb/test_mixed_int8.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2025-01-15 13:05:26 +05:30
Sayak Paul
2432f80ca3
[LoRA] feat: support loading loras into 4bit quantized Flux models. ( #10578 )
...
* feat: support loading loras into 4bit quantized models.
* updates
* update
* remove weight check.
2025-01-15 12:40:40 +05:30
Aryan
f9e957f011
Fix offload tests for CogVideoX and CogView3 ( #10547 )
...
* update
* update
2025-01-15 12:24:46 +05:30
Daniel Regado
4dec63c18e
IP-Adapter for StableDiffusion3InpaintPipeline ( #10581 )
...
* Added support for IP-Adapter
* Added joint_attention_kwargs property
2025-01-15 06:52:23 +00:00
Junsong Chen
3d70777379
[Sana-4K] ( #10537 )
...
* [Sana 4K]
add 4K support for Sana
* [Sana-4K] fix SanaPAGPipeline
* add VAE automatically tiling function;
* set clean_caption to False;
* add warnings for VAE OOM.
* style
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com >
2025-01-14 11:48:56 -10:00
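The Sana-4K entry above adds automatic VAE tiling to avoid OOM. The core of any such tiling is computing overlapping tile offsets so the decoder processes one tile at a time; a stdlib sketch of that coordinate computation (illustrative sizes, not Sana's actual code or defaults):

```python
# Sketch: split a large spatial dimension into overlapping tiles so a VAE
# can encode/decode a 4K image tile by tile instead of all at once.
def tile_starts(size, tile_size, overlap):
    stride = tile_size - overlap
    starts = list(range(0, max(size - tile_size, 0) + 1, stride))
    # Make sure the final tile reaches the end of the dimension.
    if starts[-1] + tile_size < size:
        starts.append(size - tile_size)
    return starts

print(tile_starts(2160, 512, 64))  # -> [0, 448, 896, 1344, 1648]
```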
Teriks
6b727842d7
allow passing hf_token to load_textual_inversion ( #10546 )
...
Co-authored-by: Teriks <Teriks@users.noreply.github.com >
2025-01-14 11:48:34 -10:00
Dhruv Nair
be62c85cd9
[CI] Update HF Token on Fast GPU Model Tests ( #10570 )
...
update
2025-01-14 17:00:32 +05:30
Marc Sun
fbff43acc9
[FEAT] DDUF format ( #10037 )
...
* load and save dduf archive
* style
* switch to zip uncompressed
* updates
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* first draft
* remove print
* switch to dduf_file for consistency
* switch to huggingface hub api
* fix log
* add a basic test
* Update src/diffusers/configuration_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* fix
* fix variant
* change saving logic
* DDUF - Load transformers components manually (#10171 )
* update hfh version
* Load transformers components manually
* load encoder from_pretrained with state_dict
* working version with transformers and tokenizer !
* add generation_config case
* fix tests
* remove saving for now
* typing
* need next version from transformers
* Update src/diffusers/configuration_utils.py
Co-authored-by: Lucain <lucain@huggingface.co >
* check path correctly
* Apply suggestions from code review
Co-authored-by: Lucain <lucain@huggingface.co >
* update
* typing
* remove check for subfolder
* quality
* revert setup changes
* oops
* more readable condition
* add loading from the hub test
* add basic docs.
* Apply suggestions from code review
Co-authored-by: Lucain <lucain@huggingface.co >
* add example
* add
* make functions private
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* minor.
* fixes
* fix
* change the precedence of parameterized.
* error out when custom pipeline is passed with dduf_file.
* updates
* fix
* updates
* fixes
* updates
* fix xfail condition.
* fix xfail
* fixes
* sharded checkpoint compat
* add test for sharded checkpoint
* add suggestions
* Update src/diffusers/models/model_loading_utils.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* from suggestions
* add class attributes to flag dduf tests
* last one
* fix logic
* remove comment
* revert changes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Lucain <lucain@huggingface.co >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-01-14 13:21:42 +05:30
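The DDUF entry above notes a "switch to zip uncompressed": DDUF packages pipeline components in a single ZIP archive with stored (uncompressed) entries, so files can be read back without decompression overhead. A minimal stdlib sketch of that storage choice (file names here are illustrative; this is not the huggingface_hub implementation):

```python
import io
import zipfile

# Sketch: write entries with ZIP_STORED (no compression), mirroring DDUF's
# uncompressed-zip layout, then verify every entry is stored uncompressed.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as zf:
    zf.writestr("model_index.json", '{"_class_name": "ExamplePipeline"}')
    zf.writestr("transformer/config.json", "{}")

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
    all_stored = all(info.compress_type == zipfile.ZIP_STORED for info in zf.infolist())

print(names, all_stored)
```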
Dhruv Nair
3279751bf9
[CI] Update HF Token in Fast GPU Tests ( #10568 )
...
update
2025-01-14 13:04:26 +05:30
hlky
4a4afd5ece
Fix batch > 1 in HunyuanVideo ( #10548 )
2025-01-14 10:25:06 +05:30
Aryan
aa79d7da46
Test sequential cpu offload for torchao quantization ( #10506 )
...
test sequential cpu offload
2025-01-14 09:54:06 +05:30
Sayak Paul
74b67524b5
[Docs] Update hunyuan_video.md to rectify the checkpoint id ( #10524 )
...
* Update hunyuan_video.md to rectify the checkpoint id
* bfloat16
* more fixes
* don't update the checkpoint ids.
* update
* t -> T
* Apply suggestions from code review
* fix
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2025-01-13 10:59:13 -10:00
Vinh H. Pham
794f7e49a9
Implement framewise encoding/decoding in LTX Video VAE ( #10488 )
...
* add framewise decode
* add framewise encode, refactor tiled encode/decode
* add sanity test tiling for ltx
* run make style
* Update src/diffusers/models/autoencoders/autoencoder_kl_ltx.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com >
---------
Co-authored-by: Pham Hong Vinh <vinhph3@vng.com.vn >
Co-authored-by: Aryan <contact.aryanvs@gmail.com >
2025-01-13 10:58:32 -10:00
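Framewise encoding/decoding, as in the LTX Video VAE entry above, boils down to processing a long frame sequence a few frames at a time and concatenating the results, rather than holding everything in memory at once. A hedged stdlib sketch of that chunking pattern (illustrative only; `process_framewise` is a hypothetical helper, not the VAE code):

```python
# Sketch: apply fn to fixed-size chunks of a frame sequence and
# concatenate the per-chunk outputs.
def process_framewise(frames, chunk_size, fn):
    out = []
    for i in range(0, len(frames), chunk_size):
        out.extend(fn(frames[i : i + chunk_size]))
    return out

frames = list(range(10))
doubled = process_framewise(frames, chunk_size=4, fn=lambda chunk: [2 * f for f in chunk])
print(doubled)  # -> [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```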
Daniel Regado
9fc9c6dd71
Added IP-Adapter for StableDiffusion3ControlNetInpaintingPipeline ( #10561 )
...
* Added support for IP-Adapter
* Fixed Copied inconsistency
2025-01-13 10:15:36 -10:00
Omar Awile
df355ea2c6
Fix documentation for FluxPipeline ( #10563 )
...
Fix argument name in 8bit quantized example
Found a small mistake in the documentation: the text encoder model was passed to the wrong argument of FluxPipeline.from_pretrained.
2025-01-13 11:56:32 -08:00