dg845
|
d8e4805816
|
[WIP] Add Wan2.2 Animate Pipeline (Continuation of #12442 by tolgacangoz) (#12526)
---------
Co-authored-by: Tolga Cangöz <mtcangoz@gmail.com>
Co-authored-by: Tolga Cangöz <46008593+tolgacangoz@users.noreply.github.com>
|
2025-11-12 16:52:31 -10:00 |
|
Yao Matrix
|
0e12ba7454
|
fix 3 xpu ut failures w/ latest pytorch (#12408)
fix xpu ut failures w/ latest pytorch
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
|
2025-09-30 14:07:48 +05:30 |
|
Dhruv Nair
|
7aa6af1138
|
[Refactor] Move testing utils out of src (#12238)
* update
* merge main
* Revert "merge main"
This reverts commit 65efbcead5.
|
2025-08-28 19:53:02 +05:30 |
|
Yao Matrix
|
8701e8644b
|
make test_gguf all pass on xpu (#12158)
Signed-off-by: Yao, Matrix <matrix.yao@intel.com>
|
2025-08-16 01:00:31 +05:30 |
|
Sayak Paul
|
f20aba3e87
|
[GGUF] feat: support loading diffusers format gguf checkpoints. (#11684)
* feat: support loading diffusers format gguf checkpoints.
* update
* update
* qwen
---------
Co-authored-by: DN6 <dhruv.nair@gmail.com>
|
2025-08-08 22:27:15 +05:30 |
|
jiqing-feng
|
1082c46afa
|
fix input shape for WanGGUFTexttoVideoSingleFileTests (#12081)
Signed-off-by: jiqing-feng <jiqing.feng@intel.com>
|
2025-08-06 14:12:40 +05:30 |
|
Isotr0py
|
ba2ba9019f
|
Add cuda kernel support for GGUF inference (#11869)
* add gguf kernel support
Signed-off-by: Isotr0py <2037008807@qq.com>
* fix
Signed-off-by: Isotr0py <2037008807@qq.com>
* optimize
Signed-off-by: Isotr0py <2037008807@qq.com>
* update
---------
Signed-off-by: Isotr0py <2037008807@qq.com>
Co-authored-by: DN6 <dhruv.nair@gmail.com>
|
2025-08-05 21:36:48 +05:30 |
|
Sayak Paul
|
7a935a0bbe
|
[tests] Unify compilation + offloading tests in quantization (#11910)
* unify the quant compile + offloading tests.
* fix
* update
|
2025-07-11 17:02:29 +05:30 |
|
Sayak Paul
|
754fe85cac
|
[tests] add compile + offload tests for GGUF. (#11740)
* add compile + offload tests for GGUF.
* quality
* add init.
* prop.
* change to flux.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
|
2025-07-09 13:42:13 +05:30 |
|
Ju Hoon Park
|
0e95aa853e
|
[From Single File] support from_single_file method for WanVACE3DTransformer (#11807)
* add `WanVACETransformer3DModel` in `SINGLE_FILE_LOADABLE_CLASSES`
* add rename keys for `VACE`
* fix typo
Sincere thanks to @nitinmukesh 🙇‍♂️
* support for `1.3B VACE` model
Sincere thanks to @nitinmukesh again 🙇‍♂️
* update
* update
* Apply style fixes
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
|
2025-07-02 05:55:36 +02:00 |
|
kaixuanliu
|
dd285099eb
|
adjust to get CI test cases passing on XPU (#11759)
* adjust to get CI test cases passing on XPU
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
* fix format issue
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
* Apply style fixes
---------
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Aryan <aryan@huggingface.co>
|
2025-06-25 14:02:17 +05:30 |
|
Dhruv Nair
|
4267d8f4eb
|
[Single File] GGUF/Single File Support for HiDream (#11550)
* update
|
2025-05-15 12:25:18 +05:30 |
|
Yao Matrix
|
7567adfc45
|
enable 28 GGUF test cases on XPU (#11404)
* enable gguf test cases on XPU
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* make SD35LargeGGUFSingleFileTests::test_pipeline_inference pass
Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>
* make FluxControlLoRAGGUFTests::test_lora_loading pass
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
* polish code
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
* Apply style fixes
---------
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
Signed-off-by: root <root@a4bf01945cfe.jf.intel.com>
Signed-off-by: Yao Matrix <matrix.yao@intel.com>
Co-authored-by: root <root@a4bf01945cfe.jf.intel.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
|
2025-04-28 21:32:04 +05:30 |
|
hlky
|
5d49b3e83b
|
Flux quantized with lora (#10990)
* Flux quantized with lora
* fix
* changes
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Apply style fixes
* enable model cpu offload()
* Update src/diffusers/loaders/lora_pipeline.py
Co-authored-by: hlky <hlky@hlky.ac>
* update
* Apply suggestions from code review
* update
* add peft as an additional dependency for gguf
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
|
2025-04-08 21:17:03 +05:30 |
|
Dhruv Nair
|
7dc52ea769
|
[Quantization] dtype fix for GGUF + fix BnB tests (#11159)
* update
|
2025-03-26 22:22:16 +05:30 |
|
AstraliteHeart
|
cb342b745a
|
Add AuraFlow GGUF support (#10463)
* Add support for loading AuraFlow models from GGUF
https://huggingface.co/city96/AuraFlow-v0.3-gguf
* Update AuraFlow documentation for GGUF, add GGUF tests and model detection.
* Address code review comments.
* Remove unused config.
---------
Co-authored-by: hlky <hlky@hlky.ac>
|
2025-01-08 13:23:12 +05:30 |
|
Dhruv Nair
|
e24941b2a7
|
[Single File] Add GGUF support (#9964)
* update
* Update src/diffusers/quantizers/gguf/utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* update
* Update docs/source/en/quantization/gguf.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
|
2024-12-17 16:09:37 +05:30 |
|