mirror of https://github.com/huggingface/diffusers.git synced 2026-01-27 17:22:53 +03:00
Commit Graph

4 Commits

Author SHA1 Message Date
Álvaro Somoza
edcbe8038b Fix huggingface-hub failing tests (#11994)
* login

* more logins

* uploads

* missed login

* another missed login

* downloads

* examples and more logins

* fix

* setup

* Apply style fixes

* fix

* Apply style fixes
2025-07-29 02:34:58 -04:00
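
The commit above replaces implicit Hub authentication with explicit logins around test uploads and downloads. Below is a minimal sketch of that pattern, not the actual test code from #11994; it assumes a token is supplied via the HF_TOKEN environment variable, and "my-user/my-repo" is a placeholder.

```python
import os

from huggingface_hub import hf_hub_download, login, upload_file

# Authenticate explicitly instead of relying on a cached implicit login.
login(token=os.environ["HF_TOKEN"])

# Download a file from a Hub repo (cached locally on repeat calls).
config_path = hf_hub_download(
    repo_id="black-forest-labs/FLUX.1-dev", filename="model_index.json"
)

# Upload an artifact to a repo the token has write access to.
upload_file(
    path_or_fileobj=config_path,
    path_in_repo="model_index.json",
    repo_id="my-user/my-repo",
)
```
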
Sayak Paul
b94cfd7937 [Training] QoL improvements in the Flux Control training scripts (#10461)
* qol improvements to the Flux script.

* propagate the dataloader changes.
2025-01-07 11:56:17 +05:30
Junjie
96a9097445 Add offload option in flux-control training (#10225)
* Add offload option in flux-control training

* Update examples/flux-control/train_control_flux.py

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* modify help message

* fix format

---------

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-15 20:49:17 +05:30
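
The offload option added here follows the common pattern of keeping frozen modules (such as the VAE) on CPU and moving them to GPU only while they are needed. The sketch below is illustrative rather than the script's exact code; the helper name and arguments are assumptions.

```python
import torch

def encode_with_offload(vae, pixel_values, offload: bool, device="cuda"):
    """Encode images to latents, optionally offloading the frozen VAE to CPU."""
    if offload:
        vae.to(device)  # bring the frozen VAE onto the GPU just for encoding
    with torch.no_grad():
        latents = vae.encode(pixel_values.to(device)).latent_dist.sample()
    if offload:
        vae.to("cpu")  # release VRAM for the trainable transformer
    return latents
```
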
Sayak Paul
8170dc368d [WIP][Training] Flux Control LoRA training script (#10130)
* update

* add

* update

* add control-lora conversion script; make flux loader handle norms; fix rank calculation assumption

* control lora updates

* remove copied-from

* create separate pipelines for flux control

* make fix-copies

* update docs

* add tests

* fix

* Apply suggestions from code review

Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>

* remove control lora changes

* apply suggestions from review

* Revert "remove control lora changes"

This reverts commit 73cfc519c9.

* update

* update

* improve log messages

* updates.

* updates

* support register_config.

* fix

* fix

* fix

* updates

* updates

* updates

* fix-copies

* fix

* apply suggestions from review

* add tests

* remove conversion script; enable on-the-fly conversion

* bias -> lora_bias.

* fix-copies

* peft.py

* fix lora conversion

* changes

Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>

* fix-copies

* updates for tests

* fix

* alpha_pattern.

* add a test for varied lora ranks and alphas.

* revert changes in num_channels_latents = self.transformer.config.in_channels // 8

* revert moe

* add a sanity check on unexpected keys when loading norm layers.

* control lora.

* fixes

* fixes

* fixes

* tests

* reviewer feedback

* fix

* proper peft version for lora_bias

* fix-copies

* updates

* updates

* updates

* remove debug code

* update docs

* integration tests

* nits

* fuse and unload.

* fix

* add slices.

* more updates.

* button up readme

* train()

* add full fine-tuning version.

* fixes

* Apply suggestions from code review

Co-authored-by: Aryan <aryan@huggingface.co>

* set_grads_to_none remove.

* readme

---------

Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
2024-12-12 15:34:57 +05:30
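
Several commits in this PR concern per-module LoRA ranks and alphas (alpha_pattern, "varied lora ranks and alphas") and the bias -> lora_bias rename, which needs a sufficiently recent peft release. A minimal sketch of such a configuration follows; the target module names are placeholders, and `transformer` is assumed to be the Flux transformer being tuned.

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    # Override the rank/alpha for individual modules by name.
    rank_pattern={"single_transformer_blocks.0.attn.to_q": 8},
    alpha_pattern={"single_transformer_blocks.0.attn.to_q": 4},
    # Train a bias on the LoRA B matrix (requires peft >= 0.14.0).
    lora_bias=True,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)

# Attach the adapter to the model being fine-tuned.
transformer.add_adapter(lora_config)
```
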