Sayak Paul
7a2b78bf0f
post release v0.35.0 ( #12184 )
...
* post release v0.35.0
* quality
2025-08-19 22:10:08 +05:30
Sayak Paul
c9c8217306
[chore] complete the licensing statement. ( #12001 )
...
complete the licensing statement.
2025-08-11 22:15:15 +05:30
Álvaro Somoza
edcbe8038b
Fix huggingface-hub failing tests ( #11994 )
...
* login
* more logins
* uploads
* missed login
* another missed login
* downloads
* examples and more logins
* fix
* setup
* Apply style fixes
* fix
* Apply style fixes
2025-07-29 02:34:58 -04:00
Luo Yihang
5ef74fd5f6
fix norm not training in train_control_lora_flux.py ( #11832 )
2025-07-01 17:37:54 -10:00
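A minimal sketch of the kind of fix this commit title suggests, assuming the script freezes the whole transformer before attaching LoRA and therefore has to re-enable gradients for the norm parameters that are meant to train; the name-based matching on "norm" is an assumption, not a verified copy of train_control_lora_flux.py:

```python
# Hypothetical sketch: after freezing the transformer for LoRA training, the norm
# parameters that should keep training must be un-frozen explicitly, otherwise
# they silently stay at requires_grad=False.
flux_transformer.requires_grad_(False)
for name, param in flux_transformer.named_parameters():
    if "norm" in name and "lora" not in name:
        param.requires_grad_(True)
```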
Sayak Paul
10c36e0b78
[chore] post release v0.34.0 ( #11800 )
...
* post release v0.34.0
* code quality
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2025-06-26 06:56:46 +05:30
Markus Pobitzer
745199a869
[examples] flux-control: use num_training_steps_for_scheduler ( #11662 )
...
[examples] flux-control: use num_training_steps_for_scheduler in get_scheduler instead of args.max_train_steps * accelerator.num_processes
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-06-05 14:56:25 +05:30
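A hedged sketch of the change this commit title describes: the total step budget is pre-computed once (covering the case where --max_train_steps is unset) and passed to get_scheduler, instead of recomputing args.max_train_steps * accelerator.num_processes inline. Variable names follow the commit message and the usual diffusers training-script pattern; args, optimizer, train_dataloader, and accelerator are assumed to exist in the surrounding script:

```python
import math

from diffusers.optimization import get_scheduler

# Pre-compute the scheduler's total step budget once.
if args.max_train_steps is None:
    # Derive it from the (sharded) dataloader length when no explicit cap is given.
    steps_per_epoch = math.ceil(
        len(train_dataloader) / (accelerator.num_processes * args.gradient_accumulation_steps)
    )
    num_training_steps_for_scheduler = args.num_train_epochs * steps_per_epoch * accelerator.num_processes
else:
    num_training_steps_for_scheduler = args.max_train_steps * accelerator.num_processes

lr_scheduler = get_scheduler(
    args.lr_scheduler,
    optimizer=optimizer,
    num_warmup_steps=args.lr_warmup_steps * accelerator.num_processes,
    num_training_steps=num_training_steps_for_scheduler,
)
```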
co63oc
86294d3c7f
Fix typos in docs and comments ( #11416 )
...
* Fix typos in docs and comments
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-30 20:30:53 -10:00
Sayak Paul
4b868f14c1
post release 0.33.0 ( #11255 )
...
* post release
* update
* fix deprecations
* remaining
* update
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2025-04-15 06:50:08 -10:00
Dhruv Nair
edc154da09
Update Ruff to latest Version ( #10919 )
...
* update
* update
* update
* update
2025-04-09 16:51:34 +05:30
Sayak Paul
4ace7d0483
[chore] change licensing to 2025 from 2024. ( #10615 )
...
change licensing to 2025 from 2024.
2025-01-20 16:57:27 -10:00
Sayak Paul
328e0d20a7
[training] set rest of the blocks with requires_grad False. ( #10607 )
...
set rest of the blocks with requires_grad False.
2025-01-19 19:34:53 +05:30
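A rough sketch of what "set rest of the blocks with requires_grad False" plausibly amounts to: when only the transformer blocks (plus the widened input projection) are supposed to train, everything else is frozen first. The only_target_transformer_blocks flag and attribute names are assumptions based on the flux-control scripts, not a verified copy:

```python
# Hypothetical sketch: freeze the whole model, then re-enable gradients only for the
# pieces meant to train, so the rest of the blocks stay at requires_grad=False.
if args.only_target_transformer_blocks:
    flux_transformer.requires_grad_(False)
    flux_transformer.x_embedder.requires_grad_(True)
    for block in flux_transformer.transformer_blocks:
        block.requires_grad_(True)
```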
Sayak Paul
b94cfd7937
[Training] QoL improvements in the Flux Control training scripts ( #10461 )
...
* qol improvements to the Flux script.
* propagate the dataloader changes.
2025-01-07 11:56:17 +05:30
Dev Rajput
4b9f1c7d8c
Add correct number of channels when resuming from checkpoint for Flux Control LoRA training ( #10422 )
...
* Add correct number of channels when resuming from checkpoint
* Fix Formatting
2025-01-02 15:51:44 +05:30
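A hedged illustration of the channel bookkeeping behind this fix. Flux Control concatenates control latents with the image latents, so the transformer's input projection is widened to twice its original in_channels; when resuming from a checkpoint, the model has to be built with that widened projection before the saved weights are loaded, or the shapes will not line up. Names mirror the flux-control scripts but this is a sketch, not a verified copy:

```python
import torch

# Widen the Flux input projection from in_channels to 2 * in_channels so control
# latents can be concatenated with image latents.
initial_input_channels = flux_transformer.config.in_channels  # 64 for the base Flux transformer

with torch.no_grad():
    new_linear = torch.nn.Linear(
        initial_input_channels * 2,
        flux_transformer.x_embedder.out_features,
        bias=flux_transformer.x_embedder.bias is not None,
        dtype=flux_transformer.dtype,
        device=flux_transformer.device,
    )
    new_linear.weight.zero_()  # columns for the control latents start at zero
    new_linear.weight[:, :initial_input_channels].copy_(flux_transformer.x_embedder.weight)
    if flux_transformer.x_embedder.bias is not None:
        new_linear.bias.copy_(flux_transformer.x_embedder.bias)
    flux_transformer.x_embedder = new_linear
```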
Sayak Paul
825979ddc3
[training] fix: registration of out_channels in the control flux scripts. ( #10367 )
...
* fix: registration of out_channels in the control flux scripts.
* free memory.
2024-12-24 21:44:44 +05:30
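A short, hedged sketch of what "registration of out_channels" likely refers to: after the input projection is widened (see the sketch above), the config should record the doubled in_channels while keeping out_channels at the original value, since downstream code reads these fields to size the latents. register_to_config is the standard diffusers ConfigMixin mechanism; the exact call site is an assumption:

```python
# Record the widened input width in the config, but keep the output width at the
# original value; otherwise code that infers latent channels from the config
# (e.g. in_channels // 8 style computations) gets the wrong number.
flux_transformer.register_to_config(
    in_channels=initial_input_channels * 2,
    out_channels=initial_input_channels,
)
```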
Sayak Paul
92933ec36a
[chore] post release 0.32.0 ( #10361 )
...
* post release 0.32.0
* style
2024-12-23 10:03:34 -10:00
Junjie
96a9097445
Add offload option in flux-control training ( #10225 )
...
* Add offload option in flux-control training
* Update examples/flux-control/train_control_flux.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* modify help message
* fix format
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-15 20:49:17 +05:30
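A minimal, hedged sketch of what an offload option in a training loop like this typically does: once the VAE has encoded the image and control latents for a batch, it is moved back to CPU so the GPU only holds the transformer during the forward/backward pass (and is moved back to GPU before the next batch is encoded). The --offload flag name comes from the commit title; the batch keys and surrounding variables are assumed, not a verified copy of train_control_flux.py:

```python
# Encode the batch on GPU, then optionally push the VAE back to CPU to free memory.
pixel_latents = vae.encode(
    batch["pixel_values"].to(vae.device, dtype=vae.dtype)
).latent_dist.sample()
control_latents = vae.encode(
    batch["conditioning_pixel_values"].to(vae.device, dtype=vae.dtype)
).latent_dist.sample()

if args.offload:
    vae.to("cpu")
```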
Sayak Paul
8170dc368d
[WIP][Training] Flux Control LoRA training script ( #10130 )
...
* update
* add
* update
* add control-lora conversion script; make flux loader handle norms; fix rank calculation assumption
* control lora updates
* remove copied-from
* create separate pipelines for flux control
* make fix-copies
* update docs
* add tests
* fix
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* remove control lora changes
* apply suggestions from review
* Revert "remove control lora changes"
This reverts commit 73cfc519c9.
* update
* update
* improve log messages
* updates.
* updates
* support register_config.
* fix
* fix
* fix
* updates
* updates
* updates
* fix-copies
* fix
* apply suggestions from review
* add tests
* remove conversion script; enable on-the-fly conversion
* bias -> lora_bias.
* fix-copies
* peft.py
* fix lora conversion
* changes
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
* fix-copies
* updates for tests
* fix
* alpha_pattern.
* add a test for varied lora ranks and alphas.
* revert changes in num_channels_latents = self.transformer.config.in_channels // 8
* revert moe
* add a sanity check on unexpected keys when loading norm layers.
* control lora.
* fixes
* fixes
* fixes
* tests
* reviewer feedback
* fix
* proper peft version for lora_bias
* fix-copies
* updates
* updates
* updates
* remove debug code
* update docs
* integration tests
* nits
* fuse and unload.
* fix
* add slices.
* more updates.
* button up readme
* train()
* add full fine-tuning version.
* fixes
* Apply suggestions from code review
Co-authored-by: Aryan <aryan@huggingface.co>
* set_grads_to_none remove.
* readme
---------
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: a-r-r-o-w <contact.aryanvs@gmail.com>
2024-12-12 15:34:57 +05:30
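For context on the adapter setup this training script introduces, here is a hedged sketch of attaching a LoRA to the Flux transformer with the options the commit messages mention: lora_bias (hence the "proper peft version" note), and the per-module rank/alpha patterns exercised by the "varied lora ranks and alphas" test. Target modules and argument defaults are assumptions, not a verified copy of train_control_lora_flux.py:

```python
from peft import LoraConfig

# LoRA over the attention projections of the Flux transformer. `lora_bias` adds a
# trainable bias term to the LoRA B matrix; `rank_pattern` / `alpha_pattern` on
# LoraConfig let individual modules deviate from the global rank and alpha, which
# is what the varied-rank test covers.
lora_config = LoraConfig(
    r=args.rank,
    lora_alpha=args.rank,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
    lora_bias=args.use_lora_bias,
)
flux_transformer.add_adapter(lora_config)
```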