Álvaro Somoza
edcbe8038b
Fix huggingface-hub failing tests (#11994)
* login
* more logins
* uploads
* missed login
* another missed login
* downloads
* examples and more logins
* fix
* setup
* Apply style fixes
* fix
* Apply style fixes
2025-07-29 02:34:58 -04:00
Sayak Paul
01240fecb0
[training] add Kontext i2i training (#11858)
* feat: enable i2i fine-tuning in Kontext script.
* readme
* more checks.
* Apply suggestions from code review
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
* fixes
* fix
* add proj_mlp to the mix
* Update README_flux.md
add note on installing from commit `05e7a854d0a5661f5b433f6dd5954c224b104f0b`
* fix
* fix
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-07-08 21:04:16 +05:30
Sayak Paul
00f95b9755
Kontext training (#11813)
* support flux kontext
* make fix-copies
* add example
* add tests
* update docs
* update
* add note on integrity checker
* initial commit
* initial commit
* add readme section and fixes in the training script.
* add test
* rectify ckpt_id
* fix ckpt
* fixes
* change id
* update
* Update examples/dreambooth/train_dreambooth_lora_flux_kontext.py
Co-authored-by: Aryan <aryan@huggingface.co>
* Update examples/dreambooth/README_flux.md
---------
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: linoytsaban <linoy@huggingface.co>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-06-26 19:31:42 +03:00
Linoy Tsaban
1bc6f3dc0f
[LoRA training] update metadata use for lora alpha + README (#11723)
* lora alpha
* Apply style fixes
* Update examples/advanced_diffusion_training/README_flux.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix readme format
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-06-17 12:19:27 +03:00
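The lora alpha metadata above matters because the LoRA update applied to a weight is scaled by alpha / rank, so persisting lora_alpha alongside the weights lets a loader reproduce the trained scale. A minimal sketch of that relationship; the `lora_scaling` helper and the metadata layout are illustrative assumptions, not the training script's exact format:

```python
# The LoRA delta applied to a weight is (alpha / rank) * (B @ A), so saved
# metadata must carry lora_alpha for loading to reproduce the same scale.
# The metadata dict below is a hypothetical layout, not the script's format.

def lora_scaling(lora_alpha, rank):
    """Effective multiplier applied to the low-rank update."""
    return lora_alpha / rank

metadata = {"r": 16, "lora_alpha": 32}  # hypothetical saved metadata
scale = lora_scaling(metadata["lora_alpha"], metadata["r"])
print(scale)  # 2.0
```

With alpha equal to rank the multiplier is 1.0, which is why many configs default to alpha == r.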
Linoy Tsaban
cc59505e26
[training docs] smol update to README files (#11616)
add comment to install prodigy
2025-05-27 06:26:54 -07:00
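The "install prodigy" comment refers to the third-party `prodigyopt` package (`pip install prodigyopt`); Prodigy adapts its own step size, so it is typically run with lr=1.0. A hedged sketch of the optimizer selection; `pick_optimizer` and the AdamW fallback values are assumptions for illustration, not the script's exact interface:

```python
# Sketch of choosing the optimizer the README comment concerns. Prodigy
# lives in the third-party `prodigyopt` package and is usually run with
# lr=1.0 since it estimates the learning rate itself. Names/values below
# are illustrative, not the training script's real flag handling.

def pick_optimizer(name):
    """Return (optimizer import path, suggested lr) for a CLI choice."""
    if name.lower() == "prodigy":
        return ("prodigyopt.Prodigy", 1.0)   # requires: pip install prodigyopt
    return ("torch.optim.AdamW", 1e-4)       # conventional fallback

print(pick_optimizer("prodigy"))  # ('prodigyopt.Prodigy', 1.0)
```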
Quentin Gallouédec
c8bb1ff53e
Use HF Papers (#11567)
* Use HF Papers
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-19 06:22:33 -10:00
co63oc
86294d3c7f
Fix typos in docs and comments (#11416)
* Fix typos in docs and comments
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-30 20:30:53 -10:00
SkyCol
074e12358b
Add prompt about wandb in examples/dreambooth/readme. (#10014)
Add files via upload
2024-11-25 18:42:06 +05:30
Linoy Tsaban
743a5697f2
[flux dreambooth lora training] make LoRA target modules configurable + small bug fix (#9646)
* make lora target modules configurable and change the default
* style
* make lora target modules configurable and change the default
* fix bug when using prodigy and training te
* fix mixed precision training as proposed in https://github.com/huggingface/diffusers/pull/9565 for full dreambooth as well
* add test and notes
* style
* address Sayak's comments
* style
* fix test
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-10-28 17:27:41 +02:00
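Making the LoRA target modules configurable can be pictured as parsing a comma-separated CLI flag into the module list handed to the LoRA adapter. The helper, flag shape, and default set below are assumptions sketched for illustration, not the script's exact interface:

```python
# Sketch of turning a comma-separated --lora_layers style flag into the
# list of module names to adapt. The default set mimics common attention
# projection names; both it and the helper are illustrative assumptions.

DEFAULT_TARGETS = ["to_k", "to_q", "to_v", "to_out.0"]

def parse_target_modules(arg=None):
    """Split a comma-separated flag value, else fall back to the defaults."""
    if not arg:
        return list(DEFAULT_TARGETS)
    return [m.strip() for m in arg.split(",") if m.strip()]

print(parse_target_modules("to_k, to_v"))  # ['to_k', 'to_v']
```

The parsed list would then be passed as `target_modules` when building the LoRA config.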
Linoy Tsaban
37e3603c4a
[Flux Dreambooth lora] add latent caching (#9160)
* add ostris trainer to README & add cache latents of vae
* add ostris trainer to README & add cache latents of vae
* style
* readme
* add test for latent caching
* add ostris noise scheduler
9ee1ef2a0a/toolkit/samplers/custom_flowmatch_sampler.py (L95)
* style
* fix import
* style
* fix tests
* style
* --change upcasting of transformer?
* update readme according to main
* keep only latent caching
* add configurable param for final saving of trained layers: --upcast_before_saving
* style
* Update examples/dreambooth/README_flux.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/README_flux.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* use clear_objs_and_retain_memory from utilities
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-09-15 15:30:31 +03:00
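The latent-caching change above follows a simple pattern: encode each training image through the VAE once before the training loop, keep the latents, and reuse them every epoch instead of re-encoding (which also lets the VAE be freed from memory). A toy sketch with a stand-in encoder; the real script calls the VAE's encode method:

```python
# Minimal sketch of latent caching: run the expensive encoder exactly once
# per image and store the results for reuse across epochs. `vae_encode` is
# a stand-in; real code would call vae.encode(...).latent_dist.sample().

def cache_latents(images, vae_encode):
    """Encode every image once and return the cached latents."""
    return [vae_encode(img) for img in images]

# Toy stand-in encoder so the sketch is self-contained.
fake_encode = lambda img: [p * 0.5 for p in img]
cache = cache_latents([[1.0, 2.0], [3.0, 4.0]], fake_encode)
print(cache)  # [[0.5, 1.0], [1.5, 2.0]]
```

After caching, the training loop indexes into `cache` instead of touching the encoder again.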
Linoy Tsaban
55ac421f7b
improve README for flux dreambooth lora (#9290)
* improve readme
* improve readme
* improve readme
* improve readme
2024-09-05 17:53:23 +05:30
Tolga Cangöz
7ef8a46523
[Docs] Fix CPU offloading usage (#9207)
* chore: Fix cpu offloading usage
* Trim trailing white space
* docs: update Kolors model link in kolors.md
2024-08-18 13:12:12 -10:00
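The CPU-offloading docs fix encodes one rule: in diffusers, `enable_model_cpu_offload()` manages device placement itself, so the pipeline must not also be moved with `.to("cuda")`. The `DummyPipe` below only records which call was made so the rule can be sketched without a GPU; it is not the diffusers API:

```python
# Sketch of the offloading rule: pick exactly one placement strategy.
# enable_model_cpu_offload() moves submodules on demand, so combining it
# with .to("cuda") defeats the offloading. DummyPipe is a test double.

class DummyPipe:
    def __init__(self):
        self.calls = []
    def enable_model_cpu_offload(self):
        self.calls.append("offload")
    def to(self, device):
        self.calls.append(f"to:{device}")

def configure_device(pipe, offload):
    if offload:
        pipe.enable_model_cpu_offload()  # manages GPU<->CPU moves itself
    else:
        pipe.to("cuda")                  # manual placement, no offloading
    return pipe

pipe = configure_device(DummyPipe(), offload=True)
print(pipe.calls)  # ['offload']
```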
Linoy Tsaban
413ca29b71
[Flux Dreambooth LoRA] - te bug fixes & updates (#9139)
* add requirements + fix link to bghira's guide
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* text encoder training fixes
* style
* add tests
* fix encode_prompt call
* style
* unpack_latents test
* fix lora saving
* remove default val for max_sequence_length in encode_prompt
* remove default val for max_sequence_length in encode_prompt
* style
* testing
* style
* testing
* testing
* style
* fix sizing issue
* style
* revert scaling
* style
* style
* scaling test
* style
* scaling test
* remove model pred operation left from pre-conditioning
* remove model pred operation left from pre-conditioning
* fix trainable params
* remove te2 from casting
* transformer to accelerator
* remove prints
* empty commit
2024-08-12 11:58:03 +05:30
Linoy Tsaban
65e30907b5
[Flux] Dreambooth LoRA training scripts (#9086)
* initial commit - dreambooth for flux
* update transformer to be FluxTransformer2DModel
* update training loop and validation inference
* fix sd3->flux docs
* add guidance handling, not sure if it makes sense(?)
* initial dreambooth lora commit
* fix text_ids in compute_text_embeddings
* fix imports of static methods
* fix pipeline loading in readme, remove auto1111 docs for now
* fix pipeline loading in readme, remove auto1111 docs for now, remove some irrelevant text_encoder_3 refs
* Update examples/dreambooth/train_dreambooth_flux.py
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
* fix te2 loading and remove te2 refs from text encoder training
* fix tokenizer_2 initialization
* remove text_encoder training refs from lora script (for now)
* try with vae in bfloat16, fix model hook save
* fix tokenization
* fix static imports
* fix CLIP import
* remove text_encoder training refs (for now) from lora script
* fix minor bug in encode_prompt, add guidance def in lora script, ...
* fix unpack_latents args
* fix license in readme
* add "none" to weighting_scheme options for uniform sampling
* style
* adapt model saving - remove text encoder refs
* adapt model loading - remove text encoder refs
* initial commit for readme
* Update examples/dreambooth/train_dreambooth_lora_flux.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/train_dreambooth_lora_flux.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix vae casting
* remove precondition_outputs
* readme
* readme
* style
* readme
* readme
* update weighting scheme default & docs
* style
* add text_encoder training to lora script, change vae_scale_factor value in both
* style
* text encoder training fixes
* style
* update readme
* minor fixes
* fix te params
* fix te params
---------
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-08-09 07:31:04 +05:30
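The `"none"` option added to `weighting_scheme` above means timesteps are sampled uniformly rather than from a weighted density such as logit-normal. A simplified sketch; the real script uses `compute_density_for_timestep_sampling` from diffusers, and the stand-in below is an assumption:

```python
# Sketch of the "none" vs "logit_normal" timestep-sampling choice: "none"
# draws u uniformly in [0, 1); logit-normal pushes mass toward the middle
# of the schedule via sigmoid(N(0, 1)). Simplified stand-in, not the real
# diffusers helper.
import math
import random

def sample_u(weighting_scheme, rng):
    """Draw u in [0, 1) used to pick a point on the noise schedule."""
    if weighting_scheme == "none":
        return rng.random()                                    # uniform
    if weighting_scheme == "logit_normal":
        return 1.0 / (1.0 + math.exp(-rng.gauss(0.0, 1.0)))   # sigmoid(N(0,1))
    raise ValueError(f"unknown weighting scheme: {weighting_scheme}")

rng = random.Random(0)
print(sample_u("none", rng))
```

Either way the draw stays inside [0, 1); only the distribution over timesteps changes.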