Sayak Paul
4ace7d0483
[chore] change licensing to 2025 from 2024. (#10615)
change licensing to 2025 from 2024.
2025-01-20 16:57:27 -10:00
Sayak Paul
d87fe95f90
[Chore] add LoraLoaderMixin to the inits (#8981)
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
* loraloadermixin.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2024-07-26 08:59:33 +05:30
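The reusability idea running through the LoraBaseMixin commits above (shared `fuse_lora`/`unfuse_lora` scope, a `num_fused_loras` counter, model-specific mixins such as `SD3TransformerLoRALoadersMixin` inheriting from a common base) can be sketched in plain Python. This is an illustrative sketch only; the class bodies and method behavior here are hypothetical stand-ins, not the actual diffusers API:

```python
class LoraBaseMixin:
    """Illustrative shared base: common LoRA bookkeeping lives here once,
    instead of being copy-pasted into every loader class."""

    num_fused_loras = 0  # class-level default; instances shadow it

    def fuse_lora(self):
        # Subclasses would fold LoRA weights into their own layers,
        # then rely on this shared counter bookkeeping.
        self.num_fused_loras = getattr(self, "num_fused_loras", 0) + 1

    def unfuse_lora(self):
        if self.num_fused_loras == 0:
            raise RuntimeError("No fused LoRA weights to unfuse.")
        self.num_fused_loras -= 1


class SD3TransformerLoRALoadersMixin(LoraBaseMixin):
    # Model-specific loader reusing the shared base, mirroring the
    # rename mentioned in the commit body above.
    pass


m = SD3TransformerLoRALoadersMixin()
m.fuse_lora()
```

After `fuse_lora()`, `m.num_fused_loras` is 1, and `unfuse_lora()` walks it back down, raising if nothing is fused.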
YiYi Xu
62863bb1ea
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976)
Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)"
This reverts commit 527430d0a4.
2024-07-25 09:10:35 -10:00
Sayak Paul
527430d0a4
[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2024-07-25 21:40:58 +05:30
Sayak Paul
984d340534
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8773)
Revert "[LoRA] introduce `LoraBaseMixin` to promote reusability. (#8670)"
This reverts commit a2071a1837.
2024-07-03 07:05:01 +05:30
Sayak Paul
a2071a1837
[LoRA] introduce LoraBaseMixin to promote reusability. (#8670)
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
2024-07-03 07:04:37 +05:30
Tolga Cangöz
468ae09ed8
Errata - Trim trailing white space in the whole repo (#8575)
* Trim all the trailing white space in the whole repo
* Remove unnecessary empty places
* make style && make quality
* Trim trailing white space
* trim
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-24 18:39:15 +05:30
Bagheera
8e963d1c2a
7529 do not disable autocast for cuda devices (#7530)
* 7529 do not disable autocast for cuda devices
* Remove typecasting error check for non-mps platforms, as a correct autocast implementation makes it a non-issue
* add autocast fix to other training examples
* disable native_amp for dreambooth (sdxl)
* disable native_amp for pix2pix (sdxl)
* remove tests from remaining files
* disable native_amp on huggingface accelerator for every training example that uses it
* convert more usages of autocast to nullcontext, make style fixes
* make style fixes
* style.
* Empty-Commit
---------
Co-authored-by: bghira <bghira@users.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-04-02 20:15:06 +05:30
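The "convert more usages of autocast to nullcontext" change in the commit above follows a common pattern: rather than entering `autocast(..., enabled=False)` on CUDA (the problem #7529 describes), callers pick a real autocast context when AMP is on and a no-op `contextlib.nullcontext` otherwise. A minimal stdlib-only sketch, with `maybe_autocast` and `autocast_factory` as illustrative names (in the training examples the factory would be `torch.autocast` or `accelerator.autocast`, not shown here):

```python
from contextlib import nullcontext


def maybe_autocast(autocast_factory, enabled: bool):
    # When mixed precision is off, return a no-op context manager
    # instead of a disabled autocast context on the CUDA device.
    return autocast_factory() if enabled else nullcontext()


# Usage, with nullcontext standing in for the real autocast factory:
with maybe_autocast(nullcontext, enabled=False):
    result = 2 + 2
```

The call sites stay uniform (`with maybe_autocast(...):`) whether or not AMP is enabled, which is what lets the training examples drop their per-device special cases.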
Sayak Paul
30e5e81d58
change to 2024 in the license (#6902)
change to 2024
2024-02-08 08:19:31 -10:00
Will Berman
0af12f1f8a
amused update links to new repo (#6344)
* amused update links to new repo
* lint
2023-12-26 22:46:28 +01:00
Will Berman
4039815276
open muse (#5437)
amused
rename
Update docs/source/en/api/pipelines/amused.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
AdaLayerNormContinuous default values
custom micro conditioning
micro conditioning docs
put lookup from codebook in constructor
fix conversion script
remove manual fused flash attn kernel
add training script
temp remove training script
add dummy gradient checkpointing func
clarify temperatures is an instance variable by setting it
remove additional SkipFF block args
hardcode norm args
rename tests folder
fix paths and samples
fix tests
add training script
training readme
lora saving and loading
non-lora saving/loading
some readme fixes
guards
Update docs/source/en/api/pipelines/amused.md
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Update examples/amused/README.md
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Update examples/amused/train_amused.py
Co-authored-by: Suraj Patil <surajp815@gmail.com>
vae upcasting
add fp16 integration tests
use tuple for micro cond
copyrights
remove casts
delegate to torch.nn.LayerNorm
move temperature to pipeline call
upsampling/downsampling changes
2023-12-21 11:40:55 -08:00