Aryan
0d1d267b12
[core] Allegro T2V (#9736)
* update
* refactor transformer part 1
* refactor part 2
* refactor part 3
* make style
* refactor part 4; modeling tests
* make style
* refactor part 5
* refactor part 6
* gradient checkpointing
* pipeline tests (broken atm)
* update
* add coauthor
Co-Authored-By: Huan Yang <hyang@fastmail.com>
* refactor part 7
* add docs
* make style
* add coauthor
Co-Authored-By: YiYi Xu <yixu310@gmail.com>
* make fix-copies
* undo unrelated change
* revert changes to embeddings, normalization, transformer
* refactor part 8
* make style
* refactor part 9
* make style
* fix
* apply suggestions from review
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* update example
* remove attention mask for self-attention
* update
* copied from
* update
* update
---------
Co-authored-by: Huan Yang <hyang@fastmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-10-29 13:14:36 +05:30
YiYi Xu
e2d037bbf1
minor doc/test update (#9734)
* update some docs and tests!
---------
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: apolinário <joaopaulo.passos@gmail.com>
2024-10-21 13:06:13 -10:00
Yuxuan.Zhang
8d81564b27
CogView3Plus DiT (#9570)
* merge 9588
* max_shard_size="5GB" for colab running
* conversion script updates; modeling test; refactor transformer
* make fix-copies
* Update convert_cogview3_to_diffusers.py
* initial pipeline draft
* make style
* fight bugs 🐛 🪳
* add example
* add tests; refactor
* make style
* make fix-copies
* add co-author
YiYi Xu <yixu310@gmail.com>
* remove files
* add docs
* add co-author
Co-Authored-By: YiYi Xu <yixu310@gmail.com>
* fight docs
* address reviews
* make style
* make model work
* remove qkv fusion
* remove qkv fusion tests
* address review comments
* fix make fix-copies error
* remove None and TODO
* for FP16 (draft)
* make style
* remove dynamic cfg
* remove pooled_projection_dim as a parameter
* fix tests
---------
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-10-14 19:30:36 +05:30
Dhruv Nair
f6f16a0c11
[CI] More Fast GPU Test Fixes (#9346)
* update
* update
* update
* update
2024-09-03 13:22:38 +05:30
Dhruv Nair
007ad0e2aa
[CI] More fixes for Fast GPU Tests on main (#9300)
update
2024-09-02 17:51:48 +05:30
YiYi Xu
c291617518
Flux followup (#9074)
* refactor rotary embeds
* adding jsmidt as co-author of this PR for https://github.com/huggingface/diffusers/pull/9133
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Joseph Smidt <josephsmidt@gmail.com>
2024-08-21 08:44:58 -10:00
Dhruv Nair
940b8e0358
[CI] Multiple Slow Test fixes. (#9198)
* update
* update
* update
* update
2024-08-19 13:31:09 +05:30
M Saqlain
ba4348d9a7
[Tests] Improve transformers model test suite coverage - Lumina (#8987)
* Added test suite for lumina
* Fixed failing tests
* Improved code quality
* Added function docstrings
* Improved formatting
2024-08-19 08:29:03 +05:30
Sayak Paul
f848febacd
feat: allow sharding for auraflow. (#8853)
2024-08-18 08:47:26 +05:30
Sayak Paul
39b87b14b5
feat: allow flux transformer to be sharded during inference (#9159)
* feat: support sharding for flux.
* tests
2024-08-16 10:00:51 +05:30
Aryan
a85b34e7fd
[refactor] CogVideoX followups + tiled decoding support (#9150)
* refactor context parallel cache; update torch compile time benchmark
* add tiling support
* make style
* remove num_frames % 8 == 0 requirement
* update default num_frames to original value
* add explanations + refactor
* update torch compile example
* update docs
* update
* clean up if-statements
* address review comments
* add test for vae tiling
* update docs
* update docs
* update docstrings
* add modeling test for cogvideox transformer
* make style
2024-08-14 03:53:21 +05:30
Vinh H. Pham
87e50a2f1d
[Tests] Improve transformers model test suite coverage - Hunyuan DiT (#8916)
* add hunyuan model test
* apply suggestions
* reduce dims further
* reduce dims further
* run make style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-08-06 12:59:30 +05:30
Vinh H. Pham
e1d508ae92
[Tests] Improve transformers model test suite coverage - Latte (#8919)
* add LatteTransformer3DModel model test
* change patch_size to 1
* reduce req len
* reduce channel dims
* increase num_layers
* reduce dims further
* run make style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
2024-08-05 17:13:03 +05:30
Sayak Paul
0e460675e2
[Flux] allow tests to run (#9050)
* fix tests
* fix
* float64 skip
* remove sample_size.
* remove
* remove more
* default_sample_size.
* credit black forest for flux model.
* skip
* fix: tests
* remove OriginalModelMixin
* add transformer model test
* add: transformer model tests
2024-08-02 11:49:59 +05:30
Vinh H. Pham
7a95f8d9d8
[Tests] Improve transformers model test suite coverage - Temporal Transformer (#8932)
* add test for temporal transformer
* remove unused variable
* fix code quality
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-07-23 15:36:30 +05:30
Sayak Paul
2261510bbc
[Core] Add AuraFlow (#8796)
* add lavender flow transformer
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-07-11 08:50:19 -10:00
Tolga Cangöz
57084dacc5
Remove unnecessary lines (#8569)
* Remove unused line
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-07-08 10:42:02 -10:00
Dhruv Nair
04717fd861
Add Stable Diffusion 3 (#8483)
* up
* add sd3
* update
* update
* add tests
* fix copies
* fix docs
* update
* add dreambooth lora
* add LoRA
* update
* update
* update
* update
* import fix
* update
* Update src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* import fix 2
* update
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* update
* update
* update
* fix ckpt id
* fix more ids
* update
* missing doc
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* update
* fix
* update
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
* note on gated access.
* requirements
* licensing
---------
Co-authored-by: sayakpaul <spsayakpaul@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-06-12 20:44:00 +01:00
Sayak Paul
983dec3bf7
[Core] Introduce class variants for Transformer2DModel (#7647)
* init for patches
* finish patched model.
* continuous transformer
* vectorized transformer2d.
* style.
* inits.
* fix-copies.
* introduce DiTTransformer2DModel.
* fixes
* use REMAPPING as suggested by @DN6
* better logging.
* add pixart transformer model.
* inits.
* caption_channels.
* attention masking.
* fix use_additional_conditions.
* remove print.
* debug
* flatten
* fix: assertion for sigma
* handle remapping for modeling_utils
* add tests for dit transformer2d
* quality
* placeholder for pixart tests
* pixart tests
* add _no_split_modules
* add docs.
* check
* check
* check
* check
* fix tests
* fix tests
* move Transformer output to modeling_output
* move errors better and bring back use_additional_conditions attribute.
* add unnecessary things from DiT.
* clean up pixart
* fix remapping
* fix device_map things in pixart2d.
* replace Transformer2DModel with appropriate classes in dit, pixart tests
* empty
* legacy mixin classes.
* use a remapping dict for fetching class names.
* change to specific model types in the pipeline implementations.
* move _fetch_remapped_cls_from_config to modeling_loading_utils.py
* fix dependency problems.
* add deprecation note.
2024-05-31 13:40:27 +05:30
Sayak Paul
30e5e81d58
change to 2024 in the license (#6902)
change to 2024
2024-02-08 08:19:31 -10:00
Sayak Paul
09b7bfce91
[Core] move transformer scripts to transformers modules (#6747)
* move transformer scripts to transformers modules
* move transformer model test
* move prior transformer test to directory
* fix doc path
* correct doc path
* add: __init__.py
2024-01-29 22:28:28 +05:30