Dhruv Nair | 7aa6af1138 | [Refactor] Move testing utils out of src (#12238)
* update
* update
* update
* update
* update
* merge main
* Revert "merge main"
This reverts commit 65efbcead5.
2025-08-28 19:53:02 +05:30

Sayak Paul | a8e47978c6 | [lora] adapt new LoRA config injection method (#11999)
* use state dict when setting up LoRA.
* up
* up
* up
* comment
* up
* up
2025-08-08 09:22:48 +05:30

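The change above moves diffusers to a state-dict-driven LoRA injection path. For orientation, a minimal, hedged sketch of the user-facing call that exercises that path; the base model is a public SDXL checkpoint, while the LoRA repo id and adapter name are placeholders rather than anything taken from the PR.

```python
import torch
from diffusers import DiffusionPipeline

# Public SDXL base checkpoint; the LoRA repo id below is a placeholder.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# load_lora_weights() reads the adapter's state dict and injects LoRA layers
# into the matching modules of the UNet and text encoders.
pipe.load_lora_weights("some-user/some-sdxl-lora", adapter_name="style")
pipe.set_adapters(["style"], adapter_weights=[0.8])

image = pipe("an astronaut riding a horse", num_inference_steps=30).images[0]
```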
Sayak Paul | 87f83d3dd9 | [tests] add test for hotswapping + compilation on resolution changes (#11825)
* add resolution changes tests to hotswapping test suite.
* fixes
* docs
* explain duck shapes
* fix
2025-07-01 09:40:34 +05:30

Sayak Paul | a185e1ab91 | [tests] add a test on torch compile for varied resolutions (#11776)
* add test for checking compile on different shapes.
* update
* update
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-06-26 10:07:03 +05:30

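Both compile-related test commits above revolve around the same behavior: a compiled diffusers model should cope with a change of spatial resolution between calls. A rough sketch of that check, using a deliberately tiny UNet config (the sizes are illustrative, not the ones used in the actual test suite):

```python
import torch
from diffusers import UNet2DConditionModel

# Tiny config so the example runs quickly on CPU; the real tests use their own dummy configs.
unet = UNet2DConditionModel(
    block_out_channels=(32, 64),
    layers_per_block=1,
    sample_size=32,
    in_channels=4,
    out_channels=4,
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
    cross_attention_dim=32,
).eval()

compiled = torch.compile(unet)

with torch.no_grad():
    # Two different latent resolutions; torch.compile's automatic dynamic shapes
    # ("duck" sizing) should absorb the change instead of erroring out.
    for height, width in [(32, 32), (64, 64)]:
        sample = torch.randn(1, 4, height, width)
        timestep = torch.tensor([10])
        encoder_hidden_states = torch.randn(1, 8, 32)
        _ = compiled(sample, timestep, encoder_hidden_states)
```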
Sayak Paul | 62cce3045d | [chore] change to 2025 licensing for remaining (#11741)
change to 2025 licensing for remaining
2025-06-18 20:56:00 +05:30

Edna | 8adc6003ba | Chroma Pipeline (#11698)
* working state from hameerabbasi and iddl
* working state from hameerabbasi and iddl (transformer)
* working state (normalization)
* working state (embeddings)
* add chroma loader
* add chroma to mappings
* add chroma to transformer init
* take out variant stuff
* get decently far in changing variant stuff
* add chroma init
* make chroma output class
* add chroma transformer to dummy tp
* add chroma to init
* add chroma to init
* fix single file
* update
* update
* add chroma to auto pipeline
* add chroma to pipeline init
* change to chroma transformer
* take out variant from blocks
* swap embedder location
* remove prompt_2
* work on swapping text encoders
* remove mask function
* dont modify mask (for now)
* wrap attn mask
* no attn mask (can't get it to work)
* remove pooled prompt embeds
* change to my own unpooled embedder
* fix load
* take pooled projections out of transformer
* ensure correct dtype for chroma embeddings
* update
* use dn6 attn mask + fix true_cfg_scale
* use chroma pipeline output
* use DN6 embeddings
* remove guidance
* remove guidance embed (pipeline)
* remove guidance from embeddings
* don't return length
* dont change dtype
* remove unused stuff, fix up docs
* add chroma autodoc
* add .md (oops)
* initial chroma docs
* undo don't change dtype
* undo arxiv change
unsure why that happened
* fix hf papers regression in more places
* Update docs/source/en/api/pipelines/chroma.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* do_cfg -> self.do_classifier_free_guidance
* Update docs/source/en/api/models/chroma_transformer.md
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Update chroma.md
* Move chroma layers into transformer
* Remove pruned AdaLayerNorms
* Add chroma fast tests
* (untested) batch cond and uncond
* Add # Copied from for shift
* Update # Copied from statements
* update norm imports
* Revert cond + uncond batching
* Add transformer tests
* move chroma test (oops)
* chroma init
* fix chroma pipeline fast tests
* Update src/diffusers/models/transformers/transformer_chroma.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Move Approximator and Embeddings
* Fix auto pipeline + make style, quality
* make style
* Apply style fixes
* switch to new input ids
* fix # Copied from error
* remove # Copied from on protected members
* try to fix import
* fix import
* make fix-copies
* revert style fix
* update chroma transformer params
* update chroma transformer approximator init params
* update to pad tokens
* fix batch inference
* Make more pipeline tests work
* Make most transformer tests work
* fix docs
* make style, make quality
* skip batch tests
* fix test skipping
* fix test skipping again
* fix for tests
* Fix all pipeline test
* update
* push local changes, fix docs
* add encoder test, remove pooled dim
* default proj dim
* fix tests
* fix equal size list input
* update
* push local changes, fix docs
* add encoder test, remove pooled dim
* default proj dim
* fix tests
* fix equal size list input
* Revert "fix equal size list input"
This reverts commit 3fe4ad67d5.
* update
* update
* update
* update
* update
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-14 06:52:56 +05:30

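With the PR above, Chroma gets its own pipeline and transformer classes. A hedged usage sketch follows; the repo id is a placeholder and the call arguments follow the usual text-to-image conventions rather than being copied from the PR.

```python
import torch
from diffusers import ChromaPipeline

# Placeholder repo id; substitute the actual Chroma checkpoint you want to load.
pipe = ChromaPipeline.from_pretrained("<chroma-checkpoint>", torch_dtype=torch.bfloat16)
pipe.to("cuda")

image = pipe(
    prompt="a red fox standing in a snowy birch forest",
    negative_prompt="low quality, blurry",  # guidance embed was removed, so CFG comes from the negative prompt
    num_inference_steps=28,
    guidance_scale=4.0,
).images[0]
image.save("chroma_sample.png")
```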
Sayak Paul | 7c6e9ef425 | [tests] Fix how compiler mixin classes are used (#11680)
* fix how compiler tester mixins are used.
* propagate
* more
2025-06-09 09:24:45 +05:30

Sayak Paul | 7acf8345f6 | [Tests] Enable more general testing for torch.compile() with LoRA hotswapping (#11322)
* refactor hotswap tester.
* fix seeds..
* add to nightly ci.
* move comment.
* move to nightly
2025-05-09 11:29:06 +05:30

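The hotswapping tests above exercise a workflow in which LoRA adapters are exchanged inside an already-compiled model without triggering recompilation. A rough sketch of that workflow as exposed to users; the base checkpoint is a public SDXL model, the LoRA repo ids are placeholders, and `target_rank` has to cover the largest adapter you plan to load.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Pre-allocate LoRA slots large enough for every adapter that will be swapped in later.
pipe.enable_lora_hotswap(target_rank=64)
pipe.load_lora_weights("some-user/lora-a")  # placeholder repo id

# Compile once; subsequent hotswaps should reuse this compiled graph.
pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead")
image_a = pipe("an astronaut riding a horse").images[0]

# Swap the weights of the existing adapter in place instead of adding a new one.
pipe.load_lora_weights("some-user/lora-b", hotswap=True, adapter_name="default_0")
image_b = pipe("an astronaut riding a horse").images[0]
```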
Sayak Paul | aa5f5d41d6 | [tests] add tests to check for graph breaks, recompilation, cuda syncs in pipelines during torch.compile() (#11085)
* test for better torch.compile stuff.
* fixes
* recompilation and graph break.
* clear compilation cache.
* change to modeling level test.
* allow running compilation tests during nightlies.
2025-04-28 08:36:33 +08:00

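The test additions above check three compile-hygiene properties: no graph breaks, no unexpected recompilation, and no stray CUDA synchronizations. These can be surfaced with plain PyTorch switches; the toy module below is illustrative, not the actual diffusers test code.

```python
import torch

torch._dynamo.reset()
torch._dynamo.config.error_on_recompile = True  # raise if a second compilation is triggered

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.GELU(), torch.nn.Linear(16, 16))
compiled = torch.compile(model, fullgraph=True)  # fullgraph=True turns graph breaks into hard errors

x = torch.randn(2, 16)
_ = compiled(x)  # first call compiles
_ = compiled(x)  # same shapes/dtypes: must not recompile

# On GPU, accidental device synchronizations can also be escalated to errors.
if torch.cuda.is_available():
    torch.cuda.set_sync_debug_mode("error")
```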
hlky | be2070991f | Support Flux IP Adapter (#10261)
* Flux IP-Adapter
* test cfg
* make style
* temp remove copied from
* fix test
* fix test
* v2
* fix
* make style
* temp remove copied from
* Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Move encoder_hid_proj to inside FluxTransformer2DModel
* merge
* separate encode_prompt, add copied from, image_encoder offload
* make
* fix test
* fix
* Update src/diffusers/pipelines/flux/pipeline_flux.py
* test_flux_prompt_embeds change not needed
* true_cfg -> true_cfg_scale
* fix merge conflict
* test_flux_ip_adapter_inference
* add fast test
* FluxIPAdapterMixin not test mixin
* Update pipeline_flux.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2024-12-21 17:49:58 +00:00

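The PR above adds IP-Adapter support to the Flux pipeline, including the `true_cfg_scale` argument mentioned in the commit trail. A hedged usage sketch; the adapter repo, weight file, and image-encoder ids are written from memory and should be checked against the adapter's model card.

```python
import torch
from diffusers import FluxPipeline
from diffusers.utils import load_image

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")

# Flux checkpoints do not bundle an image encoder, so one is passed explicitly.
pipe.load_ip_adapter(
    "XLabs-AI/flux-ip-adapter",            # assumed adapter repo
    weight_name="ip_adapter.safetensors",  # assumed weight file
    image_encoder_pretrained_model_name_or_path="openai/clip-vit-large-patch14",
)
pipe.set_ip_adapter_scale(1.0)

reference = load_image("https://example.com/reference.png")  # placeholder image URL

image = pipe(
    prompt="a dog in the same style as the reference image",
    ip_adapter_image=reference,
    negative_prompt="low quality",
    true_cfg_scale=4.0,  # renamed from true_cfg in this PR
).images[0]
```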
Sayak Paul | 4adf6affbb | [Tests] clean up and refactor gradient checkpointing tests (#9494)
* check.
* fixes
* fixes
* updates
* fixes
* fixes
2024-10-31 18:24:19 +05:30

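The refactored tests above all verify the same contract: enabling gradient checkpointing must not break the backward pass. A small sketch of that contract with a toy UNet config (sizes are illustrative, not the test suite's own values):

```python
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel(
    block_out_channels=(32, 64),
    layers_per_block=1,
    sample_size=32,
    in_channels=4,
    out_channels=4,
    down_block_types=("DownBlock2D", "CrossAttnDownBlock2D"),
    up_block_types=("CrossAttnUpBlock2D", "UpBlock2D"),
    cross_attention_dim=32,
)
unet.enable_gradient_checkpointing()  # recompute activations during backward to save memory
unet.train()

sample = torch.randn(1, 4, 32, 32)
timestep = torch.tensor([10])
encoder_hidden_states = torch.randn(1, 8, 32)

loss = unet(sample, timestep, encoder_hidden_states).sample.mean()
loss.backward()
assert any(p.grad is not None for p in unet.parameters())
```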
Dhruv Nair | 007ad0e2aa | [CI] More fixes for Fast GPU Tests on main (#9300)
update
2024-09-02 17:51:48 +05:30

YiYi Xu | c291617518 | Flux followup (#9074)
* refactor rotary embeds
* adding jsmidt as co-author of this PR for https://github.com/huggingface/diffusers/pull/9133
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Joseph Smidt <josephsmidt@gmail.com>
2024-08-21 08:44:58 -10:00

Sayak Paul | 39b87b14b5 | feat: allow flux transformer to be sharded during inference (#9159)
* feat: support sharding for flux.
* tests
2024-08-16 10:00:51 +05:30

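The feature above lets the large Flux transformer be sharded across devices at load time. A hedged sketch of how that is typically wired up; the memory budgets are illustrative and the placement of the remaining pipeline components is deliberately left out.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Shard only the transformer, which holds most of the parameters.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    device_map="auto",                  # let accelerate spread the weights over the visible GPUs
    max_memory={0: "16GB", 1: "16GB"},  # illustrative per-device budgets
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
# Text encoders and VAE still need a placement strategy of their own,
# e.g. pipe.enable_model_cpu_offload() on a single-GPU setup.
```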
Sayak Paul | 0e460675e2 | [Flux] allow tests to run (#9050)
* fix tests
* fix
* float64 skip
* remove sample_size.
* remove
* remove more
* default_sample_size.
* credit black forest for flux model.
* skip
* fix: tests
* remove OriginalModelMixin
* add transformer model test
* add: transformer model tests
2024-08-02 11:49:59 +05:30
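The commit above brings the Flux test suite online (pipeline and transformer model tests). For context, the end-to-end usage those tests gate looks roughly like the documented FLUX.1-schnell call below; this is standard public usage, not code taken from the tests.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()  # keeps VRAM usage manageable on a single GPU

image = pipe(
    "a cat holding a sign that says hello world",
    guidance_scale=0.0,        # schnell is guidance-distilled, so CFG stays off
    num_inference_steps=4,
    max_sequence_length=256,
).images[0]
image.save("flux_schnell.png")
```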