Tolga Cangöz
c375903db5
Errata - Fix typos & improve contributing page ( #8572 )
...
* Fix typos & improve contributing page
* `make style && make quality`
* fix typos
* Fix typo
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-06-24 14:13:03 +05:30
YiYi Xu
c71c19c5e6
a few fixes for shard checkpoints ( #8656 )
...
fix
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2024-06-21 12:50:58 +05:30
Marc Sun
96399c3ec6
Fix sharding when no device_map is passed ( #8531 )
...
* Fix sharding when no device_map is passed
* style
* add tests
* align
* add docstring
* format
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-06-18 05:47:23 -10:00
Dhruv Nair
04717fd861
Add Stable Diffusion 3 ( #8483 )
...
* up
* add sd3
* update
* update
* add tests
* fix copies
* fix docs
* update
* add dreambooth lora
* add LoRA
* update
* update
* update
* update
* import fix
* update
* Update src/diffusers/pipelines/stable_diffusion_3/pipeline_stable_diffusion_3.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* import fix 2
* update
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* update
* update
* update
* fix ckpt id
* fix more ids
* update
* missing doc
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update docs/source/en/api/pipelines/stable_diffusion/stable_diffusion_3.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* update
* fix
* update
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
* Update src/diffusers/models/autoencoders/autoencoder_kl.py
* note on gated access.
* requirements
* licensing
---------
Co-authored-by: sayakpaul <spsayakpaul@gmail.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-06-12 20:44:00 +01:00
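A minimal usage sketch for the Stable Diffusion 3 pipeline added in #8483; the checkpoint id and generation parameters are assumptions based on the accompanying docs, and the weights are gated on the Hub.

```python
# Sketch only: repo id and parameters are assumptions; SD3 weights require accepting the gated license.
import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",  # assumed repo id
    torch_dtype=torch.float16,
).to("cuda")

image = pipe(
    "a photo of an astronaut riding a horse on mars",
    num_inference_steps=28,
    guidance_scale=7.0,
).images[0]
image.save("sd3_astronaut.png")
```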
Sayak Paul
7d887118b9
[Core] support saving and loading of sharded checkpoints ( #7830 )
...
* feat: support saving a model in sharded checkpoints.
* feat: make loading of sharded checkpoints work.
* add tests
* cleanse the loading logic a bit more.
* more resilience while loading from the Hub.
* parallelize shard downloads by using snapshot_download()
* default to a shard size.
* more fix
* Empty-Commit
* debug
* fix
* quality
* more debugging
* fix more
* initial comments from Benjamin
* move certain methods to loading_utils
* add test to check if the correct number of shards are present.
* add a test to check if loading of sharded checkpoints from the Hub is okay
* clarify the unit when passed as an int.
* use hf_hub for sharding.
* remove unnecessary code
* remove unnecessary function
* lucain's comments.
* fixes
* address high-level comments.
* fix test
* subfolder shenanigans.
* Update src/diffusers/utils/hub_utils.py
Co-authored-by: Lucain <lucainp@gmail.com >
* Apply suggestions from code review
Co-authored-by: Lucain <lucainp@gmail.com >
* remove _huggingface_hub_version as not needed.
* address more feedback.
* add a test for local_files_only=True
* need hf hub to be at least 0.23.2
* style
* final comment.
* clean up subfolder.
* deal with suffixes in code.
* _add_variant default.
* use weights_name_pattern
* remove add_suffix_keyword
* clean up downloading of sharded ckpts.
* don't return something special when using index.json
* fix more
* don't use bare except
* remove comments and catch the errors better
* fix a couple of things when using is_file()
* empty
---------
Co-authored-by: Lucain <lucainp@gmail.com >
2024-06-07 14:49:10 +05:30
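A sketch of the sharded save/load flow enabled by #7830, assuming `max_shard_size` follows the usual Hub convention of a size string such as "2GB"; the repo id and local path are placeholders.

```python
# Sketch, not the canonical example: repo id and local path are placeholders.
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Split the state dict into shards of roughly 2GB each; an index file is written alongside the shards.
unet.save_pretrained("sd15-unet-sharded", max_shard_size="2GB")

# from_pretrained detects the index file and loads all shards transparently.
unet = UNet2DConditionModel.from_pretrained("sd15-unet-sharded")
```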
Sayak Paul
a0542c1917
[LoRA] Remove legacy LoRA code and related adjustments ( #8316 )
...
* remove legacy code from load_attn_procs.
* finish first draft
* fix more.
* fix more
* add test
* add serialization support.
* fix-copies
* require peft backend for lora tests
* style
* fix test
* fix loading.
* empty
* address benjamin's feedback.
2024-06-05 08:15:30 +04:00
Sayak Paul
983dec3bf7
[Core] Introduce class variants for Transformer2DModel ( #7647 )
...
* init for patches
* finish patched model.
* continuous transformer
* vectorized transformer2d.
* style.
* inits.
* fix-copies.
* introduce DiTTransformer2DModel.
* fixes
* use REMAPPING as suggested by @DN6
* better logging.
* add pixart transformer model.
* inits.
* caption_channels.
* attention masking.
* fix use_additional_conditions.
* remove print.
* debug
* flatten
* fix: assertion for sigma
* handle remapping for modeling_utils
* add tests for dit transformer2d
* quality
* placeholder for pixart tests
* pixart tests
* add _no_split_modules
* add docs.
* check
* check
* check
* check
* fix tests
* fix tests
* move Transformer output to modeling_output
* move errors better and bring back use_additional_conditions attribute.
* add unnecessary things from DiT.
* clean up pixart
* fix remapping
* fix device_map things in pixart2d.
* replace Transformer2DModel with appropriate classes in dit, pixart tests
* empty
* legacy mixin classes.
* use a remapping dict for fetching class names.
* change to specific model types in the pipeline implementations.
* move _fetch_remapped_cls_from_config to modeling_loading_utils.py
* fix dependency problems.
* add deprecation note.
2024-05-31 13:40:27 +05:30
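A hedged sketch of the remapping behavior described in #7647: loading through the legacy class is expected to return one of the new specialized classes. The repo id and subfolder are assumptions.

```python
# Sketch: repo id and subfolder are assumptions; the remapping behavior is per the PR notes.
from diffusers import Transformer2DModel

model = Transformer2DModel.from_pretrained("facebook/DiT-XL-2-256", subfolder="transformer")
print(type(model).__name__)  # expected to report DiTTransformer2DModel after remapping
```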
Sayak Paul
ba1bfac20b
[Core] Refactor IPAdapterPlusImageProjection a bit ( #7994 )
...
* use IPAdapterPlusImageProjectionBlock in IPAdapterPlusImageProjection
* reposition IPAdapterPlusImageProjection
* refactor complete?
* fix heads param retrieval.
* update test dict creation method.
2024-05-29 06:30:47 +05:30
Dhruv Nair
baab065679
Remove unnecessary single file tests for SD Cascade UNet ( #7996 )
...
update
2024-05-22 12:29:59 +05:30
Isamu Isozaki
d27e996ccd
Adding VQGAN Training script ( #5483 )
...
* Init commit
* Removed einops
* Added default movq config for training
* Update explanation of prompts
* Fixed inheritance of discriminator and init_tracker
* Fixed incompatible api between muse and here
* Fixed output
* Setup init training
* Basic structure done
* Removed attention for quick tests
* Style fixes
* Fixed vae/vqgan styles
* Removed redefinition of wandb
* Fixed log_validation and tqdm
* Nothing commit
* Added commit loss to lookup_from_codebook
* Update src/diffusers/models/vq_model.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Adding preliminary README
* Fixed one typo
* Local changes
* Fixed main issues
* Merging
* Update src/diffusers/models/vq_model.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Testing+Fixed bugs in training script
* Some style fixes
* Added wandb to docs
* Fixed timm test
* get testing suite ready.
* remove return loss
* remove return_loss
* Remove diffs
* Remove diffs
* fix ruff format
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2024-05-15 08:47:12 +05:30
Dhruv Nair
cb0f3b49cb
[Refactor] Better align from_single_file logic with from_pretrained ( #7496 )
...
* refactor unet single file loading a bit.
* retrieve the unet from create_diffusers_unet_model_from_ldm
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* tests
* update
* update
* update
* Update docs/source/en/api/single_file.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update docs/source/en/api/single_file.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* Update docs/source/en/api/loaders/single_file.md
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update src/diffusers/loaders/single_file.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* Update docs/source/en/api/loaders/single_file.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update docs/source/en/api/loaders/single_file.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update docs/source/en/api/loaders/single_file.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update docs/source/en/api/loaders/single_file.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
* update
---------
Co-authored-by: sayakpaul <spsayakpaul@gmail.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-05-09 19:00:19 +05:30
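A sketch of the aligned `from_single_file` entry point after the #7496 refactor; the checkpoint URL is an example and extra keyword arguments are assumed to pass through as they do for `from_pretrained`.

```python
# Sketch: the checkpoint URL is an example; torch_dtype is assumed to pass through as in from_pretrained.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0.safetensors",
    torch_dtype=torch.float16,
)
```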
HelloWorldBeginner
58237364b1
Add Ascend NPU support for SDXL fine-tuning and fix the model saving bug when using DeepSpeed. ( #7816 )
...
* Add Ascend NPU support for SDXL fine-tuning and fix the model saving bug when using DeepSpeed.
* fix check code quality
* Decouple the NPU flash attention and make it an independent module.
* add doc and unit tests for npu flash attention.
---------
Co-authored-by: mhh001 <mahonghao1@huawei.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-05-03 08:14:34 -10:00
Sayak Paul
8909ab4b19
[Tests] fix: device map tests for models ( #7825 )
...
* fix: device module tests
* remove patch file
* Empty-Commit
2024-05-01 18:45:47 +05:30
Sayak Paul
3fd31eef51
[Core] introduce _no_split_modules to ModelMixin ( #6396 )
...
* introduce _no_split_modules.
* unnecessary spaces.
* remove unnecessary kwargs and style
* fix: accelerate imports.
* change to _determine_device_map
* add the blocks that have residual connections.
* add: CrossAttnUpBlock2D
* add: testing
* style
* line-spaces
* quality
* add disk offload test without safetensors.
* checking disk offloading percentages.
* change model split
* add: utility for checking multi-gpu requirement.
* model parallelism test
* splits.
* splits.
* splits
* splits.
* splits.
* splits.
* offload folder to test_disk_offload_with_safetensors
* add _no_split_modules
* fix-copies
2024-04-30 08:46:51 +05:30
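With `_no_split_modules` defined on a model class (#6396), `from_pretrained` can dispatch the weights across devices. The sketch below assumes model-level `device_map="auto"` is accepted, as exercised by the tests added in this PR; the repo id is a placeholder.

```python
# Sketch: assumes model-level device_map="auto" support; repo id is a placeholder.
import torch
from diffusers import UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    subfolder="unet",
    torch_dtype=torch.float16,
    device_map="auto",  # weights are split only at the _no_split_modules boundaries
)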
Sayak Paul
b833d0fc80
[Tests] mark UNetControlNetXSModelTests::test_forward_no_control to be flaky ( #7771 )
...
decorate UNetControlNetXSModelTests::test_forward_no_control with is_flaky
2024-04-25 07:29:04 +05:30
Dhruv Nair
9ef43f38d4
Fix test for consistency decoder. ( #7746 )
...
update
2024-04-24 12:28:11 +05:30
YiYi Xu
e5674015f3
adding back test_conversion_when_using_device_map ( #7704 )
...
* style
* Fix device map nits (#7705 )
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-04-18 19:21:32 -10:00
Fabio Rigano
b5c8b555d7
Move IP Adapter Face ID to core ( #7186 )
...
* Switch to peft and multi proj layers
* Move Face ID loading and inference to core
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-04-18 14:13:27 -10:00
UmerHA
fda1531d8a
Fixing implementation of ControlNet-XS ( #6772 )
...
* CheckIn - created DownSubBlocks
* Added extra channels, implemented subblock fwd
* Fixed connection sizes
* checkin
* Removed iter, next in forward
* Models for SD21 & SDXL run through
* Added back pipelines, cleared up connections
* Cleaned up connection creation
* added debug logs
* updated logs
* logs: added input loading
* Update umer_debug_logger.py
* log: Loading hint
* Update umer_debug_logger.py
* added logs
* Changed debug logging
* debug: added more logs
* Fixed num_norm_groups
* Debug: Logging all of SDXL input
* Update umer_debug_logger.py
* debug: updated logs
* checkin
* Readded tests
* Removed debug logs
* Fixed Slow Tests
* Added value checks | Updated model_cpu_offload_seq
* accelerate-offloading works ; fast tests work
* Made unet & addon explicit in controlnet
* Updated slow tests
* Added dtype/device to ControlNetXS
* Filled in test model paths
* Added image_encoder/feature_extractor to XL pipe
* Fixed fast tests
* Added comments and docstrings
* Fixed copies
* Added docs ; Updates slow tests
* Moved changes to UNetMidBlock2DCrossAttn
* tiny cleanups
* Removed stray prints
* Removed ip adapters + freeU
- Removed ip adapters + freeU as they don't make sense for ControlNet-XS
- Fixed imports of UNet components
* Fixed test_save_load_float16
* Make style, quality, fix-copies
* Changed loading/saving API for ControlNetXS
- Changed loading/saving API for ControlNetXS
- other small fixes
* Removed ControlNet-XS from research examples
* Make style, quality, fix-copies
* Small fixes
- deleted ControlNetXSModel.init_original
- added time_embedding_mix to StableDiffusionControlNetXSPipeline.from_pretrained / StableDiffusionXLControlNetXSPipeline.from_pretrained
- fixed copy hints
* checkin May 11 '23
* CheckIn Mar 12 '24
* Fixed tests for SD
* Added tests for UNetControlNetXSModel
* Fixed SDXL tests
* cleanup
* Delete Pipfile
* CheckIn Mar 20
Started replacing sub blocks by `ControlNetXSCrossAttnDownBlock2D` and `ControlNetXSCrossAttnUpBlock2D`
* check-in Mar 23
* checkin 24 Mar
* Created init for UNetCnxs and CnxsAddon
* CheckIn
* Made from_modules, from_unet and no_control work
* make style,quality,fix-copies & small changes
* Fixed freezing
* Added gradient ckpt'ing; fixed tests
* Fix slow tests(+compile) ; clear naming confusion
* Don't create UNet in init ; removed class_emb
* Incorporated review feedback
- Deleted get_base_pipeline / get_controlnet_addon for pipes
- Pipes inherit from StableDiffusionXLPipeline
- Made module dicts for cnxs-addon's down/mid/up classes
- Added support for qkv fusion and freeU
* Make style, quality, fix-copies
* Implemented review feedback
* Removed compatibility check for vae/ctrl embedding
* make style, quality, fix-copies
* Delete Pipfile
* Integrated review feedback
- Importing ControlNetConditioningEmbedding now
- get_down/mid/up_block_addon now outside class
- renamed `do_control` to `apply_control`
* Reduced size of test tensors
For this, added `norm_num_groups` as parameter everywhere
* Renamed cnxs-`Addon` to cnxs-`Adapter`
- `ControlNetXSAddon` -> `ControlNetXSAdapter`
- `ControlNetXSAddonDownBlockComponents` -> `DownBlockControlNetXSAdapter`, and similarly for mid/up
- `get_mid_block_addon` -> `get_mid_block_adapter`, and similarly for mid/up
* Fixed save_pretrained/from_pretrained bug
* Removed redundant code
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2024-04-16 21:56:20 +05:30
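A rough sketch of the reworked API after the addon-to-adapter rename in #6772; the class names come from the PR notes, while the `from_unet` arguments and repo ids are assumptions.

```python
# Rough sketch: from_unet arguments and repo ids are assumptions, not verified against the final API.
from diffusers import ControlNetXSAdapter, StableDiffusionXLControlNetXSPipeline, UNet2DConditionModel

unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", subfolder="unet"
)
controlnet = ControlNetXSAdapter.from_unet(unet, size_ratio=0.1)  # size_ratio is an assumed parameter

pipe = StableDiffusionXLControlNetXSPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet
)
```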
YiYi Xu
a341b536a8
disable test_conversion_when_using_device_map ( #7620 )
...
* disable test
* update
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2024-04-09 09:01:19 -10:00
Sayak Paul
1c60e094de
[Tests] reduce block sizes of UNet and VAE tests ( #7560 )
...
* reduce block sizes for unet1d.
* reduce blocks for unet_2d.
* reduce block size for unet_motion
* increase channels.
* correctly increase channels.
* reduce number of layers in unet2dconditionmodel tests.
* reduce block sizes for unet2dconditionmodel tests
* reduce block sizes for unet3dconditionmodel.
* fix: test_feed_forward_chunking
* fix: test_forward_with_norm_groups
* skip spatiotemporal tests on MPS.
* reduce block size in AutoencoderKL.
* reduce block sizes for vqmodel.
* further reduce block size.
* make style.
* Empty-Commit
* reduce sizes for ConsistencyDecoderVAETests
* further reduction.
* further block reductions in AutoencoderKL and AssymetricAutoencoderKL.
* massively reduce the block size in unet2dcontionmodel.
* reduce sizes for unet3d
* fix tests in unet3d.
* reduce blocks further in motion unet.
* fix: output shape
* add attention_head_dim to the test configuration.
* remove unexpected keyword arg
* up a bit.
* groups.
* up again
* fix
2024-04-05 10:08:32 +05:30
Dhruv Nair
4d39b7483d
Memory clean up on all Slow Tests ( #7514 )
...
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-29 14:23:28 +05:30
YiYi Xu
34c90dbb31
fix OOM for test_vae_tiling ( #7510 )
...
use float16 and add torch.no_grad()
2024-03-29 08:22:39 +05:30
M. Tolga Cangöz
443aa14e41
Fix Tiling in ConsistencyDecoderVAE ( #7290 )
...
* Fix typos
* Add docstring to `decode` method in `ConsistencyDecoderVAE`
* Fix tiling
* Enable tiled VAE decoding with customizable tile sample size and overlap factor
* Revert "Enable tiled VAE decoding with customizable tile sample size and overlap factor"
This reverts commit 181049675e .
* Add VAE tiling test for `ConsistencyDecoderVAE`
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-26 17:59:08 +05:30
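A sketch of the tiled decoding path targeted by #7290, assuming `ConsistencyDecoderVAE` exposes the usual `enable_tiling()` toggle and that the repo id below is correct.

```python
# Sketch: repo id and the enable_tiling() toggle are assumptions based on the usual VAE tiling convention.
import torch
from diffusers import ConsistencyDecoderVAE

vae = ConsistencyDecoderVAE.from_pretrained(
    "openai/consistency-decoder", torch_dtype=torch.float16
)
vae.enable_tiling()  # decode large latents tile by tile to keep memory bounded
```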
Sayak Paul
484c8ef399
[tests] skip dynamo tests when python is 3.12. ( #7458 )
...
skip dynamo tests when python is 3.12.
2024-03-26 08:39:48 +05:30
M. Tolga Cangöz
a51b6cc86a
[Docs] Fix typos ( #7451 )
...
* Fix typos
* Fix typos
* Fix typos
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-25 11:48:02 -07:00
M. Tolga Cangöz
e97a633b63
Update access of configuration attributes ( #7343 )
...
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-18 08:53:29 -10:00
Dhruv Nair
4974b84564
Update Cascade Tests ( #7324 )
...
* update
* update
* update
2024-03-14 20:51:22 +05:30
Dhruv Nair
41424466e3
[Tests] Fix incorrect constant in VAE scaling test. ( #7301 )
...
update
2024-03-14 10:24:01 +05:30
Dhruv Nair
ed224f94ba
Add single file support for Stable Cascade ( #7274 )
...
* update
* update
* update
* update
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-13 08:37:31 +05:30
Sayak Paul
531e719163
[LoRA] use the PyTorch classes wherever needed and start deprecation cycles ( #7204 )
...
* fix PyTorch classes and start deprecation cycles.
* remove args crafting for accommodating scale.
* remove scale check in feedforward.
* assert against nn.Linear and not CompatibleLinear.
* remove conv_cls and linear_cls.
* remove scale
* 👋 scale.
* fix: unet2dcondition
* fix attention.py
* fix: attention.py again
* fix: unet_2d_blocks.
* fix-copies.
* more fixes.
* fix: resnet.py
* more fixes
* fix i2vgenxl unet.
* deprecate scale gently.
* fix-copies
* Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* quality
* throw warning when scale is passed to the BasicTransformerBlock class.
* remove scale from signature.
* cross_attention_kwargs, very nice catch by Yiyi
* fix: logger.warn
* make deprecation message clearer.
* address final comments.
* maintain same deprecation message and also add it to activations.
* address yiyi
* fix copies
* Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* more deprecation
* fix-copies
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-03-13 07:56:19 +05:30
Dhruv Nair
ac49f97a75
Add tests to check configs when using single file loading ( #7099 )
...
* update
* update
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-02-27 15:47:23 +05:30
Aryan
bb1b76d3bf
IPAdapterTesterMixin ( #6862 )
...
* begin IPAdapterTesterMixin
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-02-22 14:25:33 -10:00
Sayak Paul
30e5e81d58
change to 2024 in the license ( #6902 )
...
change to 2024
2024-02-08 08:19:31 -10:00
Sayak Paul
ec9840a5db
[Refactor] harmonize the module structure for models in tests ( #6738 )
...
* harmonize the module structure for models in tests
* make the folders modules.
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-02-01 14:23:39 +05:30
YiYi Xu
2e8d18e699
[IP-Adapter] Support multiple IP-Adapters ( #6573 )
...
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Alvaro Somoza <somoza.alvaro@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2024-01-31 07:11:15 -10:00
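A sketch of loading several IP-Adapters at once, as enabled by #6573; the repo id, subfolder, and weight file names are assumptions based on the public `h94/IP-Adapter` layout.

```python
# Sketch: repo id, subfolder, and weight file names are assumptions.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="sdxl_models",
    weight_name=[
        "ip-adapter_sdxl_vit-h.safetensors",
        "ip-adapter-plus_sdxl_vit-h.safetensors",
    ],
)
pipe.set_ip_adapter_scale([0.6, 0.5])  # one scale per loaded adapter
```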
Sayak Paul
09b7bfce91
[Core] move transformer scripts to transformers modules ( #6747 )
...
* move transformer scripts to transformers modules
* move transformer model test
* move prior transformer test to directory
* fix doc path
* correct doc path
* add: __init__.py
2024-01-29 22:28:28 +05:30
Sayak Paul
d4c7ab7bf1
[Hub] feat: explicitly tag to diffusers when using push_to_hub ( #6678 )
...
* feat: explicitly tag to diffusers when using push_to_hub
* remove tags.
* reset repo.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* fix: tests
* fix: push_to_hub behaviour for tagging from save_pretrained
* Apply suggestions from code review
Co-authored-by: Lucain <lucainp@gmail.com >
* Apply suggestions from code review
Co-authored-by: Lucain <lucainp@gmail.com >
* import fixes.
* add library name to existing model card.
* add: standalone test for generate_model_card
* fix tests for standalone method
* moved library_name to a better place.
* merge create_model_card and generate_model_card.
* fix test
* address lucain's comments
* fix return identation
* Apply suggestions from code review
Co-authored-by: Lucain <lucainp@gmail.com >
* address further comments.
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Lucain <lucainp@gmail.com >
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Lucain <lucainp@gmail.com >
2024-01-26 23:01:48 +05:30
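After #6678, `push_to_hub` writes a model card that explicitly attributes the repo to the diffusers library; a minimal sketch, with placeholder repo ids.

```python
# Sketch: repo ids are placeholders.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# The generated model card now carries the diffusers library tag so the Hub attributes the repo correctly.
pipe.push_to_hub("your-username/my-sd-copy")
```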
Sayak Paul
1f0705adcf
[Big refactor] move unets to unets module 🦋 ( #6630 )
...
* move unets to module 🦋
* parameterize unet-level import.
* fix flax unet2dcondition model import
* models __init__
* mildly deprecating models.unet_2d_blocks in favor of models.unets.unet_2d_blocks.
* noqa
* correct deprecation behaviour
* inherit from the actual classes.
* Empty-Commit
* backwards compatibility for unet_2d.py
* backward compatibility for unet_2d_condition
* bc for unet_1d
* bc for unet_1d_blocks
2024-01-23 08:57:58 +05:30
YiYi Xu
4c483deb90
[refactor embeddings] gligen + ip-adapter ( #6244 )
...
* refactor ip-adapter-imageproj, gligen
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2023-12-27 18:48:42 -10:00
Dhruv Nair
79a7ab92d1
Fix clearing backend cache from device agnostic testing ( #6075 )
...
update
2023-12-07 11:18:31 +05:30
Arsalan
f427345ab1
Device agnostic testing ( #5612 )
...
* utils and test modifications to enable device agnostic testing
* device for manual seed in unet1d
* fix generator condition in vae test
* consistency changes to testing
* make style
* add device agnostic testing changes to source and one model test
* make dtype check fns private, log cuda fp16 case
* remove dtype checks from import utils, move to testing_utils
* adding tests for most model classes and one pipeline
* fix vae import
2023-12-05 19:04:13 +05:30
takuoko
0a08d41961
[Feature] Support IP-Adapter Plus ( #5915 )
...
* Support IP-Adapter Plus
* fix format
* restore before black format
* restore before black format
* generic
* Refactor PerceiverAttention
* format
* fix test and refactor PerceiverAttention
* generic encode_image
* keep attention implementation
* merge tests
* encode_image backward compatible
* code quality
* fix controlnet inpaint pipeline
* refactor FFN
* refactor FFN
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2023-12-04 12:43:34 +01:00
Suraj Patil
63f767ef15
Add SVD ( #5895 )
...
* begin model
* finish blocks
* add_embedding
* addition_time_embed_dim
* use TimestepEmbedding
* fix temporal res block
* fix time_pos_embed
* fix add_embedding
* add conversion script
* fix model
* up
* add new resnet blocks
* make forward work
* return sample in original shape
* fix temb shape in TemporalResnetBlock
* add spatio temporal transformers
* add vae blocks
* fix blocks
* update
* update
* fix shapes in AlphaBlender and add time activation in res block
* use new blocks
* style
* fix temb shape
* fix SpatioTemporalResBlock
* reuse TemporalBasicTransformerBlock
* fix TemporalBasicTransformerBlock
* use TransformerSpatioTemporalModel
* fix TransformerSpatioTemporalModel
* fix time_context dim
* clean up
* make temb optional
* add blocks
* rename model
* update conversion script
* remove UNetMidBlockSpatioTemporal
* add in init
* remove unused arg
* remove unused arg
* remove more unsed args
* up
* up
* check for None
* update vae
* update up/mid blocks for decoder
* begin pipeline
* adapt scheduler
* add guidance scalings
* fix norm eps in temporal transformers
* add temporal autoencoder
* make pipeline run
* fix frame decoding
* decode in float32
* decode n frames at a time
* pass decoding_t to decode_latents
* fix decode_latents
* vae encode/decode in fp32
* fix dtype in TransformerSpatioTemporalModel
* type image_latents same as image_embeddings
* allow using different eps in temporal block for video decoder
* fix default values in vae
* pass num frames in decode
* switch spatial to temporal for mixing in VAE
* fix num frames during split decoding
* cast alpha to sample dtype
* fix attention in MidBlockTemporalDecoder
* fix typo
* fix guidance_scales dtype
* fix missing activation in TemporalDecoder
* skip_post_quant_conv
* add vae conversion
* style
* take guidance scale as input
* up
* allow passing PIL to export_video
* accept fps as arg
* add pipeline and vae in init
* remove hack
* use AutoencoderKLTemporalDecoder
* don't scale image latents
* add unet tests
* clean up unet
* clean TransformerSpatioTemporalModel
* add slow svd test
* clean up
* make temb optional in Decoder mid block
* fix norm eps in TransformerSpatioTemporalModel
* clean up temp decoder
* clean up
* clean up
* use c_noise values for timesteps
* use math for log
* update
* fix copies
* doc
* upcast vae
* update forward pass for gradient checkpointing
* make added_time_ids a tensor
* up
* fix upcasting
* remove post quant conv
* add _resize_with_antialiasing
* fix _compute_padding
* cleanup model
* more cleanup
* more cleanup
* more cleanup
* remove freeu
* remove attn slice
* small clean
* up
* up
* remove extra step kwargs
* remove eta
* remove dropout
* remove callback
* remove merge factor args
* clean
* clean up
* move to dedicated folder
* remove attention_head_dim
* docstr and small fix
* update unet doc strings
* rename decoding_t
* correct linting
* store c_skip and c_out
* cleanup
* clean TemporalResnetBlock
* more cleanup
* clean up vae
* clean up
* begin doc
* more cleanup
* up
* up
* doc
* Improve
* better naming
* better naming
* better naming
* better naming
* better naming
* better naming
* better naming
* better naming
* Apply suggestions from code review
* Default chunk size to None
* add example
* Better
* Apply suggestions from code review
* update doc
* Update src/diffusers/pipelines/stable_diffusion_video/pipeline_stable_diffusion_video.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* style
* Get torch compile working
* up
* rename
* fix doc
* add chunking
* torch compile
* torch compile
* add modelling outputs
* torch compile
* Improve chunking
* Apply suggestions from code review
* Update docs/source/en/using-diffusers/svd.md
* Close diff tag
* remove slicing
* resnet docstr
* add docstr in resnet
* rename
* Apply suggestions from code review
* update tests
* Fix output type latents
* fix more
* fix more
* Update docs/source/en/using-diffusers/svd.md
* fix more
* add pipeline tests
* remove unused arg
* clean up
* make sure get_scaling receives tensors
* fix euler scheduler
* fix get_scalings
* simply euler for now
* remove old test file
* use randn_tensor to create noise
* fix device for rand tensor
* increase expected_max_difference
* fix test_inference_batch_single_identical
* actually fix test_inference_batch_single_identical
* disable test_save_load_float16
* skip test_float16_inference
* skip test_inference_batch_single_identical
* fix test_xformers_attention_forwardGenerator_pass
* Apply suggestions from code review
* update StableVideoDiffusionPipelineSlowTests
* update image
* add diffusers example
* fix more
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: apolinário <joaopaulo.passos@gmail.com >
2023-11-29 19:13:36 +01:00
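An image-to-video sketch for the pipeline added in #5895, with the checkpoint id, `decode_chunk_size`, and the `export_to_video` helper assumed from the SVD documentation; the input image path is a placeholder.

```python
# Sketch: checkpoint id and helper names are assumptions; the conditioning image path is a placeholder.
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = load_image("conditioning_frame.png").resize((1024, 576))
frames = pipe(image, decode_chunk_size=8).frames[0]  # decode a few frames at a time to limit VRAM
export_to_video(frames, "generated.mp4", fps=7)
```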
Patrick von Platen
e550163b9f
[Vae] Make sure all vae's work with latent diffusion models ( #5880 )
...
* add comments to explain the code better
* add comments to explain the code better
* add comments to explain the code better
* add comments to explain the code better
* add comments to explain the code better
* fix more
* fix more
* fix more
* fix more
* fix more
* fix more
2023-11-27 14:17:47 +01:00
YiYi Xu
ba352aea29
[feat] IP Adapters (author @okotaku ) ( #5713 )
...
* add ip-adapter
---------
Co-authored-by: okotaku <to78314910@gmail.com >
Co-authored-by: sayakpaul <spsayakpaul@gmail.com >
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2023-11-21 07:34:30 -10:00
Kashif Rasul
6b04d61cf6
[Styling] stylify using ruff ( #5841 )
...
* ruff format
* no need to use doc-builder's black styling as the doc is styled in ruff
* make fix-copies
* comment
* use run_ruff
2023-11-20 11:48:34 +01:00
Will Berman
2a84e8bb5a
fix memory consistency decoder test ( #5828 )
...
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2023-11-17 09:31:01 -08:00
Patrick von Platen
bf406ea886
Correct consist dec ( #5722 )
...
* uP
* Update src/diffusers/models/consistency_decoder_vae.py
* uP
* uP
2023-11-09 13:10:24 +01:00
Will Berman
2fd46405cd
consistency decoder ( #5694 )
...
* consistency decoder
* rename
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py
* uP
* Apply suggestions from code review
* uP
* uP
* uP
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-11-09 12:21:41 +01:00
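A sketch of using the consistency decoder added in #5694 as a drop-in VAE for Stable Diffusion; the repo ids are assumptions based on the accompanying docs.

```python
# Sketch: repo ids are assumptions.
import torch
from diffusers import ConsistencyDecoderVAE, StableDiffusionPipeline

vae = ConsistencyDecoderVAE.from_pretrained(
    "openai/consistency-decoder", torch_dtype=torch.float16
)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of a cat", generator=torch.manual_seed(0)).images[0]
```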