takuoko
0a08d41961
[Feature] Support IP-Adapter Plus ( #5915 )
...
* Support IP-Adapter Plus
* fix format
* restore before black format
* restore before black format
* generic
* Refactor PerceiverAttention
* format
* fix test and refactor PerceiverAttention
* generic encode_image
* keep attention implementation
* merge tests
* encode_image backward compatible
* code quality
* fix controlnet inpaint pipeline
* refactor FFN
* refactor FFN
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2023-12-04 12:43:34 +01:00
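A minimal sketch of how the Plus variant is loaded after this change, assuming the community `h94/IP-Adapter` checkpoint and the `ip-adapter-plus_sd15.bin` weight name; depending on the diffusers version, the CLIP image encoder may need to be passed to the pipeline explicitly, and the reference image URL is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# The "plus" checkpoints route CLIP image features through the Perceiver-style
# resampler refactored in this commit instead of a single linear projection.
pipe.load_ip_adapter(
    "h94/IP-Adapter", subfolder="models", weight_name="ip-adapter-plus_sd15.bin"
)

reference = load_image("https://example.com/reference.png")  # placeholder image URL
image = pipe(prompt="a cat, best quality", ip_adapter_image=reference).images[0]
```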
Suraj Patil
63f767ef15
Add SVD ( #5895 )
...
* begin model
* finish blocks
* add_embedding
* addition_time_embed_dim
* use TimestepEmbedding
* fix temporal res block
* fix time_pos_embed
* fix add_embedding
* add conversion script
* fix model
* up
* add new resnet blocks
* make forward work
* return sample in original shape
* fix temb shape in TemporalResnetBlock
* add spatio temporal transformers
* add vae blocks
* fix blocks
* update
* update
* fix shapes in AlphaBlender and add time activation in res block
* use new blocks
* style
* fix temb shape
* fix SpatioTemporalResBlock
* reuse TemporalBasicTransformerBlock
* fix TemporalBasicTransformerBlock
* use TransformerSpatioTemporalModel
* fix TransformerSpatioTemporalModel
* fix time_context dim
* clean up
* make temb optional
* add blocks
* rename model
* update conversion script
* remove UNetMidBlockSpatioTemporal
* add in init
* remove unused arg
* remove unused arg
* remove more unused args
* up
* up
* check for None
* update vae
* update up/mid blocks for decoder
* begin pipeline
* adapt scheduler
* add guidance scalings
* fix norm eps in temporal transformers
* add temporal autoencoder
* make pipeline run
* fix frame decoding
* decode in float32
* decode n frames at a time
* pass decoding_t to decode_latents
* fix decode_latents
* vae encode/decode in fp32
* fix dtype in TransformerSpatioTemporalModel
* type image_latents same as image_embeddings
* allow using different eps in temporal block for video decoder
* fix default values in vae
* pass num frames in decode
* switch spatial to temporal for mixing in VAE
* fix num frames during split decoding
* cast alpha to sample dtype
* fix attention in MidBlockTemporalDecoder
* fix typo
* fix guidance_scales dtype
* fix missing activation in TemporalDecoder
* skip_post_quant_conv
* add vae conversion
* style
* take guidance scale as input
* up
* allow passing PIL to export_video
* accept fps as arg
* add pipeline and vae in init
* remove hack
* use AutoencoderKLTemporalDecoder
* don't scale image latents
* add unet tests
* clean up unet
* clean TransformerSpatioTemporalModel
* add slow svd test
* clean up
* make temb optional in Decoder mid block
* fix norm eps in TransformerSpatioTemporalModel
* clean up temp decoder
* clean up
* clean up
* use c_noise values for timesteps
* use math for log
* update
* fix copies
* doc
* upcast vae
* update forward pass for gradient checkpointing
* make added_time_ids a tensor
* up
* fix upcasting
* remove post quant conv
* add _resize_with_antialiasing
* fix _compute_padding
* cleanup model
* more cleanup
* more cleanup
* more cleanup
* remove freeu
* remove attn slice
* small clean
* up
* up
* remove extra step kwargs
* remove eta
* remove dropout
* remove callback
* remove merge factor args
* clean
* clean up
* move to dedicated folder
* remove attention_head_dim
* docstr and small fix
* update unet doc strings
* rename decoding_t
* correct linting
* store c_skip and c_out
* cleanup
* clean TemporalResnetBlock
* more cleanup
* clean up vae
* clean up
* begin doc
* more cleanup
* up
* up
* doc
* Improve
* better naming
* better naming
* better naming
* better naming
* better naming
* better naming
* better naming
* better naming
* Apply suggestions from code review
* Default chunk size to None
* add example
* Better
* Apply suggestions from code review
* update doc
* Update src/diffusers/pipelines/stable_diffusion_video/pipeline_stable_diffusion_video.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* style
* Get torch compile working
* up
* rename
* fix doc
* add chunking
* torch compile
* torch compile
* add modelling outputs
* torch compile
* Improve chunking
* Apply suggestions from code review
* Update docs/source/en/using-diffusers/svd.md
* Close diff tag
* remove slicing
* resnet docstr
* add docstr in resnet
* rename
* Apply suggestions from code review
* update tests
* Fix output type latents
* fix more
* fix more
* Update docs/source/en/using-diffusers/svd.md
* fix more
* add pipeline tests
* remove unused arg
* clean up
* make sure get_scaling receives tensors
* fix euler scheduler
* fix get_scalings
* simply euler for now
* remove old test file
* use randn_tensor to create noise
* fix device for rand tensor
* increase expected_max_difference
* fix test_inference_batch_single_identical
* actually fix test_inference_batch_single_identical
* disable test_save_load_float16
* skip test_float16_inference
* skip test_inference_batch_single_identical
* fix test_xformers_attention_forwardGenerator_pass
* Apply suggestions from code review
* update StableVideoDiffusionPipelineSlowTests
* update image
* add diffusers example
* fix more
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: apolinário <joaopaulo.passos@gmail.com >
2023-11-29 19:13:36 +01:00
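An image-to-video sketch of the pipeline added here, using the class and checkpoint names the feature shipped with (`StableVideoDiffusionPipeline`, `stabilityai/stable-video-diffusion-img2vid-xt`); the conditioning image URL is a placeholder.

```python
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = load_image("https://example.com/conditioning.png").resize((1024, 576))  # placeholder
generator = torch.manual_seed(42)

# decode_chunk_size controls how many frames the temporal VAE decoder handles
# at once, trading decoding speed for peak memory.
frames = pipe(image, decode_chunk_size=8, generator=generator).frames[0]
export_to_video(frames, "generated.mp4", fps=7)
```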
Patrick von Platen
e550163b9f
[Vae] Make sure all vae's work with latent diffusion models ( #5880 )
...
* add comments to explain the code better
* add comments to explain the code better
* add comments to explain the code better
* add comments to explain the code better
* add comments to explain the code better
* fix more
* fix more
* fix more
* fix more
* fix more
* fix more
2023-11-27 14:17:47 +01:00
YiYi Xu
ba352aea29
[feat] IP Adapters (author @okotaku ) ( #5713 )
...
* add ip-adapter
---------
Co-authored-by: okotaku <to78314910@gmail.com >
Co-authored-by: sayakpaul <spsayakpaul@gmail.com >
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2023-11-21 07:34:30 -10:00
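For reference, a sketch of the base IP-Adapter usage introduced by this PR; the repository, weight file, and image URL below are illustrative assumptions rather than part of the commit.

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.utils import load_image

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.6)  # balance image conditioning against the text prompt

style = load_image("https://example.com/style.png")  # placeholder reference image
image = pipe(prompt="a dog in the park", ip_adapter_image=style, num_inference_steps=30).images[0]
```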
Kashif Rasul
6b04d61cf6
[Styling] stylify using ruff ( #5841 )
...
* ruff format
* no need to use doc-builder's black styling as the docs are styled with ruff
* make fix-copies
* comment
* use run_ruff
2023-11-20 11:48:34 +01:00
Will Berman
2a84e8bb5a
fix memory consistency decoder test ( #5828 )
...
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2023-11-17 09:31:01 -08:00
Patrick von Platen
bf406ea886
Correct consist dec ( #5722 )
...
* uP
* Update src/diffusers/models/consistency_decoder_vae.py
* uP
* uP
2023-11-09 13:10:24 +01:00
Will Berman
2fd46405cd
consistency decoder ( #5694 )
...
* consistency decoder
* rename
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/pipelines/consistency_models/pipeline_consistency_models.py
* uP
* Apply suggestions from code review
* uP
* uP
* uP
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-11-09 12:21:41 +01:00
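A short sketch of swapping the consistency decoder into a Stable Diffusion pipeline, assuming the `openai/consistency-decoder` checkpoint this feature targets.

```python
import torch
from diffusers import ConsistencyDecoderVAE, StableDiffusionPipeline

# Replace the default KL decoder with the distilled consistency decoder,
# which tends to produce cleaner fine detail at decode time.
vae = ConsistencyDecoderVAE.from_pretrained("openai/consistency-decoder", torch_dtype=torch.float16)
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", vae=vae, torch_dtype=torch.float16
).to("cuda")

image = pipe("an astronaut riding a horse on mars").images[0]
```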
Dhruv Nair
71f56c771a
Model tests xformers fixes ( #5679 )
...
* fix model xformers test
* update
2023-11-07 20:50:41 +05:30
Dhruv Nair
2a8cf8e39f
Animatediff Proposal ( #5413 )
...
* draft design
* clean up
* clean up
* clean up
* clean up
* clean up
* clean up
* clean up
* clean up
* clean up
* update pipeline
* clean up
* clean up
* clean up
* add tests
* change motion block
* clean up
* clean up
* clean up
* update
* update
* update
* update
* update
* update
* update
* update
* clean up
* update
* update
* update model test
* update
* update
* update
* update
* make style
* update
* fix embeddings
* update
* merge upstream
* make fix-copies
* fix bug
* fix mistake
* add docs
* update
* clean up
* update
* clean up
* clean up
* fix docstrings
* fix docstrings
* update
* update
* clean up
* update
2023-11-02 15:04:03 +01:00
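A sketch of the motion-adapter workflow this proposal introduces; the `guoyww/animatediff-motion-adapter-v1-5-2` checkpoint and scheduler settings are assumptions taken from common AnimateDiff setups, not from the commit itself.

```python
import torch
from diffusers import AnimateDiffPipeline, DDIMScheduler, MotionAdapter
from diffusers.utils import export_to_gif

# The motion adapter holds the temporal layers that get inserted into a
# frozen Stable Diffusion UNet (the "motion blocks" referenced above).
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2")
pipe = AnimateDiffPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", motion_adapter=adapter, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, beta_schedule="linear", clip_sample=False
)

output = pipe(prompt="a corgi running on the beach", num_frames=16, num_inference_steps=25)
export_to_gif(output.frames[0], "animation.gif")
```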
Vishnu V Jaddipal
8dba180885
Added support to create asymmetrical U-Net structures ( #5400 )
...
* Added args, kwargs to ```U
* Add UNetMidBlock2D as a supported mid block type
* Fix extra init input for UNetMidBlock2D, change allowed types for Mid-block init
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_condition.py
* Update unet_2d_blocks.py
* Update unet_2d_blocks.py
* Update unet_2d_blocks.py
* Update unet_2d_condition.py
* Update unet_2d_blocks.py
* Updated docstring, increased check strictness
Updated the docstring for ```UNet2DConditionModel``` to include ```reverse_transformer_layers_per_block``` and updated the check for nested-list ```transformer_layers_per_block``` values
* Add basic shape-check test for asymmetrical unets
* Update src/diffusers/models/unet_2d_blocks.py
Removed blank line
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update unet_2d_condition.py
Remove blank space
* Update unet_2d_condition.py
Changed docstring for `mid_block_type`
* Fixed docstring and wrong default value
* Reformat with black
* Reformat with necessary commands
* Add UNetMidBlockFlat to versatile_diffusion/modeling_text_unet.py to ensure consistency
* Removed args, kwargs, use on mid-block type
* Make fix-copies
* Update src/diffusers/models/unet_2d_condition.py
Wrap into single line
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* make fix-copies
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-10-20 11:42:28 +02:00
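A hypothetical toy configuration illustrating the new option; the exact shape constraints on `reverse_transformer_layers_per_block` (used together with nested `transformer_layers_per_block` lists) live in the docstring mentioned above, so only the newly accepted mid block type is shown here.

```python
from diffusers import UNet2DConditionModel

# Toy config: "UNetMidBlock2D" (no cross-attention) is now accepted as
# mid_block_type in addition to the default cross-attention mid block.
unet = UNet2DConditionModel(
    sample_size=32,
    block_out_channels=(32, 64),
    down_block_types=("CrossAttnDownBlock2D", "DownBlock2D"),
    up_block_types=("UpBlock2D", "CrossAttnUpBlock2D"),
    cross_attention_dim=32,
    mid_block_type="UNetMidBlock2D",
)
print(sum(p.numel() for p in unet.parameters()))
```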
Dhruv Nair
4d2c981d55
New xformers test runner ( #5349 )
...
* move xformers to dedicated runner
* fix
* remove ptl from test runner images
2023-10-13 00:32:39 +05:30
Dhruv Nair
9946dcf8db
Test Fixes for CUDA Tests and Fast Tests ( #5172 )
...
* fix other tests
* fix tests
* fix tests
* Update tests/pipelines/shap_e/test_shap_e_img2img.py
* Update tests/pipelines/shap_e/test_shap_e_img2img.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* fix upstream merge mistake
* fix tests:
* test fix
* Update tests/lora/test_lora_layers_old_backend.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Update tests/lora/test_lora_layers_old_backend.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-26 19:08:02 +05:30
Dhruv Nair
bdd2544673
Tests compile fixes ( #5148 )
...
* test fix
* fix tests
* fix report name
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-26 11:36:46 +05:30
Patrick von Platen
119ad2c3dc
[LoRA] Centralize LoRA tests ( #5086 )
...
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
2023-09-18 17:54:33 +02:00
dg845
4c8a05f115
Fix Consistency Models UNet2DMidBlock2D Attention GroupNorm Bug ( #4863 )
...
* Add attn_groups argument to UNet2DMidBlock2D to control the internal Attention block's GroupNorm.
* Add docstring for attn_norm_num_groups in UNet2DModel.
* Since the test UNet config uses resnet_time_scale_shift == 'scale_shift', also set attn_norm_num_groups to 32.
* Add test for attn_norm_num_groups to UNet2DModelTests.
* Fix expected slices for slow tests.
* Also fix tolerances for slow tests.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-09-15 11:27:51 +01:00
Patrick von Platen
b47f5115da
[Lora] fix lora fuse unfuse ( #5003 )
...
* fix lora fuse unfuse
* add same changes to loaders.py
* add test
---------
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com >
2023-09-13 11:21:04 +02:00
Sayak Paul
8009272f48
[Tests and Docs] Add a test on serializing pipelines with components containing fused LoRA modules ( #4962 )
...
* add: test to ensure pipelines can be saved with fused lora modules.
* add docs about serialization with fused lora.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Empty-Commit
* Update docs/source/en/training/lora.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-13 10:01:37 +01:00
Dhruv Nair
f64d52dbca
fix custom diffusion tests ( #4996 )
2023-09-12 17:50:47 +02:00
Dhruv Nair
b6e0b016ce
Lazy Import for Diffusers ( #4829 )
...
* initial commit
* move modules to import struct
* add dummy objects and _LazyModule
* add lazy import to schedulers
* clean up unused imports
* lazy import on models module
* lazy import for schedulers module
* add lazy import to pipelines module
* lazy import altdiffusion
* lazy import audio diffusion
* lazy import audioldm
* lazy import consistency model
* lazy import controlnet
* lazy import dance diffusion ddim ddpm
* lazy import deepfloyd
* lazy import kandinsky
* lazy imports
* lazy import semantic diffusion
* lazy imports
* lazy import stable diffusion
* move sd output to its own module
* clean up
* lazy import t2iadapter
* lazy import unclip
* lazy import versatile and vq diffusion
* lazy import vq diffusion
* helper to fetch objects from modules
* lazy import sdxl
* lazy import txt2vid
* lazy import stochastic karras
* fix model imports
* fix bug
* lazy import
* clean up
* clean up
* fixes for tests
* fixes for tests
* clean up
* remove import of torch_utils from utils module
* clean up
* clean up
* fix mistaken import statement
* dedicated modules for exporting and loading
* remove testing utils from utils module
* fixes from merge conflicts
* Update src/diffusers/pipelines/kandinsky2_2/__init__.py
* fix docs
* fix alt diffusion copied from
* fix check dummies
* fix more docs
* remove accelerate import from utils module
* add type checking
* make style
* fix check dummies
* remove torch import from xformers check
* clean up error message
* fixes after upstream merges
* dummy objects fix
* fix tests
* remove unused module import
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-11 09:56:22 +02:00
Will Berman
4191ddee11
Revert revert and install accelerate main ( #4963 )
...
* Revert "Temp Revert "[Core] better support offloading when side loading is enabled… (#4927 )"
This reverts commit 2ab170499e .
* tests: install accelerate from main
2023-09-11 08:49:46 +02:00
Will Berman
2ab170499e
Temp Revert "[Core] better support offloading when side loading is enabled… ( #4927 )
...
Revert "[Core] better support offloading when side loading is enabled. (#4855 )"
This reverts commit e4b8e7928b .
2023-09-08 19:54:59 -07:00
Patrick von Platen
2340ed629e
[Test] Reduce CPU memory ( #4897 )
...
* [Test] Reduce CPU memory
* [Test] Reduce CPU memory
2023-09-05 13:18:35 +05:30
Sayak Paul
e4b8e7928b
[Core] better support offloading when side loading is enabled. ( #4855 )
...
* better support offloading when side loading is enabled.
* load_textual_inversion
* better messaging for textual inversion.
* fixes
* address PR feedback.
* sdxl support.
* improve messaging
* recursive removal when cpu sequential offloading is enabled.
* add: lora tests
* recurse.
* add: offload tests for textual inversion.
2023-09-05 06:55:13 +05:30
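A sketch of the scenario this PR fixes: enabling CPU offloading first and side-loading LoRA / textual-inversion weights afterwards, which previously conflicted with the accelerate offload hooks. The LoRA repo and concept names are placeholders from the public Hub, not taken from the commit.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

# Side loading after offloading is enabled is the case this PR makes well-behaved.
pipe.load_lora_weights("sayakpaul/sd-model-finetuned-lora-t4")  # placeholder LoRA repo
pipe.load_textual_inversion("sd-concepts-library/cat-toy")      # placeholder concept

image = pipe("a <cat-toy> sitting on a sofa").images[0]
```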
Sayak Paul
c81a88b239
[Core] LoRA improvements pt. 3 ( #4842 )
...
* throw warning when attempting to fuse more than one lora.
* introduce support of lora scale during fusion.
* change test name
* changes
* change to _lora_scale
* lora_scale to call whenever applicable.
* debugging
* lora_scale additional.
* cross_attention_kwargs
* lora_scale -> scale.
* lora_scale fix
* lora_scale in patched projection.
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* styling.
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* remove unneeded prints.
* remove unneeded prints.
* assign cross_attention_kwargs.
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* clean up.
* refactor scale retrieval logic a bit.
* fix NoneType
* fix: tests
* add more tests
* more fixes.
* figure out a way to pass lora_scale.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* unify the retrieval logic of lora_scale.
* move adjust_lora_scale_text_encoder to lora.py.
* introduce dynamic lora scale adjustment support for sd
* fix up copies
* Empty-Commit
* add: test to check fusion equivalence on different scales.
* handle lora fusion warning.
* make lora smaller
* make lora smaller
* make lora smaller
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-04 23:52:31 +02:00
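A sketch of the two scaling paths touched here: a runtime scale passed through `cross_attention_kwargs` and a scale baked in at fusion time via `fuse_lora(lora_scale=...)`. The LoRA file path is a placeholder.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("path/to/lora.safetensors")  # placeholder local file

# Runtime scaling: applied on every forward pass without touching the weights.
image = pipe("pixel art of a castle", cross_attention_kwargs={"scale": 0.5}).images[0]

# Fusion-time scaling: merge the LoRA into the base weights at 0.7 strength.
pipe.fuse_lora(lora_scale=0.7)
image = pipe("pixel art of a castle").images[0]
```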
Dhruv Nair
189e9f01b3
Test Cleanup Precision issues ( #4812 )
...
* proposal for flaky tests
* more precision fixes
* move more tests to use cosine distance
* more test fixes
* clean up
* use default attn
* clean up
* update expected value
* make style
* make style
* Apply suggestions from code review
* Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py
* make style
* fix failing tests
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-01 17:58:37 +05:30
Patrick von Platen
9f1936d2fc
Fix Unfuse Lora ( #4833 )
...
* Fix Unfuse Lora
* add tests
* Fix more
* Fix more
* Fix all
* make style
* make style
2023-08-30 09:32:25 +05:30
Patrick von Platen
c583f3b452
Fuse loras ( #4473 )
...
* Fuse loras
* initial implementation.
* add slow test one.
* styling
* add: test for checking efficiency
* print
* position
* place model offload correctly
* style
* style.
* unfuse test.
* final checks
* remove warning test
* remove warnings altogether
* debugging
* tighten up tests.
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* suit up the generator initialization a bit.
* remove print
* update assertion.
* debugging
* remove print.
* fix: assertions.
* style
* can generator be a problem?
* generator
* correct tests.
* support text encoder lora fusion.
* tighten up tests.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-08-29 09:14:24 +02:00
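A sketch of the fuse/unfuse round trip added by this PR, using a public SDXL LoRA as an illustrative checkpoint.

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("nerijs/pixel-art-xl", weight_name="pixel-art-xl.safetensors")

# Merging removes the per-step LoRA matmul overhead; unfuse restores the
# original weights so another LoRA can be loaded afterwards.
pipe.fuse_lora()
image = pipe("pixel art, a cute corgi").images[0]
pipe.unfuse_lora()
```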
Patrick von Platen
766aa50f70
[LoRA Attn Processors] Refactor LoRA Attn Processors ( #4765 )
...
* [LoRA Attn] Refactor LoRA attn
* correct for network alphas
* fix more
* fix more tests
* fix more tests
* Move below
* Finish
* better version
* correct serialization format
* fix
* fix more
* fix more
* fix more
* Apply suggestions from code review
* Update src/diffusers/pipelines/stable_diffusion/pipeline_onnx_stable_diffusion_img2img.py
* deprecation
* relax atol for slow test slighly
* Finish tests
* make style
* make style
2023-08-28 10:38:09 +05:30
Patrick von Platen
c4d2823601
[SDXL Lora] Fix last ben sdxl lora ( #4797 )
...
* Fix last ben sdxl lora
* Correct typo
* make style
2023-08-26 23:31:56 +02:00
Dhruv Nair
4f05058bb7
Clean up flaky behaviour on Slow CUDA Pytorch Push Tests ( #4759 )
...
use max diff to compare model outputs
2023-08-24 18:58:02 +05:30
Ollin Boer Bohan
052bf3280b
Fix AutoencoderTiny encoder scaling convention ( #4682 )
...
* Fix AutoencoderTiny encoder scaling convention
* Add [-1, 1] -> [0, 1] rescaling to EncoderTiny
* Move [0, 1] -> [-1, 1] rescaling from AutoencoderTiny.decode to DecoderTiny
(i.e. immediately after the final conv, as early as possible)
* Fix missing [0, 255] -> [0, 1] rescaling in AutoencoderTiny.forward
* Update AutoencoderTinyIntegrationTests to protect against scaling issues.
The new test constructs a simple image, round-trips it through AutoencoderTiny,
and confirms the decoded result is approximately equal to the source image.
This test checks behavior with and without tiling enabled.
This test will fail if new AutoencoderTiny scaling issues are introduced.
* Context: Raw TAESD weights expect images in [0, 1], but diffusers'
convention represents images with zero-centered values in [-1, 1],
so AutoencoderTiny needs to scale / unscale images at the start of
encoding and at the end of decoding in order to work with diffusers.
* Re-add existing AutoencoderTiny test, update golden values
* Add comments to AutoencoderTiny.forward
2023-08-23 08:38:37 +05:30
Isotr0py
67ea2b7afa
Support tiled encode/decode for AutoencoderTiny ( #4627 )
...
* Impl tae slicing and tiling
* add tae tiling test
* add parameterized test
* formatted code
* fix failed test
* style docs
2023-08-18 09:12:55 +05:30
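A standalone sketch of the new slicing/tiling switches on `AutoencoderTiny`; the `madebyollin/taesd` checkpoint and latent shape are illustrative.

```python
import torch
from diffusers import AutoencoderTiny

vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16).to("cuda")

# Decode in per-image slices and spatial tiles to cap peak VRAM on large inputs.
vae.enable_slicing()
vae.enable_tiling()

latents = torch.randn(2, 4, 128, 128, dtype=torch.float16, device="cuda")  # ~1024x1024 images
with torch.no_grad():
    images = vae.decode(latents).sample
```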
Sayak Paul
a10107f92b
fix: lora sdxl tests ( #4652 )
2023-08-17 15:59:50 +05:30
Patrick von Platen
029fb41695
[Safetensors] Make safetensors the default way of saving weights ( #4235 )
...
* make safetensors default
* set default save method as safetensors
* update tests
* update to support saving safetensors
* update test to account for safetensors default
* update example tests to use safetensors
* update example to support safetensors
* update unet tests for safetensors
* fix failing loader tests
* fix qc issues
* fix pipeline tests
* fix example test
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2023-08-17 10:54:28 +05:30
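After this change, saving defaults to `.safetensors`; a quick sketch, with the output directories being arbitrary examples.

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Writes *.safetensors weight files by default now.
pipe.save_pretrained("./sd15-safetensors")

# Opting back into PyTorch .bin serialization is still possible.
pipe.save_pretrained("./sd15-bin", safe_serialization=False)
```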
Batuhan Taskaya
852dc76d6d
Support higher dimension LoRAs ( #4625 )
...
* Support higher dimension LoRAs
* add: tests
* fix: assertion values.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-08-17 10:07:07 +05:30
Scott Lessans
064f150813
Fix UnboundLocalError during LoRA loading ( #4523 )
...
* fixed
* add: tests
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-08-17 09:33:35 +05:30
Sayak Paul
15782fd506
[Pipeline utils] feat: implement push_to_hub for standalone models, schedulers as well as pipelines ( #4128 )
...
* feat: implement push_to_hub for standalone models.
* address PR feedback.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* remove max_shard_size.
* add: support for scheduler push_to_hub
* enable push_to_hub support for flax schedulers.
* enable push_to_hub for pipelines.
* Apply suggestions from code review
Co-authored-by: Lucain <lucainp@gmail.com >
* reflect pr feedback.
* address another round of feedback.
* better handling of kwargs.
* add: tests
* Apply suggestions from code review
Co-authored-by: Lucain <lucainp@gmail.com >
* setting hub staging to False for now.
* incorporate staging test as a separate job.
Co-authored-by: ydshieh <2521628+ydshieh@users.noreply.github.com >
* fix: tokenizer loading.
* fix: json dumping.
* move is_staging_test to a better location.
* better treatment of tokens.
* define repo_id to better handle concurrency
* style
* explicitly set token
* Empty-Commit
* move USER, TOKEN to test
* collate org_repo_id
* delete repo
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Lucain <lucainp@gmail.com >
Co-authored-by: ydshieh <2521628+ydshieh@users.noreply.github.com >
2023-08-15 07:39:22 +05:30
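A sketch of the standalone-object support this PR adds: models and schedulers gain `push_to_hub` alongside pipelines. The toy model config and repo ids are hypothetical.

```python
from diffusers import DDPMScheduler, UNet2DModel

# A deliberately tiny model so the upload is quick.
unet = UNet2DModel(
    sample_size=32,
    in_channels=3,
    out_channels=3,
    block_out_channels=(32, 64),
    down_block_types=("DownBlock2D", "AttnDownBlock2D"),
    up_block_types=("AttnUpBlock2D", "UpBlock2D"),
)
unet.push_to_hub("your-username/tiny-unet-demo")  # hypothetical repo id

scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.push_to_hub("your-username/ddpm-scheduler-demo", private=True)
```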
Abhipsha Das
c8d86e9f0a
Remove code snippets containing is_safetensors_available() ( #4521 )
...
* [WIP] Remove code snippets containing `is_safetensors_available()`
* Modifying `import_utils.py`
* update pipeline tests for safetensor default
* fix test related to cached requests
* address import nits
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2023-08-11 11:05:22 +05:30
Patrick von Platen
ea1fcc28a4
[SDXL] Allow SDXL LoRA to be run with less than 16GB of VRAM ( #4470 )
...
* correct
* correct blocks
* finish
* finish
* finish
* Apply suggestions from code review
* fix
* up
* up
* up
* Update examples/dreambooth/README_sdxl.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Apply suggestions from code review
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-08-04 20:06:38 +02:00
Sayak Paul
06f73bd6d1
[Tests] Adds integration tests for SDXL LoRAs ( #4462 )
...
* add: integration tests for SDXL LoRAs.
* change pipeline class.
* fix assertion values.
* print values again.
* let's see.
* let's see.
* let's see.
* finish
2023-08-04 16:25:53 +05:30
Sayak Paul
18fc40c169
[Feat] add tiny Autoencoder for (almost) instant decoding ( #4384 )
...
* add: model implementation of tiny autoencoder.
* add: inits.
* push the latest devs.
* add: conversion script and finish.
* add: scaling factor args.
* debugging
* fix denormalization.
* fix: positional argument.
* handle use_torch_2_0_or_xformers.
* handle post_quant_conv
* handle dtype
* fix: sdxl image processor for tiny ae.
* fix: sdxl image processor for tiny ae.
* unify upcasting logic.
* copied from madness.
* remove trailing whitespace.
* set is_tiny_vae = False
* address PR comments.
* change to AutoencoderTiny
* make act_fn a str throughout
* fix: apply_forward_hook decorator call
* get rid of the special is_tiny_vae flag.
* directly scale the output.
* fix dummies?
* fix: act_fn.
* get rid of the Clamp() layer.
* bring back copied from.
* movement of the blocks to appropriate modules.
* add: docstrings to AutoencoderTiny
* add: documentation.
* changes to the conversion script.
* add doc entry.
* settle tests.
* style
* add one slow test.
* fix
* fix 2
* fix 2
* fix: 4
* fix: 5
* finish integration tests
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* style
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2023-08-02 23:58:05 +05:30
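A sketch of using the tiny autoencoder as a drop-in VAE for fast decoding, assuming the `madebyollin/taesd` weights the conversion script targets.

```python
import torch
from diffusers import AutoencoderTiny, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
# Distilled decoder: much faster than the full KL VAE at a small quality cost.
pipe.vae = AutoencoderTiny.from_pretrained("madebyollin/taesd", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

image = pipe("a cheeseburger, studio photography", num_inference_steps=25).images[0]
```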
Sayak Paul
816ca0048f
[LoRA] Fix SDXL text encoder LoRAs ( #4371 )
...
* temporarily disable text encoder loras.
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging.
* modify doc.
* rename tests.
* print slices.
* fix: assertions
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-08-02 17:00:56 +05:30
Sayak Paul
4a4cdd6b07
[Feat] Support SDXL Kohya-style LoRA ( #4287 )
...
* sdxl lora changes.
* better name replacement.
* better replacement.
* debugging
* debugging
* debugging
* debugging
* debugging
* remove print.
* print state dict keys.
* print
* distinguish better
* debuggable.
* fix: tests
* fix: arg from training script.
* access from class.
* run style
* debug
* save intermediate
* some simplifications for SDXL LoRA
* styling
* unet config is not needed in diffusers format.
* fix: dynamic SGM block mapping for SDXL kohya loras (#4322 )
* Use lora compatible layers for linear proj_in/proj_out (#4323 )
* improve condition for using the sgm_diffusers mapping
* informative comment.
* load compatible keys and embedding layer mapping.
* Get SDXL 1.0 example lora to load
* simplify
* specify ranks and hidden sizes.
* better handling of k rank and hidden
* debug
* debug
* debug
* debug
* debug
* fix: alpha keys
* add check for handling LoRAAttnAddedKVProcessor
* sanity comment
* modifications for text encoder SDXL
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* up
* up
* up
* up
* up
* up
* unneeded comments.
* unneeded comments.
* kwargs for the other attention processors.
* kwargs for the other attention processors.
* debugging
* debugging
* debugging
* debugging
* improve
* debugging
* debugging
* more print
* Fix alphas
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* clean up
* clean up.
* debugging
* fix: text
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Batuhan Taskaya <batuhan@python.org >
2023-07-28 19:49:49 +02:00
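With this PR, Kohya-style SDXL LoRA files load through the regular entry point; a sketch with a placeholder local file.

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Kohya/SGM key names are remapped to diffusers layer names inside
# load_lora_weights, so no manual conversion step is needed.
pipe.load_lora_weights("path/to/kohya_sdxl_lora.safetensors")  # placeholder local file

image = pipe("papercut, a fox in a snowy forest").images[0]
```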
Patrick von Platen
1926331eaf
[Local loading] Correct bug with local files only ( #4318 )
...
* [Local loading] Correct bug with local files only
* file not found error
* fix
* finish
2023-07-27 16:16:46 +02:00
Batuhan Taskaya
ff8f58086b
Load Kohya-ss style LoRAs with auxiliary states ( #4147 )
...
* Support to load Kohya-ss style LoRA file format (without restrictions)
Co-Authored-By: Takuma Mori <takuma104@gmail.com >
Co-Authored-By: Sayak Paul <spsayakpaul@gmail.com >
* tmp: add sdxl to mlp_modules
---------
Co-authored-by: Takuma Mori <takuma104@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-07-26 00:24:19 +02:00
Sayak Paul
365e8461ac
[SDXL DreamBooth LoRA] add support for text encoder fine-tuning ( #4097 )
...
* Allow low precision sd xl
* finish
* finish
* feat: initial draft for supporting text encoder lora finetuning for SDXL DreamBooth
* fix: variable assignments.
* add: autocast block.
* add debugging
* vae dtype hell
* fix: vae dtype hell.
* fix: vae dtype hell 3.
* clean up
* lora text encoder loader.
* fix: unwrapping models.
* add: tests.
* docs.
* handle unexpected keys.
* fix vae dtype in the final inference.
* fix scope problem.
* fix: save_model_card args.
* initialize: prefix to None.
* fix: dtype issues.
* apply fixes.
* debugging.
* debugging
* debugging
* debugging
* debugging
* debugging
* add: fast tests.
* pre-tokenize.
* address: will's comments.
* fix: loader and tests.
* fix: dataloader.
* simplify dataloader.
* length.
* simplification.
* make style && make quality
* simplify state_dict munging
* fix: tests.
* fix: state_dict packing.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-07-25 05:35:48 +05:30
Batuhan Taskaya
ad787082e2
Fix unloading of LoRAs when xformers attention procs are in use ( #4179 )
2023-07-21 14:29:20 +05:30
Ruslan Vorovchenko
07f1fbb18e
Asymmetric vqgan ( #3956 )
...
* added AsymmetricAutoencoderKL
* fixed copies+dummy
* added script to convert original asymmetric vqgan
* added docs
* updated docs
* fixed style
* fixes, added tests
* update doc
* fixed doc
* fixed tests
* naming
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* naming
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* updated code example
* updated doc
* comments fixes
* added docstring
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* comments fixes
* added inpaint pipeline tests
* comment suggestion: delete method
* yet more fixes
---------
Co-authored-by: Ruslan Vorovchenko <r.vorovchenko@prequelapp.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-07-20 17:51:06 +02:00
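A sketch of pairing the new `AsymmetricAutoencoderKL` with the inpainting pipeline, assuming the `cross-attention/asymmetric-autoencoder-kl-x-1-5` weights; the image and mask URLs are placeholders.

```python
import torch
from diffusers import AsymmetricAutoencoderKL, StableDiffusionInpaintPipeline
from diffusers.utils import load_image

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
)
# The asymmetric decoder also conditions on the unmasked source pixels,
# which reduces color shift outside the inpainted region.
pipe.vae = AsymmetricAutoencoderKL.from_pretrained(
    "cross-attention/asymmetric-autoencoder-kl-x-1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

image = load_image("https://example.com/photo.png")      # placeholder
mask = load_image("https://example.com/photo_mask.png")  # placeholder
result = pipe(prompt="a park bench", image=image, mask_image=mask).images[0]
```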
Patrick von Platen
6b1abba18d
Add controlnet and vae from single file ( #4084 )
...
* Add controlnet from single file
* Updates
* make style
* finish
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-07-19 14:50:27 +02:00
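A sketch of the new single-file loaders for ControlNet and VAE checkpoints; the source URLs below are illustrative (any original-format `.pth`/`.safetensors` file should work).

```python
import torch
from diffusers import AutoencoderKL, ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_single_file(
    "https://huggingface.co/lllyasviel/ControlNet-v1-1/blob/main/control_v11p_sd15_canny.pth",
    torch_dtype=torch.float16,
)
vae = AutoencoderKL.from_single_file(
    "https://huggingface.co/stabilityai/sd-vae-ft-mse-original/blob/main/vae-ft-mse-840000-ema-pruned.safetensors",
    torch_dtype=torch.float16,
)

# Both converted models slot into a regular pipeline as usual.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, vae=vae, torch_dtype=torch.float16
)
```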