Dhruv Nair
c78ee143e9
Move more slow tests to nightly ( #5220 )
* move to nightly
* fix mistake
2023-09-28 19:00:41 +05:30
Patrick von Platen
a584d42ce5
[LoRA, Xformers] Fix xformers lora ( #5201 )
* fix xformers lora
* improve
* fix
2023-09-27 21:46:32 +05:30
Dhruv Nair
ba59e92fb0
Fix memory issues in tests ( #5183 )
* fix memory issues
* set _offload_gpu_id
* set gpu offload id
2023-09-27 14:04:57 +02:00
YiYi Xu
940f9410cb
Add test_full_loop_with_noise tests to all schedulers with an add_noise function ( #5184 )
* add fast tests for dpm-multi
* add more tests
* style
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2023-09-27 13:08:37 +02:00
Dhruv Nair
9946dcf8db
Test Fixes for CUDA Tests and Fast Tests ( #5172 )
* fix other tests
* fix tests
* fix tests
* Update tests/pipelines/shap_e/test_shap_e_img2img.py
* Update tests/pipelines/shap_e/test_shap_e_img2img.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* fix upstream merge mistake
* fix tests
* test fix
* Update tests/lora/test_lora_layers_old_backend.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Update tests/lora/test_lora_layers_old_backend.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-26 19:08:02 +05:30
Dhruv Nair
bdd2544673
Tests compile fixes ( #5148 )
* test fix
* fix tests
* fix report name
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-26 11:36:46 +05:30
Patrick von Platen
bed8aceca1
make style
2023-09-25 20:24:03 +02:00
Ryan Dick
415093335b
Fix the total_downscale_factor returned by FullAdapterXL T2IAdapters ( #5134 )
* Fix FullAdapterXL.total_downscale_factor.
* Fix incorrect error message in T2IAdapter.__init__(...).
* Move IP-Adapter test_total_downscale_factor(...) to pipeline test file (requested in code review).
* Add more info to error message about an unsupported T2I-Adapter adapter_type.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-25 20:23:14 +02:00
Anh71me
28254c79b6
Fix type annotation ( #5146 )
* Fix type annotation on Scheduler.from_pretrained
* Fix type annotation on PIL.Image
2023-09-25 19:26:39 +02:00
Patrick von Platen
30a512ea69
[Core] Improve .to(...) method, fix offloads multi-gpu, add docstring, add dtype ( #5132 )
* fix cpu offload
* fix
* fix
* Update src/diffusers/pipelines/pipeline_utils.py
* make style
* Apply suggestions from code review
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Pedro Cuenca <pedro@huggingface.co >
* fix more
* fix more
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Pedro Cuenca <pedro@huggingface.co >
2023-09-25 14:10:18 +02:00
Patrick von Platen
22b19d578e
[Tests] Add is flaky decorator ( #5139 )
* add is flaky decorator
* fix more
2023-09-25 13:24:44 +02:00
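The `is_flaky` decorator added here reruns a flaky test several times before letting it fail. A minimal sketch of the idea (parameter names and defaults below are assumptions, not the actual diffusers implementation):

```python
import functools
import time


def is_flaky(max_attempts=3, wait_before_retry=0.0):
    """Retry a decorated test up to max_attempts times before failing."""

    def decorator(test_func):
        @functools.wraps(test_func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return test_func(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise  # out of retries: surface the real failure
                    time.sleep(wait_before_retry)

        return wrapper

    return decorator
```

A test decorated with `@is_flaky()` then only fails the suite if it fails on every attempt.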
Younes Belkada
493f9529d7
[PEFT / LoRA] PEFT integration - text encoder ( #5058 )
* more fixes
* up
* up
* style
* add in setup
* oops
* more changes
* v1 rzfactor CI
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* few todos
* protect torch import
* style
* fix fuse text encoder
* Update src/diffusers/loaders.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* replace with `recurse_replace_peft_layers`
* keep old modules for BC
* adjustments on `adjust_lora_scale_text_encoder`
* nit
* move tests
* add conversion utils
* remove unneeded methods
* use class method instead
* oops
* use `base_version`
* fix examples
* fix CI
* fix weird error with python 3.8
* fix
* better fix
* style
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* add comment
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* conv2d support for recurse remove
* added docstrings
* more docstring
* add deprecate
* revert
* try to fix merge conflicts
* v1 tests
* add new decorator
* add saving utilities test
* adapt tests a bit
* add save / from_pretrained tests
* add saving tests
* add scale tests
* fix deps tests
* fix lora CI
* fix tests
* add comment
* fix
* style
* add slow tests
* slow tests pass
* style
* Update src/diffusers/utils/import_utils.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* circumvents pattern finding issue
* left a todo
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* update hub path
* add lora workflow
* fix
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2023-09-22 13:03:39 +02:00
YiYi Xu
2badddfdb6
add multi adapter support to StableDiffusionXLAdapterPipeline ( #5127 )
fix and add tests
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2023-09-21 12:54:59 -10:00
Ayush Mangal
157c9011d8
Add BLIP Diffusion ( #4388 )
* Add BLIP Diffusion skeleton
* Add other model components
* Add BLIP2, need to change it for now
* Fix pipeline imports
* Load pretrained ViT
* Make qformer fwd pass same
* Replicate fwd passes
* Fix device bug
* Add accelerate functions
* Remove extra functions from Blip2
* Minor bug
* Integrate initial review changes
* Refactoring
* Refactoring
* Refactor
* Add controlnet
* Refactor
* Update conversion script
* Add image processor
* Shift postprocessing to ImageProcessor
* Refactor
* Fix device
* Add fast tests
* Update conversion script
* Fix checkpoint conversion script
* Integrate review changes
* Integrate review changes
* Remove unused functions from test
* Reuse HF image processor in Cond image
* Create new BlipImageProcessor based on transformers
* Fix image preprocessor
* Minor
* Minor
* Add canny preprocessing
* Fix controlnet preprocessing
* Fix blip diffusion test
* Add controlnet test
* Add initial doc strings
* Integrate review changes
* Refactor
* Update examples
* Remove DDIM comments
* Add copied from for prepare_latents
* Add type annotations
* Add docstrings
* Do black formatting
* Add batch support
* Make tests pass
* Make controlnet tests pass
* Black formatting
* Fix progress bar
* Fix some licensing comments
* Fix imports
* Refactor controlnet
* Make tests faster
* Edit examples
* Black formatting/Ruff
* Add doc
* Minor
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Move controlnet pipeline
* Make tests faster
* Fix imports
* Fix formatting
* Fix make errors
* Fix make errors
* Minor
* Add suggested doc changes
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Edit docs
* Fix 16 bit loading
* Update examples
* Edit toctree
* Update docs/source/en/api/pipelines/blip_diffusion.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Minor
* Add tips
* Edit examples
* Update model paths
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-09-21 17:05:35 +01:00
Sayak Paul
e312b2302b
[LoRA] support LyCORIS ( #5102 )
* better condition.
* debugging
* how about now?
* how about now?
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* support for lycoris.
* style
* add: lycoris test
* fix from_pretrained call.
* fix assertion values.
2023-09-20 10:30:18 +01:00
YiYi Xu
8263cf00f8
refactor DPMSolverMultistepScheduler using sigmas ( #4986 )
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-19 11:21:49 -10:00
Dhruv Nair
29970757de
Fast Tests on PR improvements: Batch Tests fixes ( #5080 )
* fix test
* initial commit
* change test
* updates
* fix tests
* test fix
* test fix
* fix tests
* make test faster
* clean up
* fix precision in test
* fix precision
* Fix tests
* Fix logging test
* fix test
* fix test
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-19 18:31:21 +05:30
Dhruv Nair
c2787c11c2
Fixes for Float16 inference Fast CUDA Tests ( #5097 )
* wip
* fix tests
2023-09-19 17:25:48 +05:30
Dhruv Nair
79a3f39eb5
Move slow tests to nightly ( #5093 )
* move slow tests to nightly
* move slow tests to nightly
2023-09-19 16:04:26 +05:30
Dhruv Nair
431dd2f4d6
Fix precision related issues in Kandinsky Pipelines ( #5098 )
* fix failing tests
* make style
2023-09-19 16:02:21 +05:30
Patrick von Platen
5a287d3f23
[SDXL] Make sure multi batch prompt embeds works ( #5073 )
* [SDXL] Make sure multi batch prompt embeds works
* [SDXL] Make sure multi batch prompt embeds works
* improve more
* improve more
* Apply suggestions from code review
2023-09-19 11:49:49 +02:00
Will Berman
6d7279adad
t2i Adapter community member fix ( #5090 )
* convert tensorrt controlnet
* Fix code quality
* Fix code quality
* Fix code quality
* Fix code quality
* Fix code quality
* Fix code quality
* Fix number controlnet condition
* Add convert SD XL to onnx
* Add convert SD XL to tensorrt
* Add convert SD XL to tensorrt
* Add examples in comments
* Add examples in comments
* Add test onnx controlnet
* Add tensorrt test
* Remove copied
* Move file test to examples/community
* Remove script
* Remove script
* Remove text
* Fix import
* Fix T2I MultiAdapter
* fix tests
---------
Co-authored-by: dotieuthien <thien.do@mservice.com.vn >
Co-authored-by: dotieuthien <dotieuthien9997@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: dotieuthien <hades@cinnamon.is >
2023-09-18 22:35:49 +02:00
Patrick von Platen
119ad2c3dc
[LoRA] Centralize LoRA tests ( #5086 )
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
* [LoRA] Centralize LoRA tests
2023-09-18 17:54:33 +02:00
YiYi Xu
6886e28fd8
fix a bug in inpaint pipeline when use regular text2image unet ( #5033 )
* fix
* fix num_images_per_prompt >1
* other pipelines
* add fast tests for inpaint pipelines
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2023-09-18 13:40:11 +02:00
dg845
4c8a05f115
Fix Consistency Models UNet2DMidBlock2D Attention GroupNorm Bug ( #4863 )
* Add attn_groups argument to UNet2DMidBlock2D to control the internal Attention block's GroupNorm.
* Add docstring for attn_norm_num_groups in UNet2DModel.
* Since the test UNet config uses resnet_time_scale_shift == 'scale_shift', also set attn_norm_num_groups to 32.
* Add test for attn_norm_num_groups to UNet2DModelTests.
* Fix expected slices for slow tests.
* Also fix tolerances for slow tests.
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-09-15 11:27:51 +01:00
Dhruv Nair
5fd42e5d61
Add SDXL refiner only tests ( #5041 )
* add refiner only tests
* make style
2023-09-15 12:58:03 +05:30
Patrick von Platen
342c5c02c0
[Release 0.21] Bump version ( #5018 )
* [Release 0.21] Bump version
* fix & remove
* fix more
* fix all, upload
2023-09-14 18:28:57 +02:00
Patrick von Platen
b47f5115da
[Lora] fix lora fuse unfuse ( #5003 )
* fix lora fuse unfuse
* add same changes to loaders.py
* add test
---------
Co-authored-by: multimodalart <joaopaulo.passos+multimodal@gmail.com >
2023-09-13 11:21:04 +02:00
Sayak Paul
8009272f48
[Tests and Docs] Add a test on serializing pipelines with components containing fused LoRA modules ( #4962 )
* add: test to ensure pipelines can be saved with fused lora modules.
* add docs about serialization with fused lora.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Empty-Commit
* Update docs/source/en/training/lora.md
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-13 10:01:37 +01:00
Dhruv Nair
f64d52dbca
fix custom diffusion tests ( #4996 )
2023-09-12 17:50:47 +02:00
Dhruv Nair
4d897aaff5
fix image variation slow test ( #4995 )
fix image variation tests
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-12 17:45:47 +02:00
Kashif Rasul
73bf620dec
fix E721 Do not compare types, use isinstance() ( #4992 )
2023-09-12 16:52:25 +02:00
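The E721 cleanup above swaps direct type comparisons for `isinstance()`. A quick illustration of why the two are not equivalent (the function names are illustrative, not from the codebase):

```python
def is_int_strict(x):
    # flagged by flake8 E721: an exact type comparison ignores subclasses
    return type(x) == int


def is_int(x):
    # preferred form: isinstance() also accepts subclasses of int
    return isinstance(x, int)


# bool is a subclass of int, so the two checks disagree on True/False values
assert is_int_strict(True) is False
assert is_int(True) is True
```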
Patrick von Platen
93579650f8
Refactor model offload ( #4514 )
* [Draft] Refactor model offload
* [Draft] Refactor model offload
* Apply suggestions from code review
* cpu offload updates
* remove model cpu offload from individual pipelines
* add hook to offload models to cpu
* clean up
* model offload
* add model cpu offload string
* make style
* clean up
* fixes for offload issues
* fix tests issues
* resolve merge conflicts
* update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* make style
* Update src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com >
2023-09-11 19:39:26 +02:00
Kashif Rasul
16a056a7b5
Wuerstchen fixes ( #4942 )
* fix arguments and make example code work
* change arguments in combined test
* Add default timesteps
* style
* fixed test
* fix broken test
* formatting
* fix docstrings
* fix num_images_per_prompt
* fix doc styles
* please dont change this
* fix tests
* rename to DEFAULT_STAGE_C_TIMESTEPS
---------
Co-authored-by: Dominic Rampas <d6582533@gmail.com >
2023-09-11 15:47:53 +02:00
Patrick von Platen
6bbee1048b
Make sure Flax pipelines can be loaded into PyTorch ( #4971 )
* Make sure Flax pipelines can be loaded into PyTorch
* add test
* Update src/diffusers/pipelines/pipeline_utils.py
2023-09-11 12:03:49 +02:00
Dhruv Nair
b6e0b016ce
Lazy Import for Diffusers ( #4829 )
* initial commit
* move modules to import struct
* add dummy objects and _LazyModule
* add lazy import to schedulers
* clean up unused imports
* lazy import on models module
* lazy import for schedulers module
* add lazy import to pipelines module
* lazy import altdiffusion
* lazy import audio diffusion
* lazy import audioldm
* lazy import consistency model
* lazy import controlnet
* lazy import dance diffusion ddim ddpm
* lazy import deepfloyd
* lazy import kandinsky
* lazy imports
* lazy import semantic diffusion
* lazy imports
* lazy import stable diffusion
* move sd output to its own module
* clean up
* lazy import t2iadapter
* lazy import unclip
* lazy import versatile and vq diffusion
* lazy import vq diffusion
* helper to fetch objects from modules
* lazy import sdxl
* lazy import txt2vid
* lazy import stochastic karras
* fix model imports
* fix bug
* lazy import
* clean up
* clean up
* fixes for tests
* fixes for tests
* clean up
* remove import of torch_utils from utils module
* clean up
* clean up
* fix mistake import statement
* dedicated modules for exporting and loading
* remove testing utils from utils module
* fixes from merge conflicts
* Update src/diffusers/pipelines/kandinsky2_2/__init__.py
* fix docs
* fix alt diffusion copied from
* fix check dummies
* fix more docs
* remove accelerate import from utils module
* add type checking
* make style
* fix check dummies
* remove torch import from xformers check
* clean up error message
* fixes after upstream merges
* dummy objects fix
* fix tests
* remove unused module import
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-11 09:56:22 +02:00
Will Berman
4191ddee11
Revert revert and install accelerate main ( #4963 )
* Revert "Temp Revert "[Core] better support offloading when side loading is enabled… (#4927 )"
This reverts commit 2ab170499e .
* tests: install accelerate from main
2023-09-11 08:49:46 +02:00
Will Berman
2ab170499e
Temp Revert "[Core] better support offloading when side loading is enabled… ( #4927 )
Revert "[Core] better support offloading when side loading is enabled. (#4855 )"
This reverts commit e4b8e7928b .
2023-09-08 19:54:59 -07:00
Sayak Paul
9800cc5ece
[InstructPix2Pix] Fix pipeline implementation and add docs ( #4844 )
* initial evident fixes.
* instructpix2pix fixes.
* add: entry to doc.
* address PR feedback.
* make fix-copies
2023-09-07 15:34:19 +05:30
Kashif Rasul
541bb6ee63
Würstchen model ( #3849 )
* initial
* initial
* added initial convert script for paella vqmodel
* initial wuerstchen pipeline
* add LayerNorm2d
* added modules
* fix typo
* use model_v2
* embed clip caption and negative_caption
* fixed name of var
* initial modules in one place
* WuerstchenPriorPipeline
* initial shape
* initial denoising prior loop
* fix output
* add WuerstchenPriorPipeline to __init__.py
* use the noise ratio in the Prior
* try to save pipeline
* save_pretrained working
* Few additions
* add _execution_device
* shape is int
* fix batch size
* fix shape of ratio
* fix shape of ratio
* fix output dataclass
* tests folder
* fix formatting
* fix float16 + started with generator
* Update pipeline_wuerstchen.py
* removed vqgan code
* add WuerstchenGeneratorPipeline
* fix WuerstchenGeneratorPipeline
* fix docstrings
* fix imports
* convert generator pipeline
* fix convert
* Work on Generator Pipeline. WIP
* Pipeline works with our diffuzz code
* apply scale factor
* removed vqgan.py
* use cosine schedule
* redo the denoising loop
* Update src/diffusers/models/resnet.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* use torch.lerp
* use warp-diffusion org
* clip_sample=False,
* some refactoring
* use model_v3_stage_c
* c_cond size
* use clip-bigG
* allow stage b clip to be None
* add dummy
* würstchen scheduler
* minor changes
* set clip=None in the pipeline
* fix attention mask
* add attention_masks to text_encoder
* make fix-copies
* add back clip
* add text_encoder
* gen_text_encoder and tokenizer
* fix import
* updated pipeline test
* undo changes to pipeline test
* nip
* fix typo
* fix output name
* set guidance_scale=0 and remove diffuze
* fix doc strings
* make style
* nip
* removed unused
* initial docs
* rename
* toc
* cleanup
* remove test script
* fix-copies
* fix multi images
* remove dup
* remove unused modules
* undo changes for debugging
* no new line
* remove dup conversion script
* fix doc string
* cleanup
* pass default args
* dup permute
* fix some tests
* fix prepare_latents
* move Prior class to modules
* offload only the text encoder and vqgan
* fix resolution calculation for prior
* nip
* removed testing script
* fix shape
* fix argument to set_timesteps
* do not change .gitignore
* fix resolution calculations + readme
* resolution calculation fix + readme
* small fixes
* Add combined pipeline
* rename generator -> decoder
* Update .gitignore
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* removed efficient_net
* create combined WuerstchenPipeline
* make arguments consistent with VQ model
* fix var names
* no need to return text_encoder_hidden_states
* add latent_dim_scale to config
* split model into its own file
* add WuerschenPipeline to docs
* remove unused latent_size
* register latent_dim_scale
* update script
* update docstring
* use Attention preprocessor
* concat with normed input
* fix-copies
* add docs
* fix test
* fix style
* add to cpu_offloaded_model
* updated type
* remove 1-line func
* updated type
* initial decoder test
* formatting
* formatting
* fix autodoc link
* num_inference_steps is int
* remove comments
* fix example in docs
* Update src/diffusers/pipelines/wuerstchen/diffnext.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* rename layernorm to WuerstchenLayerNorm
* rename DiffNext to WuerstchenDiffNeXt
* added comment about MixingResidualBlock
* move paella vq-vae to pipelines' folder
* initial decoder test
* increased test_float16_inference expected diff
* self_attn is always true
* more passing decoder tests
* batch image_embeds
* fix failing tests
* set the correct dtype
* relax inference test
* update prior
* added combined pipeline test
* faster test
* faster test
* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* fix issues from review
* update wuerstchen.md + change generator name
* resolve issues
* fix copied from usage and add back batch_size
* fix API
* fix arguments
* fix combined test
* Added timesteps argument + fixes
* Update tests/pipelines/test_pipelines_common.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Update tests/pipelines/wuerstchen/test_wuerstchen_prior.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Update src/diffusers/pipelines/wuerstchen/pipeline_wuerstchen_combined.py
* up
* Fix more
* failing tests
* up
* up
* correct naming
* correct docs
* correct docs
* fix test params
* correct docs
* fix classifier free guidance
* fix classifier free guidance
* fix more
* fix all
* make tests faster
---------
Co-authored-by: Dominic Rampas <d6582533@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Dominic Rampas <61938694+dome272@users.noreply.github.com >
2023-09-06 16:15:51 +02:00
YiYi Xu
ea311e6989
remove latent input for kandinsky prior_emb2emb pipeline ( #4887 )
* remove latent input
* fix test
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2023-09-04 22:19:49 -10:00
Patrick von Platen
2340ed629e
[Test] Reduce CPU memory ( #4897 )
* [Test] Reduce CPU memory
* [Test] Reduce CPU memory
2023-09-05 13:18:35 +05:30
Sayak Paul
e4b8e7928b
[Core] better support offloading when side loading is enabled. ( #4855 )
* better support offloading when side loading is enabled.
* load_textual_inversion
* better messaging for textual inversion.
* fixes
* address PR feedback.
* sdxl support.
* improve messaging
* recursive removal when cpu sequential offloading is enabled.
* add: lora tests
* recurse.
* add: offload tests for textual inversion.
2023-09-05 06:55:13 +05:30
Sayak Paul
c81a88b239
[Core] LoRA improvements pt. 3 ( #4842 )
* throw warning when more than one lora is attempted to be fused.
* introduce support of lora scale during fusion.
* change test name
* changes
* change to _lora_scale
* lora_scale to call whenever applicable.
* debugging
* lora_scale additional.
* cross_attention_kwargs
* lora_scale -> scale.
* lora_scale fix
* lora_scale in patched projection.
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* styling.
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* remove unneeded prints.
* remove unneeded prints.
* assign cross_attention_kwargs.
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* clean up.
* refactor scale retrieval logic a bit.
* fix NoneType
* fix: tests
* add more tests
* more fixes.
* figure out a way to pass lora_scale.
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* unify the retrieval logic of lora_scale.
* move adjust_lora_scale_text_encoder to lora.py.
* introduce dynamic adjustment lora scale support to sd
* fix up copies
* Empty-Commit
* add: test to check fusion equivalence on different scales.
* handle lora fusion warning.
* make lora smaller
* make lora smaller
* make lora smaller
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-04 23:52:31 +02:00
YiYi Xu
2c1677eefe
allow passing components to connected pipelines when use the combined pipeline ( #4883 )
* fix
* add test
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2023-09-04 06:21:36 -10:00
dg845
c73e609aae
Fix get_dummy_inputs for Stable Diffusion Inpaint Tests ( #4845 )
* Change StableDiffusionInpaintPipelineFastTests.get_dummy_inputs to produce a random image and a white mask_image.
* Add dummy expected slices for the test_stable_diffusion_inpaint tests.
* Remove print statement
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-04 12:04:59 +02:00
Patrick von Platen
705c592ea9
[Tests] Add combined pipeline tests ( #4869 )
* [Tests] Add combined pipeline tests
* Update tests/pipelines/kandinsky_v22/test_kandinsky.py
2023-09-02 21:36:20 +02:00
Harutatsu Akiyama
c52acaaf17
[ControlNet SDXL Inpainting] Support inpainting of ControlNet SDXL ( #4694 )
* [ControlNet SDXL Inpainting] Support inpainting of ControlNet SDXL
Co-authored-by: Jiabin Bai <1355864570@qq.com>
---------
Co-authored-by: Harutatsu Akiyama <kf.zy.qin@gmail.com >
2023-09-02 08:04:22 -10:00
YiYi Xu
5c404f20f4
[WIP] masked_latent_inputs for inpainting pipeline ( #4819 )
* add
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com>
2023-09-01 06:55:31 -10:00
YiYi Xu
d8b6f5d09e
support AutoPipeline.from_pipe between a pipeline and its ControlNet pipeline counterpart ( #4861 )
add
2023-09-01 06:53:03 -10:00