* [WIP][LoRA] Implement hot-swapping of LoRA
This PR adds the possibility to hot-swap LoRA adapters. It is WIP.
Description
As of now, users can already load multiple LoRA adapters. They can
offload existing adapters or they can unload them (i.e. delete them).
However, they cannot "hotswap" adapters yet, i.e. substitute the weights
from one LoRA adapter with the weights of another, without the need to
create a separate LoRA adapter.
Generally, hot-swapping may not appear super useful, but when the
model is compiled, it is necessary to prevent recompilation. See #9279
for more context.
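For illustration, usage could look roughly like this once the feature lands
(repo IDs are placeholders; the hotswap argument and the "same adapter_name"
convention follow later commits in this PR and may still change):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Load the first LoRA and compile once.
pipe.load_lora_weights("user/lora-A", adapter_name="adapter")  # placeholder repo
pipe.unet = torch.compile(pipe.unet)
image_a = pipe("a photo of a cat").images[0]

# Hot-swap: the second LoRA replaces the first one's weights in place, so the
# compiled model keeps working without recompilation. Note that the same
# adapter_name is passed.
pipe.load_lora_weights("user/lora-B", adapter_name="adapter", hotswap=True)
image_b = pipe("a photo of a cat").images[0]
```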
Caveats
To hot-swap a LoRA adapter for another, these two adapters should target
exactly the same layers and the "hyper-parameters" of the two adapters
should be identical. For instance, the LoRA alpha has to be the same:
since we keep the alpha from the first adapter, the LoRA scaling would
otherwise be incorrect for the second adapter.
Theoretically, we could override the scaling dict with the alpha values
derived from the second adapter's config, but changing the dict will
trigger a guard for recompilation, defeating the main purpose of the
feature.
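To make the caveat concrete (the numbers below are purely illustrative):

```python
# PEFT computes the LoRA scaling as lora_alpha / r.
scaling_first = 16 / 8   # adapter 1: alpha=16, r=8 -> 2.0, kept after compilation
scaling_second = 32 / 8  # adapter 2: alpha=32, r=8 -> would actually need 4.0
# Keeping the first adapter's scaling silently mis-scales the second adapter;
# overwriting the scaling dict instead trips a recompilation guard.
```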
I also found that compilation flags can have an impact on whether this
works or not. E.g. when passing "reduce-overhead", there will be errors
of the type:
> input name: arg861_1. data pointer changed from 139647332027392 to
139647331054592
I don't know enough about compilation to determine whether this is
problematic or not.
Current state
This is obviously WIP right now to collect feedback and discuss which
direction to take this. If this PR turns out to be useful, the
hot-swapping functions will be added to PEFT itself and can be imported
here (or there is a separate copy in diffusers to avoid the need for a
min PEFT version to use this feature).
Moreover, more tests need to be added to better cover this feature,
although we don't necessarily need tests for the hot-swapping
functionality itself, since those tests will be added to PEFT.
Furthermore, as of now, this is only implemented for the UNet; other
pipeline components do not support this feature yet.
Finally, it should be properly documented.
I would like to collect feedback on the current state of the PR before
putting more time into finalizing it.
* Reviewer feedback
* Reviewer feedback, adjust test
* Fix, doc
* Make fix
* Fix for possible g++ error
* Add test for recompilation w/o hotswapping
* Make hotswap work
Requires https://github.com/huggingface/peft/pull/2366
More changes to make hotswapping work. Together with the mentioned PEFT
PR, the tests pass for me locally.
List of changes:
- docstring for hotswap
- remove code copied from PEFT, import from PEFT now
- adjustments to PeftAdapterMixin.load_lora_adapter (unfortunately, some
state dict renaming was necessary, LMK if there is a better solution)
- adjustments to UNet2DConditionLoadersMixin._process_lora: LMK if this
is even necessary or not, I'm unsure what the overall relationship is
between this and PeftAdapterMixin.load_lora_adapter
- also in UNet2DConditionLoadersMixin._process_lora, I saw that there is
no LoRA unloading when loading the adapter fails, so I added it
there (in line with what happens in PeftAdapterMixin.load_lora_adapter)
- rewritten tests to avoid shelling out, make the test more precise by
making sure that the outputs align, parametrize it
- also checked the pipeline code mentioned in this comment:
https://github.com/huggingface/diffusers/pull/9453#issuecomment-2418508871;
when running it inside the
torch._dynamo.config.patch(error_on_recompile=True) context manager, there is
no error, so I think hotswapping is now working with pipelines (see the
sketch below).
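For reference, the recompilation check can be reproduced roughly like this
(repo IDs are placeholders; error_on_recompile turns any recompilation into a
hard error, while the initial compilation is still allowed):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("user/lora-A", adapter_name="adapter")  # placeholder repo
pipe.unet = torch.compile(pipe.unet)

with torch._dynamo.config.patch(error_on_recompile=True):
    pipe("first prompt")   # compiles on the first call
    pipe.load_lora_weights("user/lora-B", adapter_name="adapter", hotswap=True)
    pipe("second prompt")  # must reuse the already compiled graph
```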
* Address reviewer feedback:
- Revert deprecated method
- Fix PEFT doc link to main
- Don't use private function
- Clarify magic numbers
- Add pipeline test
Moreover:
- Extend docstrings
- Extend existing test for outputs != 0
- Extend existing test for wrong adapter name
* Change order of test decorators
parameterized.expand seems to ignore skip decorators if they are applied
last (i.e. as the innermost decorator).
* Split model and pipeline tests
Also increase test coverage by targeting conv2d layers (support for
which was added recently in the PEFT PR).
* Reviewer feedback: Move decorator to test classes
... instead of having them on each test method.
* Apply suggestions from code review
Co-authored-by: hlky <hlky@hlky.ac>
* Reviewer feedback: version check, TODO comment
* Add enable_lora_hotswap method
* Reviewer feedback: check _lora_loadable_modules
* Revert changes in unet.py
* Add possibility to ignore hotswap being enabled at the wrong time
* Fix docstrings
* Log possible PEFT error, test
* Raise helpful error if hotswap not supported
I.e. for the text encoder
* Formatting
* More linter
* More ruff
* Doc-builder complaint
* Update docstring:
- mention no text encoder support yet
- make it clear that LoRA is meant
- mention that same adapter name should be passed
* Fix error in docstring
* Update more methods with hotswap argument
- SDXL
- SD3
- Flux
No changes were made to load_lora_into_transformer.
* Add hotswap argument to load_lora_into_transformer
For SD3 and Flux. Use shorter docstring for brevity.
* Extend docstrings
* Add version guards to tests
* Formatting
* Fix LoRA loading call to add prefix=None
See:
https://github.com/huggingface/diffusers/pull/10187#issuecomment-2717571064
* Run make fix-copies
* Add hot swap documentation to the docs
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Refactor `LTXConditionPipeline` to add text-only conditioning
* style
* up
* Refactor `LTXConditionPipeline` to streamline condition handling and improve clarity
* Improve condition checks
* Simplify latents handling based on conditioning type
* Refactor rope_interpolation_scale preparation for clarity and efficiency
* Update LTXConditionPipeline docstring to clarify supported input types
* Add LTX Video 0.9.5 model to documentation
* Clarify documentation to indicate support for text-only conditioning without passing `conditions`
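A rough sketch of what prompt-only usage looks like after this refactor (the
checkpoint name and call arguments are assumptions based on the 0.9.5 docs
entry above, not the exact documentation snippet):

```python
import torch
from diffusers import LTXConditionPipeline
from diffusers.utils import export_to_video

pipe = LTXConditionPipeline.from_pretrained(
    "Lightricks/LTX-Video-0.9.5", torch_dtype=torch.bfloat16
).to("cuda")

# Text-only conditioning: no `conditions` argument is passed at all.
video = pipe(
    prompt="a drone shot over a foggy coastline at sunrise",
    num_frames=97,
    num_inference_steps=40,
).frames[0]
export_to_video(video, "output.mp4", fps=24)
```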
* refactor: comment out unused parameters in LTXConditionPipeline
* fix: restore previously commented parameters in LTXConditionPipeline
* fix: remove unused parameters from LTXConditionPipeline
* refactor: remove unnecessary lines in LTXConditionPipeline
* minor documentation fixes of the depth and normals pipelines
* update license headers
* update model checkpoints in examples
fix missing prediction_type in register_to_config in the normals pipeline
* add initial marigold intrinsics pipeline
update comments about num_inference_steps and ensemble_size
minor fixes in comments of marigold normals and depth pipelines
* update uncertainty visualization to work with intrinsics
* integrate iid
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* init
* encode with glm
* draft schedule
* feat(scheduler): Add CogView scheduler implementation
* feat(embeddings): add CogView 2D rotary positional embedding
* 1
* Update pipeline_cogview4.py
* fix the timestep init and sigma
* update latent
* draft patch (not working)
* fix
* [WIP][cogview4]: implement initial CogView4 pipeline
Implement the basic CogView4 pipeline structure with the following changes:
- Add CogView4 pipeline implementation
- Implement DDIM scheduler for CogView4
- Add CogView3Plus transformer architecture
- Update embedding models
Current limitations:
- CFG implementation uses padding for sequence length alignment
- Need to verify transformer inference alignment with Megatron
TODO:
- Consider separate forward passes for condition/uncondition
instead of padding approach
* [WIP][cogview4][refactor]: Split condition/uncondition forward pass in CogView4 pipeline
Split the forward pass for conditional and unconditional predictions in the CogView4 pipeline to match the original implementation. The noise prediction is now done separately for each case before combining them for guidance. However, the results still need improvement.
This is a work in progress as the generated images are not yet matching expected quality.
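Schematically, the split is just standard classifier-free guidance done with
two separate calls (variable names are illustrative, not the actual pipeline
code):

```python
# Two separate transformer calls instead of one padded/batched pass.
noise_pred_cond = transformer(latents, encoder_hidden_states=prompt_embeds, timestep=t).sample
noise_pred_uncond = transformer(latents, encoder_hidden_states=negative_prompt_embeds, timestep=t).sample

# Combine for classifier-free guidance.
noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_cond - noise_pred_uncond)
```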
* use the -2 (second-to-last) hidden state
* remove text_projector
* 1
* [WIP] Add tensor-reload to align input from transformer block
* [WIP] for older glm
* run the CogView4 transformer forward twice, once each for uncond and cond
* Update convert_cogview4_to_diffusers.py
* remove this
* use main example
* change back
* reset
* setback
* back
* back 4
* Fix qkv conversion logic for CogView4 to Diffusers format
* back5
* revert to sat to cogview4 version
* add a new conversion from Megatron
* [WIP][cogview4]: implement CogView4 attention processor
Add CogView4AttnProcessor class for implementing scaled dot-product attention
with rotary embeddings for the CogView4 model. This processor concatenates
encoder and hidden states, applies QKV projections and RoPE, but does not
include spatial normalization.
TODO:
- Fix incorrect QKV projection weights
- Resolve ~25% error in RoPE implementation compared to Megatron
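A minimal sketch of what such a processor does, not the final implementation
(the helper function name, shapes, and the image-only RoPE application are
assumptions based on the description above):

```python
import torch
import torch.nn.functional as F
from diffusers.models.embeddings import apply_rotary_emb

def cogview_like_attention(attn, hidden_states, encoder_hidden_states, image_rotary_emb):
    text_len = encoder_hidden_states.shape[1]
    # Concatenate text and image tokens into one joint sequence.
    hidden_states = torch.cat([encoder_hidden_states, hidden_states], dim=1)

    # QKV projections, reshaped to (batch, heads, seq_len, head_dim).
    def project(proj):
        return proj(hidden_states).unflatten(2, (attn.heads, -1)).transpose(1, 2)

    query, key, value = project(attn.to_q), project(attn.to_k), project(attn.to_v)

    # RoPE is applied to the image tokens only.
    query[:, :, text_len:] = apply_rotary_emb(query[:, :, text_len:], image_rotary_emb)
    key[:, :, text_len:] = apply_rotary_emb(key[:, :, text_len:], image_rotary_emb)

    out = F.scaled_dot_product_attention(query, key, value)
    out = attn.to_out[0](out.transpose(1, 2).flatten(2, 3))
    # Split back into the text and image streams.
    return out[:, :text_len], out[:, text_len:]
```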
* [cogview4] implement CogView4 transformer block
Implement CogView4 transformer block following the Megatron architecture:
- Add multi-modulate and multi-gate mechanisms for adaptive layer normalization
- Implement dual-stream attention with encoder-decoder structure
- Add feed-forward network with GELU activation
- Support rotary position embeddings for image tokens
The implementation follows the original CogView4 architecture while adapting
it to work within the diffusers framework.
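The multi-modulate / multi-gate mechanism is the usual DiT-style adaptive
layer norm idea; an illustrative sketch (class and parameter names are made
up, not the actual module):

```python
import torch
import torch.nn as nn

class AdaLNModulateGate(nn.Module):
    """Illustrative adaptive layer norm with modulation and gating."""

    def __init__(self, dim: int, time_dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        # Predict shift, scale and gate from the time embedding.
        self.proj = nn.Linear(time_dim, 3 * dim)

    def forward(self, x: torch.Tensor, temb: torch.Tensor):
        shift, scale, gate = self.proj(nn.functional.silu(temb)).chunk(3, dim=-1)
        modulated = self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
        # The block then adds gate * sublayer(modulated) to the residual stream.
        return modulated, gate.unsqueeze(1)
```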
* with new attn
* [bugfix] fix dimension mismatch in CogView4 attention
* [cogview4][WIP]: update final normalization in CogView4 transformer
Refactored the final normalization layer in CogView4 transformer to use separate layernorm and AdaLN operations instead of combined AdaLayerNormContinuous. This matches the original implementation but needs validation.
Needs verification against reference implementation.
* 1
* put back
* Update transformer_cogview4.py
* change time_shift
* Update pipeline_cogview4.py
* change timesteps
* fix
* change text_encoder_id
* [cogview4][rope] align RoPE implementation with Megatron
- Implement apply_rope method in attention processor to match Megatron's implementation
- Update position embeddings to ensure compatibility with Megatron-style rotary embeddings
- Ensure consistent rotary position encoding across attention layers
This change improves compatibility with Megatron-based models and provides
better alignment with the original implementation's positional encoding approach.
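For reference, the Megatron-style rotate-half formulation looks roughly like
this (the actual tensor layout in the attention processor may differ):

```python
import torch

def apply_rope(x: torch.Tensor, cos: torch.Tensor, sin: torch.Tensor) -> torch.Tensor:
    # x: (..., seq_len, head_dim); cos/sin broadcast over the leading dims.
    x1, x2 = x.chunk(2, dim=-1)
    rotated = torch.cat((-x2, x1), dim=-1)  # "rotate half"
    return x * cos + rotated * sin
```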
* [cogview4][bugfix] apply silu activation to time embeddings in CogView4
Applied silu activation to time embeddings before splitting into conditional
and unconditional parts in CogView4Transformer2DModel. This matches the
original implementation and helps ensure correct time conditioning behavior.
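Schematically (names are illustrative, not the actual model code):

```python
import torch.nn.functional as F

# Apply SiLU to the time embedding first, then split it into the conditional
# and unconditional halves used by the two separate CFG forward passes.
temb = F.silu(temb)
temb_cond, temb_uncond = temb.chunk(2, dim=0)
```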
* [cogview4][chore] clean up pipeline code
- Remove commented out code and debug statements
- Remove unused retrieve_timesteps function
- Clean up code formatting and documentation
This commit focuses on code cleanup in the CogView4 pipeline implementation, removing unnecessary commented code and improving readability without changing functionality.
* [cogview4][scheduler] Implement CogView4 scheduler and pipeline
* now it works
* add timestep
* batch
* change convert script
* refactor pt. 1; make style
* refactor pt. 2
* refactor pt. 3
* add tests
* make fix-copies
* update toctree.yml
* use flow match scheduler instead of custom
* remove scheduling_cogview.py
* add tiktoken to test dependencies
* Update src/diffusers/models/embeddings.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* apply suggestions from review
* use diffusers apply_rotary_emb
* update flow match scheduler to accept timesteps
* fix comment
* apply review suggestions
* Update src/diffusers/schedulers/scheduling_flow_match_euler_discrete.py
Co-authored-by: YiYi Xu <yixu310@gmail.com>
---------
Co-authored-by: 三洋三洋 <1258009915@qq.com>
Co-authored-by: OleehyO <leehy0357@gmail.com>
Co-authored-by: Aryan <aryan@huggingface.co>
Co-authored-by: YiYi Xu <yixu310@gmail.com>
* Update Custom Diffusion Documentation for Multiple Concept Inference
This PR updates the Custom Diffusion documentation to correctly demonstrate multiple concept inference by:
- Initializing the pipeline from a proper foundation model (e.g., "CompVis/stable-diffusion-v1-4") instead of a fine-tuned model.
- Defining model_id explicitly to avoid NameError.
- Correcting method calls for loading attention processors and textual inversion embeddings.
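The corrected inference snippet then looks roughly like this (model_id is a
placeholder for the repo holding the fine-tuned Custom Diffusion weights;
the weight file names follow the training script defaults and may need
adjusting):

```python
import torch
from diffusers import DiffusionPipeline

model_id = "path-to-custom-diffusion-weights"  # placeholder
pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Load the fine-tuned attention processors and the learned concept tokens.
pipeline.unet.load_attn_procs(model_id, weight_name="pytorch_custom_diffusion_weights.bin")
pipeline.load_textual_inversion(model_id, weight_name="<new1>.bin")
pipeline.load_textual_inversion(model_id, weight_name="<new2>.bin")

image = pipeline(
    "the <new1> cat sculpture in the style of a <new2> wooden pot",
    num_inference_steps=100,
    guidance_scale=6.0,
    eta=1.0,
).images[0]
```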
* update
* fix
* non_blocking; handle parameters and buffers
* update
* Group offloading with cuda stream prefetching (#10516)
* cuda stream prefetch
* remove breakpoints
* update
* copy model hook implementation from pab
* update; ~very workaround based implementation but it seems to work as expected; needs cleanup and rewrite
* more workarounds to make it actually work
* cleanup
* rewrite
* update
* make sure to sync current stream before overwriting with pinned params
not doing so will lead to erroneous computations on the GPU and cause bad results
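Concretely, the fix boils down to something like this before reusing the
pinned staging buffers (a sketch, not the actual hook code):

```python
import torch

# Wait for all work queued on the current (default) stream, so the GPU is not
# still reading the old parameter values while we overwrite the pinned buffers.
torch.cuda.current_stream().synchronize()
# ... now it is safe to copy the next group's params into the pinned buffers
#     and launch the async host-to-device transfer on the side stream ...
```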
* better check
* update
* remove hook implementation to not deal with merge conflict
* re-add hook changes
* why use more memory when less memory do trick
* why still use slightly more memory when less memory do trick
* optimise
* add model tests
* add pipeline tests
* update docs
* add layernorm and groupnorm
* address review comments
* improve tests; add docs
* improve docs
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* apply suggestions from code review
* update tests
* apply suggestions from review
* enable_group_offloading -> enable_group_offload for naming consistency
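For reference, usage after this PR looks roughly like this (checkpoint and
argument values are just an example; argument names follow what ended up in
the docs commit above):

```python
import torch
from diffusers import CogVideoXPipeline

pipe = CogVideoXPipeline.from_pretrained("THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16)

# Offload the transformer's layers in groups and prefetch the next group on a
# CUDA stream so transfers overlap with compute.
pipe.transformer.enable_group_offload(
    onload_device=torch.device("cuda"),
    offload_device=torch.device("cpu"),
    offload_type="leaf_level",
    use_stream=True,
)
# The remaining components (text encoder, VAE) still need to be moved to the
# GPU or offloaded separately before running the pipeline.
```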
* raise errors if multiple offloading strategies used; add relevant tests
* handle .to() when group offload applied
* refactor some repeated code
* remove unintentional change from merge conflict
* handle .cuda()
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>