dependabot[bot]
7251bb4fd0
Bump urllib3 from 2.2.3 to 2.5.0 in /examples/server (#11748)
...
Bumps [urllib3](https://github.com/urllib3/urllib3) from 2.2.3 to 2.5.0.
- [Release notes](https://github.com/urllib3/urllib3/releases)
- [Changelog](https://github.com/urllib3/urllib3/blob/main/CHANGES.rst)
- [Commits](https://github.com/urllib3/urllib3/compare/2.2.3...2.5.0)
---
updated-dependencies:
- dependency-name: urllib3
  dependency-version: 2.5.0
  dependency-type: indirect
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-06-19 11:09:33 +05:30
Aryan
a4df8dbc40
Update more licenses to 2025 (#11746)
...
update
2025-06-19 07:46:01 +05:30
Sayak Paul
62cce3045d
[chore] change to 2025 licensing for remaining (#11741)
...
change to 2025 licensing for remaining
2025-06-18 20:56:00 +05:30
Leo Jiang
d72184eba3
[training] add ds support to lora hidream (#11737)
...
* [training] add ds support to lora hidream
* Apply style fixes
---------
Co-authored-by: Jη³ι‘΅ <jiangshuo9@h-partners.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-06-18 09:26:02 +05:30
Linoy Tsaban
1bc6f3dc0f
[LoRA training] update metadata use for lora alpha + README (#11723)
...
* lora alpha
* Apply style fixes
* Update examples/advanced_diffusion_training/README_flux.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* fix readme format
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-06-17 12:19:27 +03:00
Sayak Paul
f0dba33d82
[training] show how metadata stuff should be incorporated in training scripts. (#11707)
...
* show how metadata stuff should be incorporated in training scripts.
* typing
* fix
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-06-16 16:42:34 +05:30
Sayak Paul
368958df6f
[LoRA] parse metadata from LoRA and save metadata (#11324)
...
* feat: parse metadata from lora state dicts.
* tests
* fix tests
* key renaming
* fix
* smol update
* smol updates
* load metadata.
* automatically save metadata in save_lora_adapter.
* propagate changes.
* changes
* add test to models too.
* tighter tests.
* updates
* fixes
* rename tests.
* sorted.
* Update src/diffusers/loaders/lora_base.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* review suggestions.
* removeprefix.
* propagate changes.
* fix-copies
* sd
* docs.
* fixes
* get review ready.
* one more test to catch error.
* change to a different approach.
* fix-copies.
* todo
* sd3
* update
* revert changes in get_peft_kwargs.
* update
* fixes
* fixes
* simplify _load_sft_state_dict_metadata
* update
* style fix
* update
* update
* update
* empty commit
* _pack_dict_with_prefix
* update
* TODO 1.
* todo: 2.
* todo: 3.
* update
* update
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* reraise.
* move argument.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-06-13 14:37:49 +05:30
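The metadata flow described in #11324 (parse metadata from LoRA state dicts, save it automatically) boils down to a round-trip through the safetensors string-only metadata header. The sketch below is illustrative only: the key name and helper names are assumptions, not the actual diffusers API.

```python
import json

# Hypothetical header key; diffusers uses its own constant for this.
LORA_ADAPTER_METADATA_KEY = "lora_adapter_metadata"

def pack_metadata(adapter_config: dict) -> dict:
    # safetensors metadata must be str -> str, so serialize the config to JSON.
    return {LORA_ADAPTER_METADATA_KEY: json.dumps(adapter_config)}

def parse_metadata(raw_metadata: dict) -> dict:
    # Recover the adapter config when loading; tolerate files without metadata.
    blob = (raw_metadata or {}).get(LORA_ADAPTER_METADATA_KEY)
    return json.loads(blob) if blob else {}
```

A round-trip (`parse_metadata(pack_metadata(cfg))`) returns the original config, which is what lets training-time settings like `lora_alpha` survive into loading.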
Philip Brown
6c7fad7ec8
Add community class StableDiffusionXL_T5Pipeline (#11626)
...
* Add community class StableDiffusionXL_T5Pipeline
Will be used with base model opendiffusionai/stablediffusionxl_t5
* Changed pooled_embeds to use projection instead of slice
* "make style" tweaks
* Added comments to top of code
* Apply style fixes
2025-06-09 15:57:51 -04:00
Markus Pobitzer
745199a869
[examples] flux-control: use num_training_steps_for_scheduler (#11662)
...
[examples] flux-control: use num_training_steps_for_scheduler in get_scheduler instead of args.max_train_steps * accelerator.num_processes
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-06-05 14:56:25 +05:30
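Several entries in this log (#11662, #11239, #11240, #11242, #11557) fix the same class of bug: under distributed training, the LR scheduler must be sized from the number of scheduler steps across all processes, not from `max_train_steps` alone. A minimal sketch, with names mirroring the diffusers training scripts but the function body written here as an assumption:

```python
def num_training_steps_for_scheduler(max_train_steps, num_update_steps_per_epoch,
                                     num_train_epochs, num_processes):
    # If --max_train_steps was not passed, derive it from the epoch count.
    if max_train_steps is None:
        max_train_steps = num_train_epochs * num_update_steps_per_epoch
    # Each process steps the scheduler once per optimizer step, so the
    # scheduler's total must be scaled by the number of processes.
    return max_train_steps * num_processes
```

Without the `num_processes` factor the schedule decays too quickly on multi-GPU runs, which is the instability these fixes address.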
co63oc
8183d0f16e
Fix typos in strings and comments (#11476)
...
* Fix typos in strings and comments
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
* Update src/diffusers/hooks/hooks.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update src/diffusers/hooks/hooks.py
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
* Update layerwise_casting.py
* Apply style fixes
* update
---------
Signed-off-by: co63oc <co63oc@users.noreply.github.com>
Co-authored-by: Aryan <contact.aryanvs@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-30 18:49:00 +05:30
Justin Ruan
df55f05358
Fix wrong indent for examples of controlnet script (#11632)
...
fix wrong indent for training controlnet
2025-05-29 15:26:39 -07:00
Yuanzhou Cai
89ddb6c0a4
[textual_inversion_sdxl.py] fix lr scheduler steps count (#11557)
...
fix lr scheduler steps count
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-05-29 15:25:45 +03:00
Linoy Tsaban
cc59505e26
[training docs] smol update to README files (#11616)
...
add comment to install prodigy
2025-05-27 06:26:54 -07:00
Sai Shreyas Bhavanasi
ba8dc7dc49
RegionalPrompting: Inherit from Stable Diffusion (#11525)
...
* Refactoring Regional Prompting pipeline to use Diffusion Pipeline instead of Stable Diffusion Pipeline
* Apply style fixes
2025-05-20 15:03:16 -04:00
Quentin Gallouédec
c8bb1ff53e
Use HF Papers (#11567)
...
* Use HF Papers
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-19 06:22:33 -10:00
Kenneth Gerald Hamilton
07dd6f8c0e
[train_dreambooth.py] Fix the LR Schedulers when num_train_epochs is passed in a distributed training env (#11239)
...
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-05-13 07:34:01 +05:30
Abdellah Oumida
ddd0cfb497
Fix typo in train_diffusion_orpo_sdxl_lora_wds.py (#11541)
2025-05-12 15:28:29 -10:00
scxue
784db0eaab
Add cross attention type for Sana-Sprint training in diffusers. (#11514)
...
* test permission
* Add cross attention type for Sana-Sprint.
* Add Sana-Sprint training script in diffusers.
* make style && make quality;
* modify the attention processor with `set_attn_processor` and change `SanaAttnProcessor3_0` to `SanaVanillaAttnProcessor`
* Add import for SanaVanillaAttnProcessor
* Add README file.
* Apply suggestions from code review
* style
* Update examples/research_projects/sana/README.md
---------
Co-authored-by: lawrence-cj <cjs1020440147@icloud.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-05-08 18:55:29 +05:30
Linoy Tsaban
66e50d4e24
[LoRA] make lora alpha and dropout configurable (#11467)
...
* add lora_alpha and lora_dropout
* Apply style fixes
* add lora_alpha and lora_dropout
* Apply style fixes
* revert lora_alpha until #11324 is merged
* Apply style fixes
* empty commit
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-05-08 11:54:50 +03:00
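What #11467 makes configurable: previously the scripts tied `lora_alpha` to the rank; exposing both knobs (plus `lora_dropout`) changes the effective LoRA scale. The sketch below mirrors the shape of peft's `LoraConfig` but is a plain-dataclass stand-in, not the real class:

```python
from dataclasses import dataclass

@dataclass
class LoraTrainingArgs:
    # Defaults chosen so that alpha == rank, i.e. scale 1.0, matching the
    # old hard-coded behavior; names are illustrative.
    rank: int = 4
    lora_alpha: float = 4.0
    lora_dropout: float = 0.0

def lora_scale(args: LoraTrainingArgs) -> float:
    # The LoRA update is scaled by alpha / rank, so raising alpha without
    # raising rank strengthens the adapter's contribution.
    return args.lora_alpha / args.rank
```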
RogerSinghChugh
ed4efbd63d
Update training script for txt to img sdxl with lora supp with new interpolation. (#11496)
...
* Update training script for txt to img sdxl with lora supp with new interpolation.
* ran make style and make quality.
2025-05-05 12:33:28 -04:00
Yijun Lee
9c29e938d7
Set LANCZOS as the default interpolation method for image resizing. (#11492)
...
* Set LANCZOS as the default interpolation method for image resizing.
* style: run make style and quality checks
2025-05-05 12:18:40 -04:00
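The cluster of LANCZOS commits in this log all apply the same pattern: the scripts gained an `--image_interpolation_mode` flag whose value is resolved against torchvision's `InterpolationMode` enum, defaulting to LANCZOS instead of a hard-coded BILINEAR. A dependency-free sketch of the lookup (the flag name is from the scripts; the function is an illustrative assumption):

```python
# Mode names mirror torchvision.transforms.InterpolationMode members.
SUPPORTED_MODES = ("nearest", "bilinear", "bicubic", "lanczos", "box", "hamming")

def resolve_interpolation_mode(name: str = "lanczos") -> str:
    # Case-insensitive lookup so --image_interpolation_mode LANCZOS also works.
    key = name.lower()
    if key not in SUPPORTED_MODES:
        raise ValueError(f"interpolation mode {name!r} is not supported")
    return key
```

In the real scripts the resolved mode is then passed to `transforms.Resize(resolution, interpolation=...)`.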
Sayak Paul
071807c853
[training] feat: enable quantization for hidream lora training. (#11494)
...
* feat: enable quantization for hidream lora training.
* better handle compute dtype.
* finalize.
* fix dtype.
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-05-05 20:44:35 +05:30
Evan Han
ee1516e5c7
[train_dreambooth_lora_lumina2] Add LANCZOS as the default interpolation mode for image resizing (#11491)
...
[ADD] interpolation
2025-05-05 10:41:33 -04:00
MinJu-Ha
ec9323996b
[train_dreambooth_lora_sdxl] Add --image_interpolation_mode option for image resizing (default to lanczos) (#11490)
...
feat(train_dreambooth_lora_sdxl): support --image_interpolation_mode with default to lanczos
2025-05-05 10:19:30 -04:00
Parag Ekbote
fc5e906689
[train_text_to_image_sdxl] Add LANCZOS as default interpolation mode for image resizing (#11455)
...
* Add LANCZOS as default interpolation mode.
* update script
* Update as per code review.
* make style.
2025-05-05 09:52:19 -04:00
Yash
ec3d58286d
[train_dreambooth_lora_flux_advanced] Add LANCZOS as the default interpolation mode for image resizing (#11472)
...
* [train_controlnet_sdxl] Add LANCZOS as the default interpolation mode for image resizing
* [train_dreambooth_lora_flux_advanced] Add LANCZOS as the default interpolation mode for image resizing
2025-05-02 18:14:41 -04:00
Yuanzhou
ed6cf52572
[train_dreambooth_lora_sdxl_advanced] Add LANCZOS as the default interpolation mode for image resizing (#11471)
2025-05-02 16:46:01 -04:00
co63oc
86294d3c7f
Fix typos in docs and comments (#11416)
...
* Fix typos in docs and comments
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-30 20:30:53 -10:00
Vaibhav Kumawat
daf0a23958
Add LANCZOS as default interpolation mode. (#11463)
...
* Add LANCZOS as default interpolation mode.
* LANCZOS as default interpolation
* LANCZOS as default interpolation mode
* Added LANCZOS as default interpolation mode
2025-04-30 14:22:38 -04:00
captainzz
8cd7426e56
Add StableDiffusion3InstructPix2PixPipeline (#11378)
...
* upload StableDiffusion3InstructPix2PixPipeline
* Move to community
* Add readme
* Fix images
* remove images
* Change image url
* fix
* Apply style fixes
2025-04-30 06:13:12 -04:00
Youlun Peng
58431f102c
Set LANCZOS as the default interpolation for image resizing in ControlNet training (#11449)
...
Set LANCZOS as the default interpolation for image resizing
2025-04-29 08:47:02 -04:00
Linoy Tsaban
0ac1d5b482
[Hi-Dream LoRA] fix bug in validation (#11439)
...
remove unnecessary pipeline moving to cpu in validation
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-04-28 06:22:32 -10:00
tongyu
3da98e7ee3
[train_text_to_image_lora] Better image interpolation in training scripts follow up (#11427)
...
* Update train_text_to_image_lora.py
* update_train_text_to_image_lora
2025-04-28 11:23:24 -04:00
tongyu
b3b04fefde
[train_text_to_image] Better image interpolation in training scripts follow up (#11426)
...
* Update train_text_to_image.py
* update
2025-04-28 10:50:33 -04:00
Mert Erbak
bd96a084d3
[train_dreambooth_lora.py] Set LANCZOS as default interpolation mode for resizing (#11421)
...
* Set LANCZOS as default interpolation mode for resizing
* [train_dreambooth_lora.py] Set LANCZOS as default interpolation mode for resizing
2025-04-26 01:58:41 -04:00
co63oc
f00a995753
Fix typos in strings and comments (#11407)
2025-04-24 08:53:47 -10:00
Linoy Tsaban
edd7880418
[HiDream LoRA] optimizations + small updates (#11381)
...
* 1. add pre-computation of prompt embeddings when custom prompts are used as well
2. save model card even if model is not pushed to hub
3. remove scheduler initialization from code example - not necessary anymore (it's now in the base model's config)
4. add skip_final_inference - to allow to run with validation, but skip the final loading of the pipeline with the lora weights to reduce memory reqs
* pre encode validation prompt as well
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* pre encode validation prompt as well
* Apply style fixes
* empty commit
* change default trained modules
* empty commit
* address comments + change encoding of validation prompt (before it was only pre-encoded if custom prompts are provided, but should be pre-encoded either way)
* Apply style fixes
* empty commit
* fix validation_embeddings definition
* fix final inference condition
* fix pipeline deletion in last inference
* Apply style fixes
* empty commit
* layers
* remove readme remarks on only pre-computing when instance prompt is provided and change example to 3d icons
* smol fix
* empty commit
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-24 07:48:19 +03:00
Teriks
b4be42282d
Kolors additional pipelines, community contrib (#11372)
...
* Kolors additional pipelines, community contrib
---------
Co-authored-by: Teriks <Teriks@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-04-23 11:07:27 -10:00
Ishan Dutta
4b60f4b602
[train_dreambooth_flux] Add LANCZOS as the default interpolation mode for image resizing (#11395)
2025-04-23 10:47:05 -04:00
Ameer Azam
026507c06c
Update README_hidream.md (#11386)
...
Small change: requirements_sana.txt to requirements_hidream.txt
2025-04-22 20:08:26 -04:00
Linoy Tsaban
e30d3bf544
[LoRA] add LoRA support to HiDream and fine-tuning script (#11281)
...
* initial commit
* initial commit
* initial commit
* initial commit
* initial commit
* initial commit
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
* move prompt embeds, pooled embeds outside
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: hlky <hlky@hlky.ac>
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: hlky <hlky@hlky.ac>
* fix import
* fix import and tokenizer 4, text encoder 4 loading
* te
* prompt embeds
* fix naming
* shapes
* initial commit to add HiDreamImageLoraLoaderMixin
* fix init
* add tests
* loader
* fix model input
* add code example to readme
* fix default max length of text encoders
* prints
* nullify training cond in unpatchify for temp fix to incompatible shaping of transformer output during training
* smol fix
* unpatchify
* unpatchify
* fix validation
* flip pred and loss
* fix shift!!!
* revert unpatchify changes (for now)
* smol fix
* Apply style fixes
* workaround moe training
* workaround moe training
* remove prints
* to reduce some memory, keep vae in `weight_dtype` same as we have for flux (as it's the same vae)
bbd0c161b5/examples/dreambooth/train_dreambooth_lora_flux.py (L1207)
* refactor to align with HiDream refactor
* refactor to align with HiDream refactor
* refactor to align with HiDream refactor
* add support for cpu offloading of text encoders
* Apply style fixes
* adjust lr and rank for train example
* fix copies
* Apply style fixes
* update README
* update README
* update README
* fix license
* keep prompt2,3,4 as None in validation
* remove reverse ode comment
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update examples/dreambooth/train_dreambooth_lora_hidream.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* vae offload change
* fix text encoder offloading
* Apply style fixes
* cleaner to_kwargs
* fix module name in copied from
* add requirements
* fix offloading
* fix offloading
* fix offloading
* update transformers version in reqs
* try AutoTokenizer
* try AutoTokenizer
* Apply style fixes
* empty commit
* Delete tests/lora/test_lora_layers_hidream.py
* change tokenizer_4 to load with AutoTokenizer as well
* make text_encoder_four and tokenizer_four configurable
* save model card
* save model card
* revert T5
* fix test
* remove non diffusers lumina2 conversion
---------
Co-authored-by: Bagheera <59658056+bghira@users.noreply.github.com>
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-22 11:44:02 +03:00
PromeAI
7a4a126db8
fix issue that training flux controlnet was unstable and validation r… (#11373)
...
* fix issue that training flux controlnet was unstable and validation results were unstable
* del unused code pieces, fix grammar
---------
Co-authored-by: Your Name <you@example.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-04-21 08:16:05 -10:00
Kenneth Gerald Hamilton
0dec414d5b
[train_dreambooth_lora_sdxl.py] Fix the LR Schedulers when num_train_epochs is passed in a distributed training env (#11240)
...
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
2025-04-21 12:51:03 +05:30
Linoy Tsaban
44eeba07b2
[Flux LoRAs] fix lr scheduler bug in distributed scenarios (#11242)
...
* add fix
* add fix
* Apply style fixes
---------
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-21 10:08:45 +03:00
Kazuki Yoda
ef47726e2d
Fix: StableDiffusionXLControlNetAdapterInpaintPipeline incorrectly inherited StableDiffusionLoraLoaderMixin (#11357)
...
Fix: Inherit `StableDiffusionXLLoraLoaderMixin`
`StableDiffusionXLControlNetAdapterInpaintPipeline`
used to incorrectly inherit
`StableDiffusionLoraLoaderMixin`
instead of `StableDiffusionXLLoraLoaderMixin`
2025-04-18 12:46:06 -10:00
Sayak Paul
4b868f14c1
post release 0.33.0 (#11255)
...
* post release
* update
* fix deprecations
* remaining
* update
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com>
2025-04-15 06:50:08 -10:00
Dhruv Nair
edc154da09
Update Ruff to latest Version (#10919)
...
* update
* update
* update
* update
2025-04-09 16:51:34 +05:30
Sayak Paul
fd02aad402
fix: SD3 ControlNet validation so that it runs on an A100. (#11238)
...
* fix: SD3 ControlNet validation so that it runs on an A100.
* use backend-agnostic cache and pass device.
2025-04-09 12:12:53 +05:30
Linoy Tsaban
71f34fc5a4
[Flux LoRA] fix issues in flux lora scripts (#11111)
...
* remove custom scheduler
* update requirements.txt
* log_validation with mixed precision
* add intermediate embeddings saving when checkpointing is enabled
* remove comment
* fix validation
* add unwrap_model for accelerator, torch.no_grad context for validation, fix accelerator.accumulate call in advanced script
* revert unwrap_model change temp
* add .module to address distributed training bug + replace accelerator.unwrap_model with unwrap model
* changes to align advanced script with canonical script
* make changes for distributed training + unify unwrap_model calls in advanced script
* add module.dtype fix to dreambooth script
* unify unwrap_model calls in dreambooth script
* fix condition in validation run
* mixed precision
* Update examples/advanced_diffusion_training/train_dreambooth_lora_flux_advanced.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* smol style change
* change autocast
* Apply style fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
2025-04-08 17:40:30 +03:00
Álvaro Somoza
723dbdd363
[Training] Better image interpolation in training scripts (#11206)
...
* initial
* Update examples/dreambooth/train_dreambooth_lora_sdxl.py
Co-authored-by: hlky <hlky@hlky.ac>
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: hlky <hlky@hlky.ac>
2025-04-08 12:26:07 +05:30