Sayak Paul
13e8fdecda
[feat] add load_lora_adapter() for compatible models ( #9712 )
* add first draft.
* fix
* updates.
* updates.
* updates
* updates
* updates.
* fix-copies
* lora constants.
* add tests
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* docstrings.
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2024-11-02 09:50:39 +05:30
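The `load_lora_adapter()` entry point above attaches a LoRA adapter directly at the model level rather than the pipeline level. As a reminder of what a LoRA adapter contributes mathematically, here is a minimal plain-Python sketch of the low-rank update y = Wx + (alpha/rank)·B(Ax) — illustrative shapes and names only, not the diffusers implementation:

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha=8.0, rank=2):
    """y = W x + (alpha / rank) * B (A x): the low-rank LoRA update."""
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    scale = alpha / rank
    return [b + scale * d for b, d in zip(base, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]       # frozen base weight (toy 2x2)
A = [[1.0, 0.0], [0.0, 1.0]]       # rank-r down-projection
B_init = [[0.0, 0.0], [0.0, 0.0]]  # B starts at zero, so the adapter is a no-op
print(lora_forward([3.0, 4.0], W, A, B_init))  # [3.0, 4.0]
```

Because B is zero-initialized, a freshly attached adapter leaves the base model's outputs unchanged; training then moves B away from zero.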
Dorsa Rohani
c10f875ff0
Add Diffusion Policy for Reinforcement Learning ( #9824 )
* enable cpu ability
* model creation + comprehensive testing
* training + tests
* all tests working
* remove unneeded files + clarify docs
* update train tests
* update readme.md
* remove data from gitignore
* undo cpu enabled option
* Update README.md
* update readme
* code quality fixes
* diffusion policy example
* update readme
* add pretrained model weights + doc
* add comment
* add documentation
* add docstrings
* update comments
* update readme
* fix code quality
* Update examples/reinforcement_learning/README.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update examples/reinforcement_learning/diffusion_policy.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* suggestions + safe globals for weights_only=True
* suggestions + safe weights loading
* fix code quality
* reformat file
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-11-02 09:18:44 +05:30
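The diffusion-policy example above frames action selection as denoising: sample noise, then iteratively refine it into an action. A toy sketch of that reverse loop, with a stand-in denoiser in place of the example's learned network (hypothetical numbers throughout):

```python
def denoise_action(noisy_action, num_steps=10, step_size=0.5):
    """Toy reverse-diffusion loop for a 2-D action. A real diffusion policy
    would predict the noise with a trained network conditioned on observations;
    here a stand-in pulls the action toward a fixed target instead."""
    target = [1.0, -1.0]  # stand-in for "the action the policy wants"
    action = list(noisy_action)
    for _ in range(num_steps):
        predicted_noise = [a - t for a, t in zip(action, target)]
        action = [a - step_size * n for a, n in zip(action, predicted_noise)]
    return action

refined = denoise_action([5.0, 5.0])
print(refined)  # close to the target action [1.0, -1.0]
```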
Leo Jiang
a98a839de7
Reduce Memory Cost in Flux Training ( #9829 )
* Improve NPU performance
* Improve NPU performance
* Improve NPU performance
* Improve NPU performance
* [bugfix] bugfix for npu free memory
* [bugfix] bugfix for npu free memory
* [bugfix] bugfix for npu free memory
* Reduce memory cost for flux training process
---------
Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-11-01 12:19:32 +05:30
Boseong Jeon
3deed729e6
Handling mixed precision for dreambooth flux lora training ( #9565 )
Handling mixed precision and add unwrap
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com >
2024-11-01 10:16:05 +05:30
ScilenceForest
7ffbc2525f
Update train_controlnet_flux.py,Fix size mismatch issue in validation ( #9679 )
Update train_controlnet_flux.py
Fix the inconsistency between the size of image and the size of validation_image, which causes np.stack to raise an error.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-11-01 10:15:10 +05:30
SahilCarterr
f55f1f7ee5
Fixes EMAModel "from_pretrained" method ( #9779 )
* fix from_pretrained and added test
* make style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-11-01 09:20:19 +05:30
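The EMAModel fix above concerns restoring the averaged ("shadow") weights via from_pretrained. The averaging those shadow weights come from is the standard exponential moving average; a minimal plain-Python sketch (not the diffusers class):

```python
def ema_step(shadow, params, decay=0.999):
    """One EMA update: shadow <- decay * shadow + (1 - decay) * params."""
    return [decay * s + (1.0 - decay) * p for s, p in zip(shadow, params)]

# Small decay here so convergence is visible in three steps.
shadow = [0.0, 0.0]
for _ in range(3):
    shadow = ema_step(shadow, [1.0, 2.0], decay=0.5)
print(shadow)  # [0.875, 1.75], converging toward [1.0, 2.0]
```

In practice decay is close to 1 (e.g. 0.9999), so the shadow weights change slowly and smooth out training noise.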
Leo Jiang
9dcac83057
NPU Adaption for FLUX ( #9751 )
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
* NPU implementation for FLUX
---------
Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com >
2024-11-01 09:03:15 +05:30
Abhipsha Das
c75431843f
[Model Card] standardize advanced diffusion training sd15 lora ( #7613 )
* modelcard generation edit
* add missed tag
* fix param name
* fix var
* change str to dict
* add use_dora check
* use correct tags for lora
* make style && make quality
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2024-11-01 03:23:00 +05:30
YiYi Xu
d2e5cb3c10
Revert "[LoRA] fix: lora loading when using with a device_mapped mode… ( #9823 )
Revert "[LoRA] fix: lora loading when using with a device_mapped model. (#9449 )"
This reverts commit 41e4779d98.
2024-10-31 08:19:32 -10:00
Sayak Paul
41e4779d98
[LoRA] fix: lora loading when using with a device_mapped model. ( #9449 )
* fix: lora loading when using with a device_mapped model.
* better attributing
* empty
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* Apply suggestions from code review
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com >
* minors
* better error messages.
* fix-copies
* add: tests, docs.
* add hardware note.
* quality
* Update docs/source/en/training/distributed_inference.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* fixes
* skip properly.
* fixes
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2024-10-31 21:17:41 +05:30
Sayak Paul
ff182ad669
[CI] add a big GPU marker to run memory-intensive tests separately on CI ( #9691 )
* add a marker for big gpu tests
* update
* trigger on PRs temporarily.
* onnx
* fix
* total memory
* fixes
* reduce memory threshold.
* bigger gpu
* empty
* g6e
* Apply suggestions from code review
* address comments.
* fix
* fix
* fix
* fix
* fix
* okay
* further reduce.
* updates
* remove
* updates
* updates
* updates
* updates
* fixes
* fixes
* updates.
* fix
* workflow fixes.
---------
Co-authored-by: Aryan <aryan@huggingface.co >
2024-10-31 18:44:34 +05:30
Sayak Paul
4adf6affbb
[Tests] clean up and refactor gradient checkpointing tests ( #9494 )
* check.
* fixes
* fixes
* updates
* fixes
* fixes
2024-10-31 18:24:19 +05:30
Sayak Paul
8ce37ab055
[training] use the lr when using 8bit adam. ( #9796 )
* use the lr when using 8bit adam.
* remove lr as we pack it in params_to_optimize.
---------
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com >
2024-10-31 15:51:42 +05:30
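The 8-bit Adam fix packs the learning rate into params_to_optimize (the optimizer's parameter groups) rather than passing it separately, so the per-group lr is what actually takes effect. A sketch of the param-group convention this relies on (plain Python mirroring torch.optim behavior; the names are illustrative):

```python
def resolve_lrs(param_groups, default_lr):
    """Each group uses its own 'lr' if present, else the optimizer default."""
    return [g.get("lr", default_lr) for g in param_groups]

params_to_optimize = [
    {"params": ["unet.weight"], "lr": 1e-4},          # group carries its own lr
    {"params": ["text_encoder.weight"], "lr": 5e-6},  # e.g. a smaller lr for the TE
]
print(resolve_lrs(params_to_optimize, default_lr=1e-3))  # [0.0001, 5e-06]
```

If the groups had no "lr" keys, every group would silently fall back to the constructor default — the failure mode this fix avoids.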
Sayak Paul
09b8aebd67
[training] fixes to the quantization training script and add AdEMAMix optimizer as an option ( #9806 )
* fixes
* more fixes.
2024-10-31 15:46:00 +05:30
Sayak Paul
c1d4a0dded
[CI] add new runner for testing ( #9699 )
new runner.
2024-10-31 14:58:05 +05:30
Aryan
9a92b8177c
Allegro VAE fix ( #9811 )
fix
2024-10-30 18:04:15 +05:30
Aryan
0d1d267b12
[core] Allegro T2V ( #9736 )
* update
* refactor transformer part 1
* refactor part 2
* refactor part 3
* make style
* refactor part 4; modeling tests
* make style
* refactor part 5
* refactor part 6
* gradient checkpointing
* pipeline tests (broken atm)
* update
* add coauthor
Co-Authored-By: Huan Yang <hyang@fastmail.com >
* refactor part 7
* add docs
* make style
* add coauthor
Co-Authored-By: YiYi Xu <yixu310@gmail.com >
* make fix-copies
* undo unrelated change
* revert changes to embeddings, normalization, transformer
* refactor part 8
* make style
* refactor part 9
* make style
* fix
* apply suggestions from review
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* update example
* remove attention mask for self-attention
* update
* copied from
* update
* update
---------
Co-authored-by: Huan Yang <hyang@fastmail.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2024-10-29 13:14:36 +05:30
Raul Ciotescu
c5376c5695
adds the pipeline for pixart alpha controlnet ( #8857 )
* add the controlnet pipeline for pixart alpha
---------
Co-authored-by: YiYi Xu <yixu310@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: junsongc <cjs1020440147@icloud.com >
2024-10-28 08:48:04 -10:00
Linoy Tsaban
743a5697f2
[flux dreambooth lora training] make LoRA target modules configurable + small bug fix ( #9646 )
* make lora target modules configurable and change the default
* style
* make lora target modules configurable and change the default
* fix bug when using prodigy and training te
* fix mixed precision training as proposed in https://github.com/huggingface/diffusers/pull/9565 for full dreambooth as well
* add test and notes
* style
* address sayaks comments
* style
* fix test
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-10-28 17:27:41 +02:00
Linoy Tsaban
db5b6a9630
[SD 3.5 Dreambooth LoRA] support configurable training block & layers ( #9762 )
* configurable layers
* configurable layers
* update README
* style
* add test
* style
* add layer test, update readme, add nargs
* readme
* test style
* remove print, change nargs
* test arg change
* style
* revert nargs 2/2
* address sayaks comments
* style
* address sayaks comments
2024-10-28 16:07:54 +02:00
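Configurable training blocks and layers of this kind are usually exposed as a comma-separated flag that is split into substring filters over module names. A simplified sketch of such parsing and matching — the flag name, defaults, and module names here are hypothetical, not the script's exact interface:

```python
import argparse

def parse_lora_layers(spec):
    """Split a comma-separated --lora_layers value into target-module filters."""
    return [s.strip() for s in spec.split(",") if s.strip()]

def select_modules(module_names, filters):
    """Keep modules whose name contains any requested filter substring."""
    return [m for m in module_names if any(f in m for f in filters)]

parser = argparse.ArgumentParser()
parser.add_argument("--lora_layers", type=str, default="attn.to_q,attn.to_v")
args = parser.parse_args(["--lora_layers", "attn.to_k, attn.to_v"])

filters = parse_lora_layers(args.lora_layers)
modules = ["blocks.0.attn.to_q", "blocks.0.attn.to_k", "blocks.0.attn.to_v"]
print(select_modules(modules, filters))
```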
Biswaroop
493aa74312
[Fix] remove setting lr for T5 text encoder when using prodigy in flux dreambooth lora script ( #9473 )
* fix: removed setting of text encoder lr for T5 as it's not being tuned
* fix: removed setting of text encoder lr for T5 as it's not being tuned
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com >
2024-10-28 13:07:30 +02:00
Vinh H. Pham
3b5b1c5698
[Fix] train_dreambooth_lora_flux_advanced ValueError: unexpected save model: <class 'transformers.models.t5.modeling_t5.T5EncoderModel'> ( #9777 )
fix save state of the T5 text encoder
2024-10-28 12:52:27 +02:00
Sayak Paul
fddbab7993
[research_projects] Update README.md to include a note about NF5 T5-xxl ( #9775 )
Update README.md
2024-10-26 22:13:03 +09:00
SahilCarterr
298ab6eb01
Added Support of Xlabs controlnet to FluxControlNetInpaintPipeline ( #9770 )
* added xlabs support
2024-10-25 11:50:55 -10:00
Ina
73b59f5203
[refactor] enhance readability of flux related pipelines ( #9711 )
* flux pipeline: readability enhancement.
2024-10-25 11:01:51 -10:00
Jingya HUANG
52d4449810
Add a doc for AWS Neuron in Diffusers ( #9766 )
* start draft
* add doc
* Update docs/source/en/optimization/neuron.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/neuron.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/neuron.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/neuron.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/neuron.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/neuron.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* Update docs/source/en/optimization/neuron.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* brief intro of ON
* Update docs/source/en/optimization/neuron.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2024-10-25 08:24:58 -07:00
Sayak Paul
df073ba137
[research_projects] add flux training script with quantization ( #9754 )
* add flux training script with quantization
* remove exclamation
2024-10-26 00:07:57 +09:00
Leo Jiang
94643fac8a
[bugfix] bugfix for npu free memory ( #9640 )
* Improve NPU performance
* Improve NPU performance
* Improve NPU performance
* Improve NPU performance
* [bugfix] bugfix for npu free memory
* [bugfix] bugfix for npu free memory
* [bugfix] bugfix for npu free memory
---------
Co-authored-by: 蒋硕 <jiangshuo9@h-partners.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-10-25 23:35:19 +09:00
Zhiyang Shen
435f6b7e47
[Docs] fix docstring typo in SD3 pipeline ( #9765 )
* fix docstring typo in SD3 pipeline
* fix docstring typo in SD3 pipeline
2024-10-25 16:33:35 +05:30
Sayak Paul
1d1e1a2888
Some minor updates to the nightly and push workflows ( #9759 )
* move lora integration tests to nightly./
* remove slow marker in the workflow where not needed.
2024-10-24 23:49:09 +09:00
Rachit Shah
24c7d578ba
config attribute not found error for FluxImagetoImage Pipeline for multi controlnet solved ( #9586 )
...
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-10-23 10:33:29 -10:00
Linoy Tsaban
bfa0aa4ff2
[SD3-5 dreambooth lora] update model cards ( #9749 )
* improve readme
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-10-23 23:16:53 +03:00
Álvaro Somoza
ab1b7b2080
[Official callbacks] SDXL Controlnet CFG Cutoff ( #9311 )
* initial proposal
* style
2024-10-23 13:21:56 -03:00
Fanli Lin
9366c8f84b
fix bug in require_accelerate_version_greater ( #9746 )
fix bug
2024-10-23 10:01:33 +05:30
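require_accelerate_version_greater gates tests on the installed accelerate version, and bugs in such helpers are typically comparison mistakes (string comparison, >= vs >). A minimal sketch of a strictly-greater check via tuple comparison — illustration only; a real helper should use packaging.version, which also handles pre-releases:

```python
def version_tuple(v):
    """'0.34.1' -> (0, 34, 1); good enough for plain numeric versions."""
    return tuple(int(part) for part in v.split("."))

def is_version_greater(installed, required):
    """Strictly greater, so requiring '0.31.0' rejects '0.31.0' itself."""
    return version_tuple(installed) > version_tuple(required)

print(is_version_greater("0.34.0", "0.31.0"))  # True
print(is_version_greater("0.31.0", "0.31.0"))  # False
```

Note that plain string comparison would get "0.9.0" > "0.31.0" wrong — exactly the class of bug such a fix targets.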
Sayak Paul
e45c25d03a
post-release 0.31.0 ( #9742 )
* post-release
* style
2024-10-22 20:42:30 +05:30
Dhruv Nair
76c00c7236
is_safetensors_compatible fix ( #9741 )
update
2024-10-22 19:35:03 +05:30
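is_safetensors_compatible decides whether a pipeline's weights can be loaded entirely from .safetensors files. A simplified sketch of the underlying check — every .bin weight file needs a .safetensors sibling — with hypothetical filenames (the real diffusers function also handles variants and sharded checkpoints):

```python
def is_safetensors_compatible(filenames):
    """True if every .bin weight file has a .safetensors counterpart."""
    names = set(filenames)
    for f in filenames:
        if f.endswith(".bin"):
            if f[: -len(".bin")] + ".safetensors" not in names:
                return False
    return True

print(is_safetensors_compatible(
    ["unet/model.bin", "unet/model.safetensors", "vae/model.safetensors"]))  # True
print(is_safetensors_compatible(["unet/model.bin"]))  # False
```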
Dhruv Nair
0d9d98fe5f
Fix typos ( #9739 )
* update
* update
* update
* update
* update
* update
2024-10-22 16:12:28 +05:30
Sayak Paul
60ffa84253
[bitsandbytes] follow-ups ( #9730 )
* bnb follow ups.
* add a warning when dtypes mismatch.
* fx-copies
* clear cache.
* check_if_quantized_param
* add a check on shape.
* updates
* docs
* improve readability.
* resources.
* fix
2024-10-22 16:00:05 +05:30
Álvaro Somoza
0f079b932d
[Fix] Using sharded checkpoints with gated repositories ( #9737 )
fix
2024-10-22 01:33:52 -03:00
Yu Zheng
b0ffe92230
Update sd3 controlnet example ( #9735 )
* use make_image_grid in diffusers.utils
* use checkpoint on the Hub
2024-10-22 09:02:16 +05:30
Tolga Cangöz
1b64772b79
Fix schedule_shifted_power usage in 🪆 Matryoshka Diffusion Models ( #9723 )
* [matryoshka.py] Add schedule_shifted_power attribute and update get_schedule_shifted method
2024-10-21 14:23:50 -10:00
YiYi Xu
2d280f173f
fix singlestep dpm tests ( #9716 )
fix
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-10-21 13:27:01 -10:00
G.O.D
63a0c9e5f7
[bugfix] reduce float value error when adding noise ( #9004 )
* Update train_controlnet.py
reduce float value error for bfloat16
* Update train_controlnet_sdxl.py
* style
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: yiyixuxu <yixu310@gmail.com >
2024-10-21 13:26:05 -10:00
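The noise-addition fix reduces rounding error by doing the arithmetic at higher precision and casting once, instead of operating on already-reduced bfloat16 values. The effect can be mimicked in plain Python by rounding to two decimals as a stand-in for a low-precision dtype (illustration only; the real change compares float32 against bfloat16 tensors):

```python
def to_lowp(x, digits=2):
    """Stand-in for casting to a low-precision dtype like bfloat16."""
    return round(x, digits)

alpha, sigma, latent, noise = 0.996, 0.089, 1.007, 1.003
exact = alpha * latent + sigma * noise

# Noising entirely in "low precision": every operand is rounded first.
lowp = to_lowp(to_lowp(alpha) * to_lowp(latent) + to_lowp(sigma) * to_lowp(noise))
# Noising in full precision, cast once at the end.
highp = to_lowp(exact)

print(abs(lowp - exact), abs(highp - exact))  # the second error is smaller
```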
YiYi Xu
e2d037bbf1
minor doc/test update ( #9734 )
* update some docs and tests!
---------
Co-authored-by: Aryan <contact.aryanvs@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Aryan <aryan@huggingface.co >
Co-authored-by: apolinário <joaopaulo.passos@gmail.com >
2024-10-21 13:06:13 -10:00
timdalxx
bcd61fd349
[docs] add docstrings in pipeline_stable_diffusion.py ( #9590 )
* fix the issue on flux dreambooth lora training
* update : origin main code
* docs: update pipeline_stable_diffusion docstring
* docs: update pipeline_stable_diffusion docstring
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* fix: style
* fix: style
* fix: copies
* make fix-copies
* remove extra newline
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: Aryan <aryan@huggingface.co >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-10-21 09:39:20 -07:00
Sayak Paul
d27ecc5960
[Docs] docs to xlabs controlnets. ( #9688 )
* docs to xlabs controlnets.
Co-authored-by: Anzhella Pankratova <son0shad@gmail.com >
* Apply suggestions from code review
Co-authored-by: Anzhella Pankratova <54744846+Anghellia@users.noreply.github.com >
---------
Co-authored-by: Anzhella Pankratova <son0shad@gmail.com >
Co-authored-by: Anzhella Pankratova <54744846+Anghellia@users.noreply.github.com >
2024-10-21 09:38:22 -07:00
Chenyu Li
6b915672f4
Fix typo in cogvideo pipeline ( #9722 )
Fix typo in cogvideo pipeline
2024-10-21 21:39:39 +05:30
Sayak Paul
b821f006d0
[Quantization] Add quantization support for bitsandbytes ( #9213 )
* quantization config.
* fix-copies
* fix
* modules_to_not_convert
* add bitsandbytes utilities.
* make progress.
* fixes
* quality
* up
* up
rotary embedding refactor 2: update comments, fix dtype for use_real=False (#9312 )
fix notes and dtype
up
up
* minor
* up
* up
* fix
* provide credits where due.
* make configurations work.
* fixes
* fix
* update_missing_keys
* fix
* fix
* make it work.
* fix
* provide credits to transformers.
* empty commit
* handle to() better.
* tests
* change to bnb from bitsandbytes
* fix tests
fix slow quality tests
SD3 remark
fix
complete int4 tests
add a readme to the test files.
add model cpu offload tests
warning test
* better safeguard.
* change merging status
* courtesy to transformers.
* move upper.
* better
* make the unused kwargs warning friendlier.
* harmonize changes with https://github.com/huggingface/transformers/pull/33122
* style
* training tests
* feedback part i.
* Add Flux inpainting and Flux Img2Img (#9135 )
---------
Co-authored-by: yiyixuxu <yixu310@gmail.com >
Update `UNet2DConditionModel`'s error messages (#9230 )
* refactor
[CI] Update Single file Nightly Tests (#9357 )
* update
* update
feedback.
improve README for flux dreambooth lora (#9290 )
* improve readme
* improve readme
* improve readme
* improve readme
fix one uncaught deprecation warning for accessing vae_latent_channels in VaeImagePreprocessor (#9372 )
deprecation warning vae_latent_channels
add mixed int8 tests and more tests to nf4.
[core] Freenoise memory improvements (#9262 )
* update
* implement prompt interpolation
* make style
* resnet memory optimizations
* more memory optimizations; todo: refactor
* update
* update animatediff controlnet with latest changes
* refactor chunked inference changes
* remove print statements
* update
* chunk -> split
* remove changes from incorrect conflict resolution
* remove changes from incorrect conflict resolution
* add explanation of SplitInferenceModule
* update docs
* Revert "update docs"
This reverts commit c55a50a271.
* update docstring for freenoise split inference
* apply suggestions from review
* add tests
* apply suggestions from review
quantization docs.
docs.
* Revert "Add Flux inpainting and Flux Img2Img (#9135 )"
This reverts commit 5799954dd4.
* tests
* don
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* contribution guide.
* changes
* empty
* fix tests
* harmonize with https://github.com/huggingface/transformers/pull/33546 .
* numpy_cosine_distance
* config_dict modification.
* remove if config comment.
* note for load_state_dict changes.
* float8 check.
* quantizer.
* raise an error for non-True low_cpu_mem_usage values when using quant.
* low_cpu_mem_usage shenanigans when using fp32 modules.
* don't re-assign _pre_quantization_type.
* make comments clear.
* remove comments.
* handle mixed types better when moving to cpu.
* add tests to check if we're throwing warning rightly.
* better check.
* fix 8bit test_quality.
* handle dtype more robustly.
* better message when keep_in_fp32_modules.
* handle dtype casting.
* fix dtype checks in pipeline.
* fix warning message.
* Update src/diffusers/models/modeling_utils.py
Co-authored-by: YiYi Xu <yixu310@gmail.com >
* mitigate the confusing cpu warning
---------
Co-authored-by: Vishnu V Jaddipal <95531133+Gothos@users.noreply.github.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-10-21 10:11:57 +05:30
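The bitsandbytes integration above is configured through a BitsAndBytesConfig and loads weights in 8-bit (int8) or 4-bit (NF4) form. The core trick of 8-bit absmax quantization fits in a few lines of plain Python — a toy per-tensor version, not the blockwise bitsandbytes kernels:

```python
def quantize_absmax_int8(weights):
    """Scale by the absolute maximum so values fit in [-127, 127] int8."""
    absmax = max(abs(w) for w in weights)
    scale = absmax / 127.0 if absmax else 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.5, -1.27, 0.03]
q, scale = quantize_absmax_int8(w)
restored = dequantize(q, scale)
print(q)         # int8 codes
print(restored)  # close to the original weights
```

Storing one scale per block of weights (rather than per tensor, as here) is what keeps the quantization error small in practice.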
Aryan
24281f8036
make deps_table_update to fix CI tests (#9720 )
* update
* dummy change to trigger CI; will revert
* no deps peft
* np deps
* todo
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-10-21 09:58:26 +05:30
Sayak Paul
2a1d2f6218
[Docker] pin torch versions in the dockerfiles. ( #9721 )
* pin torch versions in the dockerfiles.
* more
2024-10-20 10:44:09 +05:30