Aryan
a4df8dbc40
Update more licenses to 2025 (#11746)
update
2025-06-19 07:46:01 +05:30
Steven Liu
be2fb77dc1
[docs] PyTorch 2.0 (#11618)
* combine
* Update docs/source/en/optimization/fp16.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-05-28 09:42:41 -07:00
Steven Liu
23a4ff8488
[docs] Remove fast diffusion tutorial (#11583)
remove tutorial
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-05-20 08:56:12 -07:00
Steven Liu
e23705e557
[docs] Adapters (#11331)
* refactor adapter docs
* ip-adapter
* ip adapter
* fix toctree
* fix toctree
* lora
* images
* controlnet
* feedback
* controlnet
* t2i
* fix typo
* feedback
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-05-02 08:08:33 +05:30
Steven Liu
b848d479b1
[docs] Memory optims (#11385)
* reformat
* initial
* fin
* review
* inference
* feedback
* feedback
* feedback
2025-05-01 11:22:00 -07:00
Sayak Paul
cefa28f449
[docs] Promote AutoModel usage (#11300)
* docs: promote the usage of automodel.
* bitsandbytes
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2025-04-15 09:25:40 +05:30
Aryan
a0c22997fd
Disable PEFT input autocast when using fp8 layerwise casting (#10685)
* disable peft input autocast
* use new peft method name; only disable peft input autocast if submodule layerwise casting active
* add test; reference PeftInputAutocastDisableHook in peft docs
* add load_lora_weights test
* casted -> cast
* Update tests/lora/utils.py
2025-02-13 23:12:54 +05:30
Steven Liu
d81cc6f1da
[docs] Fix internal links (#10418)
fix links
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2025-01-02 10:11:16 -10:00
Steven Liu
2739241ad1
[docs] delete_adapters() (#10245)
delete_adapters
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-12-17 09:26:45 -08:00
Lucain
0763a7edf4
Let server decide default repo visibility (#10047)
2024-12-02 17:15:46 -10:00
Sayak Paul
31058cdaef
[LoRA] allow loras to be loaded with low_cpu_mem_usage. (#9510)
* allow loras to be loaded with low_cpu_mem_usage.
* add flux support but note https://github.com/huggingface/diffusers/pull/9510#issuecomment-2378316687
* low_cpu_mem_usage.
* fix-copies
* fix-copies again
* tests
* _LOW_CPU_MEM_USAGE_DEFAULT_LORA
* _peft_version default.
* version checks.
* version check.
* version check.
* version check.
* require peft 0.13.1.
* explicitly specify low_cpu_mem_usage=False.
* docs.
* transformers version 4.45.2.
* update
* fix
* empty
* better name initialize_dummy_state_dict.
* doc todos.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* style
* fix-copies
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-10-09 10:57:16 +05:30
suzukimain
b52119ae92
[docs] Replace runwayml/stable-diffusion-v1-5 with Lykon/dreamshaper-8 (#9428)
* [docs] Replace runwayml/stable-diffusion-v1-5 with Lykon/dreamshaper-8
Updated documentation as runwayml/stable-diffusion-v1-5 has been removed from Hugging Face.
* Update docs/source/en/using-diffusers/inpaint.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Replace with stable-diffusion-v1-5/stable-diffusion-v1-5
* Update inpaint.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-09-16 10:18:45 -07:00
omahs
6d32b29239
Fix typos (#9077)
* fix typo
2024-08-05 09:00:08 -10:00
Tolga Cangöz
7071b7461b
Errata: Fix typos & \s+$ (#9008)
* Fix typos
* chore: Fix typos
* chore: Update README.md for promptdiffusion example
* Trim trailing white spaces
* Fix a typo
* update number
* chore: update number
* Trim trailing white space
* Update README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update README.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-08-02 21:24:25 -07:00
RandomGamingDev
2afb2e0aac
Added accelerator based gradient accumulation for basic_example (#8966)
added accelerator based gradient accumulation for basic_example
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-07-26 09:35:52 +05:30
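The arithmetic behind the gradient-accumulation change above can be sketched in plain Python (an illustrative analog, not the `accelerate` API): gradients from several micro-batches are summed, and one averaged optimizer step is taken per accumulation window, which matches a single step on the larger effective batch.

```python
def sgd_with_accumulation(grads, accum_steps, lr=0.1):
    """Apply one SGD step per `accum_steps` micro-batch gradients,
    averaging the accumulated gradients before each update."""
    param, acc = 0.0, 0.0
    for i, g in enumerate(grads, start=1):
        acc += g                                # backward() adds into the grad buffer
        if i % accum_steps == 0:                # sync point: optimizer step + zero_grad
            param -= lr * (acc / accum_steps)
            acc = 0.0
    return param

# Two accumulated steps over four micro-batches equal two large-batch
# steps on the averaged gradients: -0.1*2 - 0.1*2 = -0.4
print(sgd_with_accumulation([1.0, 3.0, 2.0, 2.0], accum_steps=2))  # -0.4
```

In the actual example, this bookkeeping is delegated to `accelerate` by constructing `Accelerator(gradient_accumulation_steps=...)` and wrapping the training step in `with accelerator.accumulate(model):`.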
Sayak Paul
d87fe95f90
[Chore] add LoraLoaderMixin to the inits (#8981)
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
* loraloadermixin.
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2024-07-26 08:59:33 +05:30
YiYi Xu
62863bb1ea
Revert "[LoRA] introduce LoraBaseMixin to promote reusability." (#8976)
Revert "[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)"
This reverts commit 527430d0a4.
2024-07-25 09:10:35 -10:00
Sayak Paul
527430d0a4
[LoRA] introduce LoraBaseMixin to promote reusability. (#8774)
* introduce to promote reusability.
* up
* add more tests
* up
* remove comments.
* fix fuse_nan test
* clarify the scope of fuse_lora and unfuse_lora
* remove space
* rewrite fuse_lora a bit.
* feedback
* copy over load_lora_into_text_encoder.
* address dhruv's feedback.
* fix-copies
* fix issubclass.
* num_fused_loras
* fix
* fix
* remove mapping
* up
* fix
* style
* fix-copies
* change to SD3TransformerLoRALoadersMixin
* Apply suggestions from code review
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* up
* handle wuerstchen
* up
* move lora to lora_pipeline.py
* up
* fix-copies
* fix documentation.
* comment set_adapters().
* fix-copies
* fix set_adapters() at the model level.
* fix?
* fix
---------
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
2024-07-25 21:40:58 +05:30
RandomGamingDev
cdd12bde17
Added Code for Gradient Accumulation to work for basic_training (#8961)
added line allowing gradient accumulation to work for basic_training example
2024-07-25 08:40:53 +05:30
Sayak Paul
e8284281c1
add docs on model sharding (#8658)
* add docs on model sharding
* add entry to _toctree.
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* simplify wording
* add a note on transformer library handling
* move device placement section
* Update docs/source/en/training/distributed_inference.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-06-26 07:35:11 +05:30
Sayak Paul
bc90c28bc9
[Docs] add note on caching in fast diffusion (#8675)
* add note on caching in fast diffusion
* formatting
* Update docs/source/en/tutorials/fast_diffusion.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-06-24 10:10:45 -07:00
Tolga Cangöz
468ae09ed8
Errata - Trim trailing white space in the whole repo (#8575)
* Trim all the trailing white space in the whole repo
* Remove unnecessary empty places
* make style && make quality
* Trim trailing white space
* trim
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-06-24 18:39:15 +05:30
Yue Wu
1096f88e2b
sampling bug fix in diffusers tutorial "basic_training.md" (#8223)
sampling bug fix in basic_training.md
In the diffusers basic training tutorial, setting the manual seed argument (generator=torch.manual_seed(config.seed)) in the pipeline call inside the evaluate() function rewinds the dataloader shuffling, so the model sees the same sequence of training examples after every evaluation call and overfits. Using generator=torch.Generator(device='cpu').manual_seed(config.seed) avoids this.
2024-05-24 11:14:32 -07:00
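The rewind described in the entry above is easy to reproduce with Python's stdlib RNG as an analog: `random.seed` plays the role of `torch.manual_seed` (which reseeds the *global* RNG that shuffling draws from), while a separate `random.Random` instance plays the role of `torch.Generator(...).manual_seed(...)`.

```python
import random

def next_epoch_order(data):
    """Analog of a DataLoader shuffle that draws from the global RNG."""
    order = list(data)
    random.shuffle(order)
    return order

data = list(range(8))

# Buggy pattern: evaluation reseeds the global RNG, so the next epoch's
# shuffle comes out identical after every evaluation call.
random.seed(42)             # analog of generator=torch.manual_seed(seed)
_ = random.random()         # "sampling" during evaluate()
epoch_a = next_epoch_order(data)
random.seed(42)
_ = random.random()
epoch_b = next_epoch_order(data)
assert epoch_a == epoch_b   # rewound: same training order every epoch

# Fixed pattern: evaluation uses its own RNG instance, leaving the
# global shuffle stream untouched.
eval_rng = random.Random(42)   # analog of torch.Generator().manual_seed(seed)
_ = eval_rng.random()
epoch_c = next_epoch_order(data)  # global stream continues normally
```

The fix in the commit is exactly the second pattern, expressed with torch generators instead of stdlib ones.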
Steven Liu
33b363edfa
[docs] AutoPipeline (#7714)
* autopipeline
* edits
* feedback
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-04-22 10:15:07 -07:00
UmerHA
0302446819
Implements Blockwise lora (#7352)
* Initial commit
* Implemented block lora
- implemented block lora
- updated docs
- added tests
* Finishing up
* Reverted unrelated changes made by make style
* Fixed typo
* Fixed bug + Made text_encoder_2 scalable
* Integrated some review feedback
* Incorporated review feedback
* Fix tests
* Made every module configurable
* Adapted to new lora test structure
* Final cleanup
* Some more final fixes
- Included examples in `using_peft_for_inference.md`
- Added hint that only attns are scaled
- Removed NoneTypes
- Added test to check mismatching lens of adapter names / weights raise error
* Update using_peft_for_inference.md
* Update using_peft_for_inference.md
* Make style, quality, fix-copies
* Updated tutorial;Warning if scale/adapter mismatch
* floats are forwarded as-is; changed tutorial scale
* make style, quality, fix-copies
* Fixed typo in tutorial
* Moved some warnings into `lora_loader_utils.py`
* Moved scale/lora mismatch warnings back
* Integrated final review suggestions
* Empty commit to trigger CI
* Reverted empty commit to trigger CI
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
2024-03-29 21:15:57 +05:30
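The blockwise LoRA work above lets a single adapter scale be replaced by a dict of per-block scales (with the hint that only attention modules are scaled). As an illustrative sketch only, with hypothetical layer names and not the diffusers implementation, expanding such a dict to per-attention-layer scales might look like:

```python
def expand_block_scales(scales, layer_to_block, default=1.0):
    """Map each attention layer to the scale of its block,
    falling back to `default` for blocks the dict omits."""
    return {layer: scales.get(block, default)
            for layer, block in layer_to_block.items()}

# Hypothetical UNet layout: attention layer name -> block it belongs to.
layers = {
    "down_blocks.0.attn": "down",
    "mid_block.attn": "mid",
    "up_blocks.1.attn": "up",
}
print(expand_block_scales({"down": 0.5, "up": 2.0}, layers))
# {'down_blocks.0.attn': 0.5, 'mid_block.attn': 1.0, 'up_blocks.1.attn': 2.0}
```

This also shows why the PR warns on scale/adapter mismatches: blocks named in the dict but absent from the model would otherwise be silently ignored.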
Sayak Paul
82441460ef
[Docs] add missing output image (#7425)
add missing output image
2024-03-21 09:22:06 -07:00
Steven Liu
3ce905c9d0
[docs] Merge LoRAs (#7213)
* merge loras
* feedback
* torch.compile
* feedback
2024-03-07 11:28:50 -08:00
Sayak Paul
b9e1c30d0e
[Docs] more elaborate example for peft torch.compile (#7161)
more elaborate example for peft torch.compile
2024-03-04 08:55:30 +05:30
Younes Belkada
0ca7b68198
[PEFT / docs] Add a note about torch.compile (#6864)
* Update using_peft_for_inference.md
* add more explanation
2024-02-14 02:29:29 +01:00
Sayak Paul
30e5e81d58
change to 2024 in the license (#6902)
change to 2024
2024-02-08 08:19:31 -10:00
Sayak Paul
8d7dc85312
add note about serialization (#6764)
2024-01-31 12:45:40 +05:30
Steven Liu
9d767916da
[docs] Fast diffusion (#6470)
* edits
* fix
* feedback
2024-01-09 08:08:31 -08:00
Steven Liu
acd926f4f2
[docs] Fix local links (#6440)
fix local links
2024-01-04 09:59:11 -08:00
Sayak Paul
61d223c884
add: CUDA graph details. (#6408)
2023-12-31 13:43:26 +05:30
Sayak Paul
203724e9d9
[Docs] add note on fp16 in fast diffusion (#6380)
add note on fp16
2023-12-29 09:38:50 +05:30
Sayak Paul
034b39b8cb
[docs] add details concerning diffusers-specific bits. (#6375)
add details concerning diffusers-specific bits.
2023-12-28 23:12:49 +05:30
Sayak Paul
d4f10ea362
[Diffusion fast] add doc for diffusion fast (#6311)
* add doc for diffusion fast
* add entry to _toctree
* Apply suggestions from code review
* fix title
* fix: title entry
* add note about fuse_qkv_projections
2023-12-26 22:19:55 +05:30
Younes Belkada
3aba99af8f
[Peft / Lora] Add adapter_names in fuse_lora (#5823)
* add adapter_name in fuse
* add test
* up
* fix CI
* adapt from suggestion
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* change to `require_peft_version_greater`
* change variable names in test
* Update src/diffusers/loaders/lora.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
* break into 2 lines
* final comments
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com>
2023-12-26 16:54:47 +01:00
M. Tolga Cangöz
c72a173906
Revert "[Docs] Update and make improvements" (#5858)
* Revert "[`Docs`] Update and make improvements (#5819)"
This reverts commit c697f52476.
* Update README.md
* Update memory.md
* Update basic_training.md
* Update write_own_pipeline.md
* Update fp16.md
* Update basic_training.md
* Update write_own_pipeline.md
* Update write_own_pipeline.md
2023-11-20 10:22:21 -08:00
M. Tolga Cangöz
c697f52476
[Docs] Update and make improvements (#5819)
Update and make improvements
2023-11-16 13:47:25 -08:00
M. Tolga Cangöz
51fd3dd206
[Docs] Remove .to('cuda') before .enable_model_cpu_offload() (#5795)
Remove .to('cuda') before cpu_offload, trim trailing whitespaces
2023-11-14 17:20:54 -08:00
apolinário
6e68c71503
Add adapter fusing + PEFT to the docs (#5662)
* Add adapter fusing + PEFT to the docs
* Update docs/source/en/tutorials/using_peft_for_inference.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/tutorials/using_peft_for_inference.md
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/tutorials/using_peft_for_inference.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tutorials/using_peft_for_inference.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tutorials/using_peft_for_inference.md
* Update docs/source/en/tutorials/using_peft_for_inference.md
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-08 18:26:53 +01:00
M. Tolga Cangöz
5c75a5fbc4
[Docs] Fix typos, improve, update at Tutorials page (#5586)
* Fix typos, improve, update
* Update autopipeline.md
* Update docs/source/en/tutorials/using_peft_for_inference.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tutorials/using_peft_for_inference.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Update docs/source/en/tutorials/using_peft_for_inference.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-11-01 10:40:47 -07:00
Sayak Paul
b4cbbd5ed2
[Examples] Follow up of #5393 (#5420)
* fix: create_repo()
* Empty-Commit
2023-10-17 12:07:39 +05:30
Sayak Paul
cc12f3ec92
[Examples] Update with HFApi (#5393)
* update training examples to use HFAPI.
* update training example.
* reflect the changes in the korean version too.
* Empty-Commit
2023-10-16 19:34:46 +05:30
Sayak Paul
5495073faf
[Docs] add docs on peft diffusers integration (#5359)
* add docs on peft diffusers integration
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: pacman100 <13534540+pacman100@users.noreply.github.com>
* update URLs.
* Apply suggestions from code review
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Apply suggestions from code review
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* minor changes
* Update docs/source/en/tutorials/using_peft_for_inference.md
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* reflect the latest changes.
* note about update.
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: pacman100 <13534540+pacman100@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-10-16 18:41:37 +05:30
Sayak Paul
d67eba0f31
[Utility] adds an image grid utility (#4576)
* add: utility for image grid.
* add: return type.
* change necessary places.
* add to utility page.
2023-08-12 10:34:51 +05:30
Steven Liu
ae82a3eb34
[docs] AutoPipeline tutorial (#4273)
* first draft
* tidy api
* apply feedback
* mdx to md
* apply feedback
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-08-02 10:32:02 -07:00
camenduru
c6ae9b7df6
Where did this 'x' come from, Elon? (#4277)
* why mdx?
* why mdx?
* why mdx?
* no x for kandinsky either
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-07-26 18:18:14 +02:00
Aisuko
f911287cc9
fix/doc-code: Updating to the latest version parameters (#3924)
fix/doc-code: update to use the new parameter
Signed-off-by: GitHub <noreply@github.com>
2023-07-03 12:28:05 +02:00