Patrick von Platen
459b8ca81a
Research folder ( #1553 )
...
* Research folder
* Update examples/research_projects/README.md
* up
2022-12-05 17:58:35 +01:00
Suraj Patil
bce65cd13a
[refactor] make set_attention_slice recursive ( #1532 )
...
* make attn slice recursive
* remove set_attention_slice from blocks
* fix copies
* make enable_attention_slicing base class method of DiffusionPipeline
* fix set_attention_slice
* fix set_attention_slice
* fix copies
* add tests
* up
* up
* up
* update
* up
* uP
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-05 17:31:04 +01:00
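The refactor above replaces per-block slicing logic with one recursive setter. A minimal sketch of that pattern, with illustrative class and attribute names (not the actual diffusers API):

```python
# Hypothetical sketch of the recursive pattern from #1532: a single
# top-level set_attention_slice walks every child block instead of each
# block re-implementing the logic. Names here are illustrative.

class Block:
    def __init__(self, children=()):
        self.children = list(children)
        self.slice_size = None

    def set_attention_slice(self, slice_size):
        # Apply to this block, then recurse into every child block.
        self.slice_size = slice_size
        for child in self.children:
            child.set_attention_slice(slice_size)

leaf_a, leaf_b = Block(), Block()
root = Block([Block([leaf_a]), leaf_b])
root.set_attention_slice(4)
print(leaf_a.slice_size)  # 4
```

Making `enable_attention_slicing` a base-class method then amounts to calling this one setter on the top-level model.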
Adalberto
e289998932
fix mask discrepancies in train_dreambooth_inpaint ( #1529 )
...
The mask and instance image were being cropped in different ways when --center_crop was not passed, causing the model to learn to ignore the mask in some cases. This PR fixes that and generates more consistent results.
2022-12-05 17:26:36 +01:00
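The fix boils down to computing one crop box and applying it to both the instance image and its mask, so the two stay aligned. A sketch with a hypothetical helper (not the script's actual code):

```python
# Illustrative fix: derive ONE crop box and apply it to both the image
# and the mask, instead of cropping each independently. The function
# name is hypothetical.

def center_crop_box(width, height, size):
    """Return a (left, top, right, bottom) box for a centered size x size crop."""
    left = (width - size) // 2
    top = (height - size) // 2
    return (left, top, left + size, top + size)

box = center_crop_box(768, 512, 512)
# With PIL images you would then call image.crop(box) and mask.crop(box)
# using the same box.
print(box)  # (128, 0, 640, 512)
```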
Suraj Patil
634be6e53d
[examples] use from_pretrained to load scheduler ( #1549 )
...
use from_pretrained to load scheduler
2022-12-05 15:32:24 +01:00
allo-
d1bcbf38ca
[textual_inversion] Add an option for only saving the embeddings ( #781 )
...
[textual_inversion] Add an option to only save embeddings
Add a command-line option --only_save_embeds to the example script so the
full model is not saved. Only the learned embeddings are saved; they can
be added to the original model at runtime in the same way they are
created in the training script.
Saving the full model is forced when --push_to_hub is used. (Implements #759 )
2022-12-05 14:45:13 +01:00
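The idea behind `--only_save_embeds` is to persist just the newly learned embedding row for the placeholder token rather than the whole pipeline. A rough sketch, with an illustrative token name and file key (not the script's exact code):

```python
# Sketch of the --only_save_embeds idea: save only the learned embedding
# vector for the new concept token, not the full model. Token name, key,
# and file path are illustrative.
import os
import tempfile
import torch

embeddings = torch.nn.Embedding(10, 4)   # stand-in for the text encoder's token embeddings
placeholder_token_id = 7                 # id assigned to the new concept token

learned = embeddings.weight[placeholder_token_id].detach().cpu()
path = os.path.join(tempfile.gettempdir(), "learned_embeds.bin")
torch.save({"<my-concept>": learned}, path)

# At runtime the vector can be copied back into a model's embedding table.
restored = torch.load(path)["<my-concept>"]
assert torch.equal(restored, learned)
```

The resulting file is a few kilobytes instead of gigabytes, which is why saving the full model is only forced when `--push_to_hub` is used.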
Patrick von Platen
df7cd5fe3f
Update bug-report.yml
2022-12-05 14:39:35 +01:00
Naga Sai Abhinay
c28d6945b8
[Community Pipeline] Checkpoint Merger based on Automatic1111 ( #1472 )
...
* Add checkpoint_merger pipeline
* Added missing docs for a parameter.
* Formatting fixes.
* Fixed code quality issues.
* Bug fix: Off by 1 index
* Added docs for pipeline
2022-12-05 14:36:55 +01:00
Patrick von Platen
5177e65ff0
Update bug-report.yml
2022-12-05 14:17:04 +01:00
Patrick von Platen
60ac5fc235
Update bug-report.yml
2022-12-05 14:13:02 +01:00
Patrick von Platen
19b01749f0
Update bug-report.yml
2022-12-05 14:10:25 +01:00
Patrick von Platen
a980ef2f08
Update bug-report.yml ( #1548 )
...
* Update bug-report.yml
* Update bug-report.yml
* Update bug-report.yml
2022-12-05 14:03:54 +01:00
Patrick von Platen
7932971542
[Upscaling] Fix batch size ( #1525 )
2022-12-05 13:28:55 +01:00
Benjamin Lefaudeux
720dbfc985
Compute embedding distances with torch.cdist ( #1459 )
...
small but mighty
2022-12-05 12:37:05 +01:00
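`torch.cdist` computes all pairwise distances in a single vectorized call, which is what makes this change "small but mighty" compared to per-pair subtraction and norm loops:

```python
# torch.cdist returns the pairwise distance matrix between two sets of
# row vectors in one call (Euclidean, p=2, by default).
import torch

a = torch.tensor([[0.0, 0.0], [3.0, 4.0]])
b = torch.tensor([[0.0, 0.0]])

d = torch.cdist(a, b)   # shape (2, 1): distance of each row of a to each row of b
print(d)                # tensor([[0.], [5.]])
```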
Patrick von Platen
513fc68104
[Stable Diffusion Inpaint] Allow tensor as input image & mask ( #1527 )
...
up
2022-12-05 12:18:02 +01:00
Anton Lozhkov
cc22bda5f6
[CI] Add slow MPS tests ( #1104 )
...
* [CI] Add slow MPS tests
* fix yml
* temporarily resolve caching
* Tests: fix mps crashes.
* Skip test_load_pipeline_from_git on mps.
Not compatible with float16.
* Increase tolerance, use CPU generator, alt. slices.
* Move to nightly
* style
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-05 11:50:24 +01:00
Ilmari Heikkinen
daebee0963
Add xformers attention to VAE ( #1507 )
...
* Add xformers attention to VAE
* Simplify VAE xformers code
* Update src/diffusers/models/attention.py
Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-03 15:08:11 +01:00
Matthieu Bizien
ae368e42d2
[Proposal] Support saving to safetensors ( #1494 )
...
* Add parameter safe_serialization to DiffusionPipeline.save_pretrained
* Add option safe_serialization on ModelMixin.save_pretrained
* Add test test_save_safe_serialization
* Black
* Re-trigger the CI
* Fix doc-builder
* Validate files are saved as safetensor in test_save_safe_serialization
2022-12-02 18:33:16 +01:00
Patrick von Platen
cf4664e885
fix tests
2022-12-02 17:27:58 +00:00
Patrick von Platen
7222a8eadf
make style
2022-12-02 17:18:50 +00:00
bachr
155d272cc1
Update FlaxLMSDiscreteScheduler ( #1474 )
...
- Add the missing `scale_model_input` method to `FlaxLMSDiscreteScheduler`
- Use `jnp.append` for appending to `state.derivatives`
- Use `jnp.delete` to pop from `state.derivatives`
2022-12-02 18:18:30 +01:00
Adalberto
2b30b1090f
Create train_dreambooth_inpaint.py ( #1091 )
...
* Create train_dreambooth_inpaint.py
train_dreambooth.py adapted to work with the inpaint model, generating random masks during training
* Update train_dreambooth_inpaint.py
refactored train_dreambooth_inpaint with black
* Update train_dreambooth_inpaint.py
* Update train_dreambooth_inpaint.py
* Update train_dreambooth_inpaint.py
Fix prior preservation
* add instructions to readme, fix SD2 compatibility
2022-12-02 18:06:57 +01:00
Antoine Bouthors
3ad49eeedd
Fixed mask+masked_image in sd inpaint pipeline ( #1516 )
...
* Fixed mask+masked_image in sd inpaint pipeline
Those were left unset when inputs are not PIL images
* Fixed formatting
2022-12-02 17:51:51 +01:00
Patrick von Platen
769f0be8fb
Finalize 2nd order schedulers ( #1503 )
...
* up
* up
* finish
* finish
* up
* up
* finish
2022-12-02 16:38:35 +01:00
Pedro Gabriel Gengo Lourenço
4f596599f4
Fix training docs to install datasets ( #1476 )
...
Fixed doc to install from training packages
2022-12-02 15:52:04 +01:00
Dhruv Naik
f57a2e0745
Fix Imagic example ( #1520 )
...
fix typo, remove incorrect arguments from .train()
2022-12-02 15:06:04 +01:00
Pedro Cuenca
3ceaa280bd
Do not use torch.long in mps ( #1488 )
...
* Do not use torch.long in mps
Addresses #1056 .
* Use torch.int instead of float.
* Propagate changes.
* Do not silently change float -> int.
* Propagate changes.
* Apply suggestions from code review
Co-authored-by: Anton Lozhkov <anton@huggingface.co>
2022-12-02 13:10:17 +01:00
Benjamin Lefaudeux
a816a87a09
[refactor] Making the xformers mem-efficient attention activation recursive ( #1493 )
...
* Moving the mem-efficient attention activation to the top + recursive
* black, too bad there's no pre-commit?
Co-authored-by: Benjamin Lefaudeux <benjamin@photoroom.com>
2022-12-02 12:30:01 +01:00
Patrick von Platen
f21415d1d9
Update conversion script to correctly handle SD 2 ( #1511 )
...
* Conversion SD 2
* finish
2022-12-02 12:28:01 +01:00
Patrick von Platen
22b9cb086b
[From pretrained] Allow returning local path ( #1450 )
...
Allow returning local path
2022-12-02 12:26:39 +01:00
Will Berman
25f850a23b
[docs] [dreambooth training] num_class_images clarification ( #1508 )
2022-12-02 12:12:28 +01:00
Will Berman
b25ae2e6ab
[docs] [dreambooth training] accelerate.utils.write_basic_config ( #1513 )
2022-12-02 12:11:18 +01:00
Suraj Patil
0f1c24664c
fix heun scheduler ( #1512 )
2022-12-01 22:39:57 +01:00
Anton Lozhkov
e65b71aba4
Add an explicit --image_size to the conversion script ( #1509 )
...
* Add an explicit `--image_size` to the conversion script
* style
2022-12-01 19:22:48 +01:00
Akash Gokul
a6a25ceb61
Fix Flax flip_sin_to_cos ( #1369 )
...
* Fix Flax flip_sin_to_cos
* Adding flip_sin_to_cos
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
2022-12-01 18:57:01 +01:00
Suraj Patil
b85bb0753e
support v prediction in other schedulers ( #1505 )
...
* support v prediction in other schedulers
* v heun
* add tests for v pred
* fix tests
* fix test euler a
* v ddpm
2022-12-01 18:10:39 +01:00
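For context, "v prediction" (from Salimans & Ho's progressive distillation paper) reparameterizes the model output: with `x_t = alpha*x0 + sigma*eps` and `alpha**2 + sigma**2 == 1`, the model predicts `v = alpha*eps - sigma*x0`, and a scheduler recovers `x0 = alpha*x_t - sigma*v`. A minimal numeric check of that identity (not scheduler code):

```python
# Numeric sanity check of the v-prediction identities:
#   x_t = alpha*x0 + sigma*eps,  v = alpha*eps - sigma*x0
#   =>  x0 = alpha*x_t - sigma*v   (given alpha^2 + sigma^2 = 1)
import math

alpha, sigma = math.cos(0.3), math.sin(0.3)   # alpha**2 + sigma**2 == 1
x0, eps = 0.8, -1.2

x_t = alpha * x0 + sigma * eps
v = alpha * eps - sigma * x0

x0_rec = alpha * x_t - sigma * v
print(round(x0_rec, 6))  # 0.8
```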
fboulnois
52eb0348e5
Standardize on using image argument in all pipelines ( #1361 )
...
* feat: switch core pipelines to use image arg
* test: update tests for core pipelines
* feat: switch examples to use image arg
* docs: update docs to use image arg
* style: format code using black and doc-builder
* fix: deprecate use of init_image in all pipelines
2022-12-01 16:55:22 +01:00
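The deprecation step keeps old call sites working: the pipeline still accepts the old `init_image` keyword, warns, and forwards the value to `image`. An illustrative sketch of that pattern (not the library's actual implementation):

```python
# Illustrative deprecation shim in the spirit of #1361: accept the old
# init_image keyword, warn, and route it to the new image argument.
import warnings

def run_pipeline(image=None, **kwargs):
    if "init_image" in kwargs:
        warnings.warn(
            "init_image is deprecated, use image instead",
            DeprecationWarning,
        )
        image = kwargs.pop("init_image")
    return image

print(run_pipeline(init_image="img"))  # img
```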
Suraj Patil
2bbf8b67a7
simplify AttentionBlock ( #1492 )
2022-12-01 16:40:59 +01:00
Patrick von Platen
5a5bf7ef5a
[Deprecate] Correct stacklevel ( #1483 )
...
* Correct stacklevel
* fix
2022-12-01 16:28:10 +01:00
Anton Lozhkov
9276b1e148
Replace deprecated hub utils in train_unconditional_ort ( #1504 )
...
* Replace deprecated hub utils in `train_unconditional_ort`
* typo
2022-12-01 16:00:52 +01:00
regisss
2579d42158
Add doc for Stable Diffusion on Habana Gaudi ( #1496 )
...
* Add doc for Stable Diffusion on Habana Gaudi
* Make style
* Add benchmark
* Center-align columns in the benchmark table
2022-12-01 15:43:48 +01:00
Anton Lozhkov
999044596a
Bump to 0.10.0.dev0 + deprecations ( #1490 )
2022-11-30 15:27:56 +01:00
Pedro Cuenca
eeeb28a9ad
Remove reminder comment ( #1489 )
...
Remove reminder comment.
2022-11-30 14:59:54 +01:00
Patrick von Platen
c05356497a
Add better docs xformers ( #1487 )
...
* Add better docs xformers
* update
* Apply suggestions from code review
* fix
2022-11-30 13:57:45 +01:00
Patrick von Platen
1d4ad34af0
[Dreambooth] Make compatible with alt diffusion ( #1470 )
...
* [Dreambooth] Make compatible with alt diffusion
* make style
* add example
2022-11-30 13:48:17 +01:00
Patrick von Platen
20ce68f945
Fix dtype model loading ( #1449 )
...
* Add test
* up
* no bfloat16 for mps
* fix
* rename test
2022-11-30 11:31:50 +01:00
Patrick von Platen
110ffe2589
Allow saving trained betas ( #1468 )
2022-11-30 10:05:51 +01:00
Anton Lozhkov
0b7225e918
Add ort_nightly_directml to the onnxruntime candidates ( #1458 )
...
* Add `ort_nightly_directml` to the `onnxruntime` candidates
* style
2022-11-29 14:00:41 +01:00
Anton Lozhkov
db7b7bd983
[Train unconditional] Unwrap model before EMA ( #1469 )
2022-11-29 13:45:42 +01:00
Rohan Taori
6a0a312370
Fix bug in half precision for DPMSolverMultistepScheduler ( #1349 )
...
* cast to float for quantile method
* add fp16 test for DPMSolverMultistepScheduler fix
* formatting update
2022-11-29 13:29:23 +01:00
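The underlying issue: `torch.quantile` does not support `float16` inputs, so the fix casts to `float32` for the quantile and casts back. A sketch of that pattern (the surrounding scheduler logic is omitted):

```python
# torch.quantile rejects float16, so cast up for the computation and
# restore the original dtype afterwards (the pattern behind #1349).
import torch

sample = torch.randn(1, 16, dtype=torch.float16)

threshold = sample.abs().float().quantile(0.995, dim=1).to(sample.dtype)

print(threshold.dtype)  # torch.float16
```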
Ilmari Heikkinen
c28d3c82ce
StableDiffusion: Decode latents separately to run larger batches ( #1150 )
...
* StableDiffusion: Decode latents separately to run larger batches
* Move VAE sliced decode under enable_vae_sliced_decode and vae.enable_sliced_decode
* Rename sliced_decode to slicing
* fix whitespace
* fix quality check and repository consistency
* VAE slicing tests and documentation
* API doc hooks for VAE slicing
* reformat vae slicing tests
* Skip VAE slicing for one-image batches
* Documentation tweaks for VAE slicing
Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
2022-11-29 13:28:14 +01:00
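Sliced decoding trades a little speed for a much smaller peak memory footprint: latents are decoded one at a time and the images concatenated. A minimal sketch with a dummy decoder standing in for `vae.decode` (not the actual diffusers code):

```python
# Sketch of the VAE-slicing idea from #1150: decode latents one slice at
# a time and concatenate, so only one image's activations are live at
# once. decode() here is a dummy stand-in for the real VAE decoder.
import torch

def decode(z):
    return z * 2.0  # dummy decoder; the real one is vae.decode

def sliced_decode(latents):
    slices = [decode(latents[i : i + 1]) for i in range(latents.shape[0])]
    return torch.cat(slices)

latents = torch.randn(4, 3)
assert torch.equal(sliced_decode(latents), decode(latents))
```

For one-image batches there is nothing to slice, which is why the final commit skips slicing in that case.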