mirror of https://github.com/huggingface/diffusers.git synced 2026-01-27 17:22:53 +03:00
Commit Graph

410 Commits

Author SHA1 Message Date
Patrick von Platen
04ad948673 make style 2 - sorry 2023-01-27 16:54:40 +02:00
Patrick von Platen
97ef5e0665 make style 2023-01-27 16:52:04 +02:00
Patrick von Platen
0c39f53cbb Allow lora from pipeline (#2129)
* [LoRA] All to use in inference with pipeline

* [LoRA] allow cross attention kwargs passed to pipeline

* finish
2023-01-27 08:19:46 +01:00
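The two LoRA commits above (#2128 and #2129) make LoRA weights usable directly from a pipeline at inference time, with cross_attention_kwargs forwarded down to the attention processors. A minimal sketch assuming the diffusers API of this period; the model id and the LoRA weight path are placeholders:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Load LoRA attention-processor weights into the UNet (placeholder path or Hub repo).
pipe.unet.load_attn_procs("path/to/lora_weights")
# "scale" blends the LoRA contribution into the frozen weights; 0.0 effectively
# disables the LoRA again, which is what #2128 makes possible.
image = pipe("a photo of a sks dog", cross_attention_kwargs={"scale": 0.5}).images[0]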
Patrick von Platen
f653ded7ed [LoRA] Make sure LoRA can be disabled after it's run (#2128) 2023-01-26 21:26:11 +01:00
Patrick von Platen
09779cbb40 [Bump version] 0.13.0dev0 & Deprecate predict_epsilon (#2109)
* [Bump version] 0.13

* Bump model up

* up
2023-01-25 17:59:02 +01:00
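The deprecation in #2109 replaces the scheduler-level predict_epsilon flag with the more general prediction_type argument. A rough before/after sketch, using the argument names as I recall them from this release:

from diffusers import DDPMScheduler

# Deprecated style (still accepted during the deprecation window):
#   scheduler = DDPMScheduler(predict_epsilon=True)
# Replacement style:
scheduler = DDPMScheduler(prediction_type="epsilon")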
Patrick von Platen
6ba2231d72 Reproducibility 3/3 (#1924)
* make tests deterministic

* run slow tests

* prepare for testing

* finish

* refactor

* add print statements

* finish more

* correct some test failures

* more fixes

* set up to correct tests

* more corrections

* up

* fix more

* more prints

* add

* up

* up

* up

* uP

* uP

* more fixes

* uP

* up

* up

* up

* up

* fix more

* up

* up

* clean tests

* up

* up

* up

* more fixes

* Apply suggestions from code review

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* make

* correct

* finish

* finish

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-25 13:44:22 +01:00
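The reproducibility series that this commit closes standardizes the tests and examples on explicit, seeded torch.Generator objects, typically created on the CPU, so outputs can be reproduced across runs and devices. A minimal sketch; the model id, prompt, and seed are arbitrary examples:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# A seeded CPU generator fixes the sampled latents, and therefore the output image.
generator = torch.Generator(device="cpu").manual_seed(0)
image = pipe("an astronaut riding a horse", generator=generator).images[0]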
Patrick von Platen
b562b6611f Allow directly passing text embeddings to Stable Diffusion Pipeline for prompt weighting (#2071)
* add text embeds to sd

* add text embeds to sd

* finish tests

* finish

* finish

* make style

* fix tests

* make style

* make style

* up

* better docs

* fix

* fix

* new try

* up

* up

* finish
2023-01-25 12:29:49 +01:00
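#2071 above lets precomputed text embeddings be passed straight into the Stable Diffusion pipeline, which is the hook needed for prompt weighting. A hedged sketch of the flow, assuming the prompt_embeds argument name added here; the model id is a placeholder:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Encode the prompt manually (this is where a prompt-weighting tool would rescale
# individual token embeddings before handing them to the pipeline).
text_inputs = pipe.tokenizer(
    "a fantasy landscape",
    padding="max_length",
    max_length=pipe.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
with torch.no_grad():
    prompt_embeds = pipe.text_encoder(text_inputs.input_ids)[0]
image = pipe(prompt_embeds=prompt_embeds).images[0]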
Patrick von Platen
69c76173fa fix tests 2023-01-22 14:31:05 +02:00
Patrick von Platen
926b34b40c improve tests 2023-01-22 14:30:15 +02:00
Patrick von Platen
59b7339a84 [From pretrained] Don't download .safetensors files if safetensors is… (#2057)
* [From pretrained] Don't download .safetensors files if safetensors is not available

* tests

* tests

* up
2023-01-21 15:51:33 +01:00
Suraj Patil
aa265f74bd [StableDiffusionInstructPix2Pix] use cpu generator in slow tests (#2051)
* use cpu generator in slow tests

* fix get_inputs
2023-01-20 21:43:00 +02:00
Lucain
bcb476797c Remove modelcards dependency (#2050)
* Switch to huggingface_hub.ModelCard

* Remove modelcards dependency in favor of Jinja2
2023-01-20 16:39:42 +01:00
Suraj Patil
e5ff75540c Add InstructPix2Pix pipeline (#2040)
* begin pix2pix

* fix

* cfg image_latents

* fix some docstr

* fix

* fix

* hack

* fix

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add comments to explain the hack

* move __call__ to the top

* doc

* remove height and width

* remove deprecations

* fix doc str

* quality

* fast tests

* change model id

* fast tests

* fix test

* address Pedro's comments

* copyright

* Simple doc page.

* Apply suggestions from code review

* style

* Remove import

* address some review comments

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* style

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-20 16:25:46 +01:00
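#2040 above adds the InstructPix2Pix editing pipeline. A short usage sketch; the checkpoint id is the one the project docs pointed at around this time (treat it as an assumption), and the input image path is a placeholder:

import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")
init_image = Image.open("input.png").convert("RGB")
# image_guidance_scale controls how closely the edit sticks to the input image.
edited = pipe("make it look like winter", image=init_image, image_guidance_scale=1.5).images[0]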
Anton Lozhkov
7c82a16fc1 Fix EMA for multi-gpu training in the unconditional example (#1930)
* improve EMA

* style

* one EMA model

* quality

* fix tests

* fix test

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* reorganise the unconditional script

* backwards compatibility

* default to init values for some args

* fix ort script

* issubclass => isinstance

* update state_dict

* docstr

* doc

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* use .to if device is passed

* deprecate device

* make flake happy

* fix typo

Co-authored-by: patil-suraj <surajp815@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-19 11:35:55 +01:00
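#1930 above reworks EMAModel so that it tracks a parameter list (rather than wrapping a whole module) and behaves correctly in multi-GPU training. A best-effort sketch of the new usage; the constructor details may differ slightly from the exact version in this commit:

from diffusers import UNet2DModel
from diffusers.training_utils import EMAModel

model = UNet2DModel()  # default config, just for illustration
ema_model = EMAModel(model.parameters(), decay=0.9999)
for _ in range(10):
    # ... one optimizer step on `model` would happen here ...
    ema_model.step(model.parameters())
# Copy the averaged weights into the model before evaluation or saving.
ema_model.copy_to(model.parameters())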
Patrick von Platen
013955b5a7 [Dit] Fix dit tests (#2034)
* [Dit] Fix dit tests

* up
2023-01-19 01:50:22 +01:00
Patrick von Platen
ed616bd8a8 [LoRA] Add LoRA training script (#1884)
* [Lora] first upload

* add first lora version

* upload

* more

* first training

* up

* correct

* improve

* finish loaders and inference

* up

* up

* fix more

* up

* finish more

* finish more

* up

* up

* change year

* revert year change

* Change lines

* Add cloneofsimo as co-author.

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>

* finish

* fix docs

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>

* upload

* finish

Co-authored-by: Simo Ryu <cloneofsimo@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
2023-01-18 18:05:51 +01:00
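#1884 above ships the LoRA training script together with the loader mixin that stores LoRA attention processors separately from the base model. A small sketch of the save/load round trip, assuming LoRA weights already exist at a placeholder path:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Load previously trained LoRA attention processors, then re-serialize them;
# only the small LoRA weights are written out, not the full UNet.
pipe.unet.load_attn_procs("path/to/lora_weights")
pipe.unet.save_attn_procs("./lora-weights-copy")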
Kashif Rasul
37d113cce7 DiT Pipeline (#1806)
* added dit model

* import

* initial pipeline

* initial convert script

* initial pipeline

* make style

* raise valueerror

* single function

* rename classes

* use DDIMScheduler

* timesteps embedder

* samples to cpu

* fix var names

* fix numpy type

* use timesteps class for proj

* fix typo

* fix arg name

* flip_sin_to_cos and better var names

* fix C shape cal

* make style

* remove unused imports

* cleanup

* add back patch_size

* initial dit doc

* typo

* Update docs/source/api/pipelines/dit.mdx

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* added copyright license headers

* added example usage and toc

* fix variable names asserts

* remove comment

* added docs

* fix typo

* upstream changes

* set proper device for drop_ids

* added initial dit pipeline test

* update docs

* fix imports

* make fix-copies

* isort

* fix imports

* get rid of more magic numbers

* fix code when guidance is off

* remove block_kwargs

* cleanup script

* removed to_2tuple

* use FeedForward class instead of another MLP

* style

* work on merging DiTBlock with BasicTransformerBlock

* added missing final_dropout and args to BasicTransformerBlock

* use norm from block

* fix arg

* remove unused arg

* fix call to class_embedder

* use timesteps

* make style

* attn_output gets multiplied

* removed commented code

* use Transformer2D

* use self.is_input_patches

* fix flags

* fixed conversion to use Transformer2DModel

* fixes for pipeline

* remove dit.py

* fix timesteps device

* use randn_tensor and fix fp16 inf.

* timesteps_emb already the right dtype

* fix dit test class

* fix test and style

* fix norm2 usage in vq-diffusion

* added author names to pipeline and ImageNet labels link

* fix tests

* use norm_type as string

* rename dit to transformer

* fix name

* fix test

* set  norm_type = "layer" by default

* fix tests

* do not skip common tests

* Update src/diffusers/models/attention.py

Co-authored-by: Suraj Patil <surajp815@gmail.com>

* revert AdaLayerNorm API

* fix norm_type name

* make sure all components are in eval mode

* revert norm2 API

* compact

* finish deprecation

* add slow tests

* remove @

* refactor some stuff

* upload

* Update src/diffusers/pipelines/dit/pipeline_dit.py

* finish more

* finish docs

* improve docs

* finish docs

Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-17 23:09:29 +01:00
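#1806 above adds the class-conditional DiT pipeline. A short usage sketch; the checkpoint id matches the DiT docs of this period (treat it as an assumption if it has since moved):

import torch
from diffusers import DiTPipeline

pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16).to("cuda")
# DiT is conditioned on ImageNet classes; get_label_ids maps label text to class ids.
class_ids = pipe.get_label_ids(["golden retriever"])
image = pipe(class_labels=class_ids, num_inference_steps=25).images[0]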
Jerry Jiarui XU
a43bdd01cd [Flax] Add Flax inpainting impl (#1966)
* [Flax] Add Flax inpainting impl

* fixed copies, add README.md

* fixed README.md

* add test

* format

* update README.md
2023-01-17 10:42:04 +01:00
Will Berman
07c0fe4b87 Use pipeline tests mixin for UnCLIP pipeline tests + unCLIP MPS fixes (#1908)
re: https://github.com/huggingface/diffusers/issues/1857

We relax some of the checks to deal with unCLIP reproducibility issues, mainly by checking the average pixel difference (measured within 0-255) instead of the max pixel difference (measured within 0-1).

- [x] add mixin to UnCLIPPipelineFastTests
- [x] add mixin to UnCLIPImageVariationPipelineFastTests
- [x] Move UnCLIPPipeline flags in mixin to base class
- [x] Small MPS fixes for F.pad and F.interpolate
- [x] Made test unCLIP model's dimensions smaller to run tests faster
2023-01-16 15:21:58 +01:00
Patrick von Platen
522f8aa7b2 [Black] Update black library (#2007) 2023-01-16 15:16:28 +01:00
Erin
cc2cc00d20 Add tests for 2D UNet blocks (#1945)
* test unet blocks 2d

* change to randn_tensor

* mps flaky
2023-01-16 12:53:05 +01:00
Vladimir Sotnikov
9b37ed33b5 [SD Img2Img] resize source images to multiple of 8 instead of 32 (#1571)
* [Stable Diffusion Img2Img] resize source images to integer multiple of 8 instead of 32

* [Alt Diffusion Img2Img] resize source images to multiple of 8 instead of 32

* [Img2Img] fix AltDiffusion Img2Img resolution test

* [Img2Img] add Stable Diffusion Img2Img resolution test

* [Cycle Diffusion] round resolution to multiples of 8 instead of 32

* [ONNX SD Img2Img] round resolution to multiples of 64 instead of 32

* [SD Depth2Img] round resolution to multiples of 8 instead of 32

* [Repaint] round resolution to multiples of 8 instead of 32

* fix make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-13 16:02:22 +01:00
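#1571 above rounds img2img-style inputs down to multiples of 8 instead of 32, so much less of the source image is cropped away. A tiny illustration of the arithmetic, assuming the preprocessing simply floors each side to the nearest allowed multiple:

def round_down(x: int, multiple: int) -> int:
    return x - x % multiple

w, h = 511, 343
print(round_down(w, 32), round_down(h, 32))  # 480 320: old behaviour, up to 31 px lost per side
print(round_down(w, 8), round_down(h, 8))    # 504 336: new behaviour, at most 7 px lost per side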
camenduru
f73ed17961 Allow converting Flax to PyTorch by adding a "from_flax" keyword (#1900)
* from_flax

* oops

* oops

* make style with pip install -e ".[dev]"

* oops

* now code quality happy 😋

* allow_patterns += FLAX_WEIGHTS_NAME

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update src/diffusers/pipelines/pipeline_utils.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* for test

* bye bye is_flax_available()

* oops

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_pytorch_flax_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* Update src/diffusers/models/modeling_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* make style

* add test

* finish

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-12 20:00:35 +01:00
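#1900 above adds a from_flax keyword so checkpoints that only ship Flax/JAX weights can be converted to PyTorch at load time (flax must be installed). A minimal sketch; the repo id is a placeholder for any Flax-only checkpoint:

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("some-user/flax-only-checkpoint", from_flax=True)
pipe.save_pretrained("./converted-pytorch-checkpoint")  # optionally persist the converted weights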
Patrick von Platen
6d3adf6570 Fix slow tests (#1983)
* [Slow tests] Fix tests

* Update tests/pipelines/karras_ve/test_karras_ve.py
2023-01-12 18:24:51 +01:00
qsh-zh
be99201a56 feat : add log-rho deis multistep scheduler (#1432)
* feat: add log-rho deis multistep scheduler

* docs: fix typo

* docs : add docs for impl algo

* docs : remove duplicate ref

* finish deis

* add docs

* fix

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2023-01-05 00:09:30 +01:00
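#1432 above contributes the DEIS multistep scheduler. A short sketch of swapping it into an existing pipeline; the model id and prompt are placeholders:

from diffusers import DEISMultistepScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# from_config reuses the existing scheduler configuration (betas, train timesteps, ...).
pipe.scheduler = DEISMultistepScheduler.from_config(pipe.scheduler.config)
image = pipe("a watercolor painting of a lighthouse", num_inference_steps=20).images[0]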
Patrick von Platen
9b63854886 Improve reproducibility 2/3 (#1906)
* [Repro] Correct reproducibility

* up

* up

* uP

* up

* need better image

* allow conversion from no state dict checkpoints

* up

* up

* up

* up

* check tensors

* check tensors

* check tensors

* check tensors

* next try

* up

* up

* better name

* up

* up

* Apply suggestions from code review

* correct more

* up

* replace all torch randn

* fix

* correct

* correct

* finish

* fix more

* up
2023-01-04 23:51:17 +01:00
Erin
9e17983d9f Test ResnetBlock2D (#1850)
* test resnet block

* fix code format required by isort

* add torch device

* nit
2023-01-04 22:57:32 +01:00
Patrick von Platen
8ed08e4270 [Deterministic torch randn] Allow tensors to be generated on CPU (#1902)
* [Deterministic torch randn] Allow tensors to be generated on CPU

* fix more

* up

* fix more

* up

* Update src/diffusers/utils/torch_utils.py

Co-authored-by: Anton Lozhkov <anton@huggingface.co>

* Apply suggestions from code review

* up

* up

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Anton Lozhkov <anton@huggingface.co>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2023-01-03 18:22:40 +01:00
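#1902 above introduces a randn_tensor helper (in src/diffusers/utils/torch_utils.py, per the review comment above) that can sample noise on the CPU and then move it to the target device, which is what makes results independent of the GPU that eventually runs the model. A hedged usage sketch:

import torch
from diffusers.utils.torch_utils import randn_tensor

generator = torch.Generator(device="cpu").manual_seed(0)
# Latents are drawn on the CPU from the seeded generator and only then moved to CUDA,
# so the same seed yields the same latents regardless of the GPU or driver version.
latents = randn_tensor(
    (1, 4, 64, 64), generator=generator, device=torch.device("cuda"), dtype=torch.float16
)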
Robert Dargavel Smith
4a7e4cec38 Add conditional generation to AudioDiffusionPipeline (#1826)
* Add conditional generation

* add fast test for conditional audio generation
2023-01-03 14:09:14 +01:00
Patrick von Platen
21bbc633c4 [Attention] Finish refactor attention file (#1879)
* [Attention] Finish refactor attention file

* correct more

* fix

* more fixes

* correct

* up
2023-01-01 19:18:10 +01:00
Patrick von Platen
b28ab30215 [Unclip] Make sure text_embeddings & image_embeddings can directly be passed to enable interpolation tasks. (#1858)
* [Unclip] Make sure latents can be reused

* allow one to directly pass embeddings

* up

* make unclip for text work

* finish allowing to pass embeddings

* correct more

* make style
2022-12-30 12:18:19 +01:00
Patrick von Platen
29b2c93c90 Make repo structure consistent (#1862)
* move files a bit

* more refactors

* fix more

* more fixes

* fix more onnx

* make style

* upload

* fix

* up

* fix more

* up again

* up

* small fix

* Update src/diffusers/__init__.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* correct

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-30 11:51:08 +01:00
Patrick von Platen
03bf877bf4 [StableDiffusionInpaint] Correct test (#1859) 2022-12-29 14:47:56 +01:00
Patrick von Platen
f2e521c499 [Dtype] Align dtype casting behavior with Transformers and Accelerate (#1725)
* [Dtype] Align automatic dtype

* up

* up

* fix

* re-add accelerate
2022-12-29 14:36:02 +01:00
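#1725 above aligns torch_dtype handling with Transformers and Accelerate, so a dtype passed at load time is applied consistently to every sub-model of the pipeline. A minimal sketch:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)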
Will Berman
53c8147afe unCLIP image variation (#1781)
* unCLIP image variation

* remove prior comment re: @pcuenca

* stable diffusion -> unCLIP re: @pcuenca

* add copy froms re: @patil-suraj
2022-12-28 14:17:09 +01:00
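#1781 above adds the unCLIP image-variation pipeline. A hedged sketch; the checkpoint id is my best recollection of the Karlo image-variation weights and should be treated as a placeholder, as should the input path:

from PIL import Image
from diffusers import UnCLIPImageVariationPipeline

pipe = UnCLIPImageVariationPipeline.from_pretrained("kakaobrain/karlo-v1-alpha-image-variations")
init_image = Image.open("input.png").convert("RGB")
variations = pipe(image=init_image, num_images_per_prompt=2).images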
Pedro Cuenca
df2b548e89 Make safety_checker optional in more pipelines (#1796)
* Make safety_checker optional in more pipelines.

* Remove inappropriate comment in inpaint pipeline.

* InPaint Test: set feature_extractor to None.

* Remove import

* img2img test: set feature_extractor to None.

* inpaint sd2 test: set feature_extractor to None.

Co-authored-by: Suraj Patil <surajp815@gmail.com>
2022-12-25 21:58:45 +01:00
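#1796 above lets more pipelines be constructed with the safety checker switched off instead of erroring out. A minimal sketch; the model id is a placeholder:

from diffusers import StableDiffusionPipeline

# Opting out of the safety checker (and its feature extractor) at load time;
# the pipeline emits a warning but no longer refuses to load.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", safety_checker=None, feature_extractor=None
)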
Patrick von Platen
4125756e88 Refactor cross attention and allow mechanism to tweak cross attention function (#1639)
* first proposal

* rename

* up

* Apply suggestions from code review

* better

* up

* finish

* up

* rename

* correct versatile

* up

* up

* up

* up

* fix

* Apply suggestions from code review

* make style

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* add error message

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 18:49:05 +01:00
Simon Kirsten
f106ab40b3 [Flax] Stateless schedulers, fixes and refactors (#1661)
* [Flax] Stateless schedulers, fixes and refactors

* Remove scheduling_common_flax and some renames

* Update src/diffusers/schedulers/scheduling_pndm_flax.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-20 01:42:41 +01:00
Anton Lozhkov
8331da4683 Bump to 0.12.0.dev0 (#1771) 2022-12-19 18:44:08 +01:00
Anton Lozhkov
f1a32203aa [Tests] Fix UnCLIP cpu offload tests (#1769) 2022-12-19 18:25:08 +01:00
Patrick von Platen
ce1c27adc8 [Revision] Don't recommend using revision (#1764) 2022-12-19 16:25:41 +01:00
Anton Lozhkov
c7b4acfb37 Add CPU offloading to UnCLIP (#1761)
* Add CPU offloading to UnCLIP

* use fp32 for testing the offload
2022-12-19 14:44:08 +01:00
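#1761 above brings CPU offloading to UnCLIP. A sketch assuming the same enable_sequential_cpu_offload() entry point that the Stable Diffusion pipelines expose:

from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha")
# Sub-models stay on the CPU and are moved to the GPU one at a time while in use,
# trading speed for a much smaller VRAM footprint.
pipe.enable_sequential_cpu_offload()
image = pipe("a high-quality photo of a corgi").images[0]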
Will Berman
830a9d1f01 [fix] pipeline_unclip generator (#1751)
* [fix] pipeline_unclip generator

pass generator to all schedulers

* fix fast tests test data
2022-12-19 10:27:18 +01:00
Will Berman
2dcf64b72a kakaobrain unCLIP (#1428)
* [wip] attention block updates

* [wip] unCLIP unet decoder and super res

* [wip] unCLIP prior transformer

* [wip] scheduler changes

* [wip] text proj utility class

* [wip] UnCLIPPipeline

* [wip] kakaobrain unCLIP convert script

* [unCLIP pipeline] fixes re: @patrickvonplaten

remove callbacks

move denoising loops into call function

* UNCLIPScheduler re: @patrickvonplaten

Revert changes to DDPMScheduler. Make UNCLIPScheduler a modified
DDPM scheduler with changes to support Karlo

* mask -> attention_mask re: @patrickvonplaten

* [DDPMScheduler] remove leftover change

* [docs] PriorTransformer

* [docs] UNet2DConditionModel and UNet2DModel

* [nit] UNCLIPScheduler -> UnCLIPScheduler

matches existing unclip naming better

* [docs] SchedulingUnCLIP

* [docs] UnCLIPTextProjModel

* refactor

* finish licenses

* rename all to attention_mask and prep in models

* more renaming

* don't expose unused configs

* final renaming fixes

* remove x attn mask when not necessary

* configure kakao script to use new class embedding config

* fix copies

* [tests] UnCLIPScheduler

* finish x attn

* finish

* remove more

* rename condition blocks

* clean more

* Apply suggestions from code review

* up

* fix

* [tests] UnCLIPPipelineFastTests

* remove unused imports

* [tests] UnCLIPPipelineIntegrationTests

* correct

* make style

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
2022-12-18 15:15:30 -08:00
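#1428 above is the base kakaobrain unCLIP (Karlo) pipeline that the offloading and image-variation entries higher up build on. A short usage sketch:

import torch
from diffusers import UnCLIPPipeline

pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha").to("cuda")
generator = torch.Generator(device="cuda").manual_seed(0)
image = pipe("a painting of a fox in the style of ukiyo-e", generator=generator).images[0]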
Anton Lozhkov
c2a38ef9df Fix/update the LDM pipeline and tests (#1743)
* Fix/update LDM tests

* batched generators
2022-12-18 11:49:53 +01:00
Anton Lozhkov
08cc36ddff Fix MPS fast test warnings (#1744)
* unset level
2022-12-17 22:57:30 +01:00
Patrick von Platen
c53a850604 [Batched Generators] This PR adds generators that are useful to make batched generation fully reproducible (#1718)
* [Batched Generators] all batched generators

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* up

* hey

* up again

* fix tests

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* correct tests

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-17 11:13:16 +01:00
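#1718 above lets a list of generators, one per batch element, be passed to a pipeline so that every image in a batch is individually reproducible. A minimal sketch; the model id and prompts are placeholders:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
prompts = ["a red bicycle", "a blue bicycle", "a green bicycle"]
# One seeded generator per prompt: any single image can later be re-generated
# on its own by reusing the corresponding seed.
generators = [torch.Generator(device="cpu").manual_seed(i) for i in range(len(prompts))]
images = pipe(prompts, generator=generators).images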
Anton Lozhkov
086c7f9ea8 Nightly integration tests (#1664)
* [WIP] Nightly integration tests

* initial SD tests

* update SD slow tests

* style

* repaint

* ImageVariations

* style

* finish imgvar

* img2img tests

* debug

* inpaint 1.5

* inpaint legacy

* torch isn't happy about deterministic ops

* allclose -> max diff for shorter logs

* add SD2

* debug

* Update tests/pipelines/stable_diffusion_2/test_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* Update tests/pipelines/stable_diffusion/test_stable_diffusion.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* fix refs

* Update src/diffusers/utils/testing_utils.py

Co-authored-by: Pedro Cuenca <pedro@huggingface.co>

* fix refs

* remove debug

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
2022-12-16 18:51:11 +01:00
Patrick von Platen
c6d0dff4a3 Fix ldm tests on master by not running the CPU tests on GPU (#1729) 2022-12-16 15:28:40 +01:00
Anton Lozhkov
a40095dd22 Fix ONNX img2img preprocessing and add fast tests coverage (#1727)
* Fix ONNX img2img preprocessing and add fast tests coverage

* revert

* disable progressbars
2022-12-16 15:24:16 +01:00