Tolga Cangöz
0ab63ff647
Fix CPU Offloading Usage & Typos ( #8230 )
...
* Fix typos
* Fix `pipe.enable_model_cpu_offload()` usage
* Fix cpu offloading
* Update numbers
2024-05-24 11:25:29 -07:00
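The usage fix above comes down to call order. A minimal sketch, assuming a recent diffusers install (the pipeline calls are shown as comments since they require model weights and a GPU; the checkpoint id and prompt are illustrative):

```python
# Corrected offloading usage per #8230: do NOT call pipe.to("cuda") before
# enabling offload — enable_model_cpu_offload() manages device placement
# itself, moving each submodule to the GPU only while it runs.
#
#   import torch
#   from diffusers import DiffusionPipeline
#
#   pipe = DiffusionPipeline.from_pretrained(
#       "stabilityai/stable-diffusion-xl-base-1.0",  # example checkpoint
#       torch_dtype=torch.float16,
#   )
#   pipe.enable_model_cpu_offload()   # instead of pipe.to("cuda")
#   image = pipe("an astronaut riding a horse").images[0]

# The corrected call order, as data:
call_order = ["from_pretrained", "enable_model_cpu_offload", "__call__"]
```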
Sayak Paul
95d3748453
[LoRA] Fix LoRA tests (side effects of RGB ordering) part ii ( #7932 )
...
* check
* check 2.
* update slices
2024-05-13 09:23:48 -10:00
Sayak Paul
305f2b4498
[Tests] fix things after #7013 ( #7899 )
...
* debugging
* save the resulting image
* check if order reversing works.
* checking values.
* up
* okay
* checking
* fix
* remove print
2024-05-09 16:05:35 +02:00
Álvaro Somoza
23e091564f
Fix for "no lora weight found module" with some loras ( #7875 )
...
* return layer weight if not found
* better system and test
* key example and typo
2024-05-07 13:54:57 +02:00
Benjamin Bossan
2523390c26
FIX Setting device for DoRA parameters ( #7655 )
...
Fix a bug that causes the call to set_lora_device to ignore the DoRA
parameters.
2024-04-12 13:55:46 +02:00
UmerHA
0302446819
Implements Blockwise lora ( #7352 )
...
* Initial commit
* Implemented block lora
- implemented block lora
- updated docs
- added tests
* Finishing up
* Reverted unrelated changes made by make style
* Fixed typo
* Fixed bug + Made text_encoder_2 scalable
* Integrated some review feedback
* Incorporated review feedback
* Fix tests
* Made every module configurable
* Adapted to new lora test structure
* Final cleanup
* Some more final fixes
- Included examples in `using_peft_for_inference.md`
- Added hint that only attns are scaled
- Removed NoneTypes
- Added test to check mismatching lens of adapter names / weights raise error
* Update using_peft_for_inference.md
* Update using_peft_for_inference.md
* Make style, quality, fix-copies
* Updated tutorial;Warning if scale/adapter mismatch
* floats are forwarded as-is; changed tutorial scale
* make style, quality, fix-copies
* Fixed typo in tutorial
* Moved some warnings into `lora_loader_utils.py`
* Moved scale/lora mismatch warnings back
* Integrated final review suggestions
* Empty commit to trigger CI
* Reverted empty commit to trigger CI
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-29 21:15:57 +05:30
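The blockwise control added in #7352 can be sketched as a nested scale dict passed to `set_adapters`; the adapter name and scale values below are illustrative, and the pipeline call is commented since it assumes a loaded model:

```python
# Blockwise LoRA scales (#7352): a nested dict controls how strongly the
# LoRA attention layers are scaled per model part, per block group, or per
# individual transformer block.
scales = {
    "text_encoder": 0.5,                 # one float scales the whole part
    "unet": {
        "down": 0.9,                     # all down-blocks at once
        "mid": 1.0,
        "up": {
            "block_0": 0.6,              # a single up-block...
            "block_1": [0.4, 0.8, 1.0],  # ...or its transformers individually
        },
    },
}
# Applied to a pipeline with the adapter already loaded (name assumed):
# pipe.set_adapters("my_adapter", scales)
```

As noted in the commit bullets, only attention layers are scaled, and floats are forwarded as-is.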
Dhruv Nair
4d39b7483d
Memory clean up on all Slow Tests ( #7514 )
...
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-29 14:23:28 +05:30
UmerHA
0b8e29289d
Skip test_lora_fuse_nan on mps ( #7481 )
...
Skipping test_lora_fuse_nan on mps
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2024-03-27 14:35:59 +05:30
Sayak Paul
699dfb084c
feat: support DoRA LoRA from community ( #7371 )
...
* feat: support dora loras from community
* safe-guard dora operations under peft version.
* pop use_dora when False
* make dora lora from kohya work.
* fix: kohya conversion utils.
* add a fast test for DoRA compatibility..
* add a nightly test.
2024-03-26 09:37:33 +05:30
UmerHA
1cd4732e7f
Fixed minor error in test_lora_layers_peft.py ( #7394 )
...
* Update test_lora_layers_peft.py
* Update utils.py
2024-03-25 11:35:27 -10:00
Sayak Paul
e25e525fde
[LoRA test suite] refactor the test suite and cleanse it ( #7316 )
...
* cleanse and refactor lora testing suite.
* more cleanup.
* make check_if_lora_correctly_set a utility function
* fix: typo
* retrigger ci
* style
2024-03-20 17:13:52 +05:30
Sayak Paul
b09a2aa308
[LoRA] fix cross_attention_kwargs problems and tighten tests ( #7388 )
...
* debugging
* let's see the numbers
* let's see the numbers
* let's see the numbers
* restrict tolerance.
* increase inference steps.
* shallow copy of cross_attention_kwargs
* remove print
2024-03-19 17:53:38 +05:30
Younes Belkada
8a692739c0
FIX [PEFT / Core] Copy the state dict when passing it to load_lora_weights ( #7058 )
...
* copy the state dict in load lora weights
* fixup
2024-02-27 02:42:23 +01:00
jinghuan-Chen
88aa7f6ebf
Make LoRACompatibleConv padding_mode work. ( #6031 )
...
* Make LoRACompatibleConv padding_mode work.
* Format code style.
* add fast test
* Update src/diffusers/models/lora.py
Simplify the code by patrickvonplaten.
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* code refactor
* apply patrickvonplaten suggestion to simplify the code.
* rm test_lora_layers_old_backend.py and add test case in test_lora_layers_peft.py
* update test case.
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: YiYi Xu <yixu310@gmail.com >
2024-02-26 14:05:13 -10:00
Dhruv Nair
40dd9cb2bd
Move SDXL T2I Adapter lora test into PEFT workflow ( #6965 )
...
update
2024-02-13 17:08:53 +05:30
Sayak Paul
ca9ed5e8d1
[LoRA] deprecate certain lora methods from the old backend. ( #6889 )
...
* deprecate certain lora methods from the old backend.
* uncomment necessary things.
* safely remove old lora backend
2024-02-09 17:14:32 +01:00
Sayak Paul
30e5e81d58
change to 2024 in the license ( #6902 )
...
change to 2024
2024-02-08 08:19:31 -10:00
Dhruv Nair
d66d554dc2
Add tearDown method to LoRA tests. ( #6660 )
...
* update
* update
2024-01-22 14:00:37 +05:30
Sayak Paul
ae060fc4f1
[feat] introduce unload_lora(). ( #6451 )
...
* introduce unload_lora.
* fix-copies
2024-01-05 16:22:11 +05:30
Sayak Paul
0a0bb526aa
[LoRA deprecation] LoRA deprecation trilogy ( #6450 )
...
* edebug
* debug
* more debug
* more more debug
* remove tests for LoRAAttnProcessors.
* rename
2024-01-05 15:48:20 +05:30
Sayak Paul
107e02160a
[LoRA tests] fix stuff related to assertions arising from the recent changes. ( #6448 )
...
* debug
* debug test_with_different_scales_fusion_equivalence
* use the right method.
* place it right.
* let's see.
* let's see again
* alright then.
* add a comment.
2024-01-04 12:55:15 +05:30
sayakpaul
6dbef45e6e
Revert "debug"
...
This reverts commit 7715e6c31c .
2024-01-04 10:39:38 +05:30
sayakpaul
7715e6c31c
debug
2024-01-04 10:39:00 +05:30
sayakpaul
05b3d36a25
Revert "debug"
...
This reverts commit fb4aec0ce3 .
2024-01-04 10:38:04 +05:30
sayakpaul
fb4aec0ce3
debug
2024-01-04 10:37:28 +05:30
Sayak Paul
d700140076
[LoRA deprecation] handle rest of the stuff related to deprecated lora stuff. ( #6426 )
...
* handle rest of the stuff related to deprecated lora stuff.
* fix: copies
* don't modify the uNet in-place.
* fix: temporal autoencoder.
* manually remove lora layers.
* don't copy unet.
* alright
* remove lora attn processors from unet3d
* fix: unet3d.
* styl
* Empty-Commit
2024-01-03 20:54:09 +05:30
Sayak Paul
2e4dc3e25d
[LoRA] add: test to check if peft loras are loadable in non-peft envs. ( #6400 )
...
* add: test to check if peft loras are loadable in non-peft envs.
* add torch_device appropriately.
* fix: get_dummy_inputs().
* test logits.
* rename
* debug
* debug
* fix: generator
* new assertion values after fixing the seed.
* shape
* remove print statements and settle this.
* to update values.
* change values when lora config is initialized under a fixed seed.
* update colab link
* update notebook link
* sanity restored by getting the exact same values without peft.
2024-01-03 09:57:49 +05:30
Sayak Paul
61f6c5472a
[LoRA] Remove the use of deprecated LoRA functionalities such as LoRAAttnProcessor ( #6369 )
...
* start deprecating loraattn.
* fix
* wrap into unet_lora_state_dict
* utilize text_encoder_lora_params
* utilize text_encoder_attn_modules
* debug
* debug
* remove print
* don't use text encoder for test_stable_diffusion_lora
* load the procs.
* set_default_attn_processor
* fix: set_default_attn_processor call.
* fix: lora_components[unet_lora_params]
* checking for 3d.
* 3d.
* more fixes.
* debug
* debug
* debug
* debug
* more debug
* more debug
* more debug
* more debug
* more debug
* more debug
* hack.
* remove comments and prep for a PR.
* appropriate set_lora_weights()
* fix
* fix: test_unload_lora_sd
* fix: test_unload_lora_sd
* use default attention processors.
* debu
* debug nan
* debug nan
* debug nan
* use NaN instead of inf
* remove comments.
* fix: test_text_encoder_lora_state_dict_unchanged
* attention processor default
* default attention processors.
* default
* style
2024-01-02 18:14:04 +05:30
Sayak Paul
6a376ceea2
[LoRA] remove unnecessary components from lora peft test suite ( #6401 )
...
remove unnecessary components from lora peft suite
2023-12-30 18:25:40 +05:30
Younes Belkada
3aba99af8f
[Peft / Lora] Add adapter_names in fuse_lora ( #5823 )
...
* add adapter_name in fuse
* add test
* up
* fix CI
* adapt from suggestion
* Update src/diffusers/utils/testing_utils.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* change to `require_peft_version_greater`
* change variable names in test
* Update src/diffusers/loaders/lora.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* break into 2 lines
* final comments
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2023-12-26 16:54:47 +01:00
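The `adapter_names` argument from #5823 lets `fuse_lora` bake only selected adapters into the base weights. A minimal sketch with hypothetical adapter names and repos (the pipeline calls are commented since they assume loaded checkpoints):

```python
# fuse_lora with adapter_names (#5823): fuse a chosen subset of the loaded
# adapters into the base weights, optionally scaled via lora_scale.
fuse_kwargs = {"adapter_names": ["style"], "lora_scale": 0.7}

# pipe.load_lora_weights("some/style-lora", adapter_name="style")    # assumed repos
# pipe.load_lora_weights("some/detail-lora", adapter_name="detail")
# pipe.fuse_lora(**fuse_kwargs)  # only "style" is fused; "detail" stays dynamic
# pipe.unfuse_lora()             # restores the original base weights
```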
Sayak Paul
89459a5d56
fix: lora peft dummy components ( #6308 )
...
* fix: lora peft dummy components
* fix: dummy components
2023-12-25 11:26:45 +05:30
Dhruv Nair
fe574c8b29
LoRA Unfusion test fix ( #6291 )
...
update
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-24 14:31:48 +05:30
Sayak Paul
90b9479903
[LoRA PEFT] fix LoRA loading so that correct alphas are parsed ( #6225 )
...
* initialize alpha too.
* add: test
* remove config parsing
* store rank
* debug
* remove faulty test
2023-12-24 09:59:41 +05:30
Dhruv Nair
59d1caa238
Remove peft tests from old lora backend tests ( #6273 )
...
update
2023-12-22 13:35:52 +05:30
Benjamin Bossan
43979c2890
TST Fix LoRA test that fails with PEFT >= 0.7.0 ( #6216 )
...
See #6185 for context.
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-21 11:50:05 +01:00
Dhruv Nair
f5dfe2a8b0
LoRA test fixes ( #6163 )
...
* update
* update
* update
* update
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-12-15 08:39:41 +05:30
YiYi Xu
ba352aea29
[feat] IP Adapters (author @okotaku ) ( #5713 )
...
* add ip-adapter
---------
Co-authored-by: okotaku <to78314910@gmail.com >
Co-authored-by: sayakpaul <spsayakpaul@gmail.com >
Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2023-11-21 07:34:30 -10:00
Sayak Paul
ded93f798c
[Refactor] refactor loaders.py to make it cleaner and leaner. ( #5771 )
...
* refactor loaders.py to make it cleaner and leaner.
* refactor loaders init
* inits.
* textual inversion to the init.
* inits.
* remove certain modules from the main init.
* AttnProcsLayers
* fix imports
* avoid circular import.
* fix circular import pt 2.
* address PR comments
* imports
* fix: imports.
* remove from main init for avoiding circular deps.
* remove spurious deps.
* fix-copies.
* fix imports.
* more debug
* more debug
* Apply suggestions from code review
* Apply suggestions from code review
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-11-14 12:54:28 +01:00
Sourab Mangrulkar
9c8eca702c
add lora delete feature ( #5738 )
...
* add lora delete feature
* added tests and changed condition
* deal with corner cases
* more corner cases
* rename to `delete_adapter_layers` for consistency
---------
Co-authored-by: younesbelkada <younesbelkada@gmail.com >
2023-11-14 10:51:13 +01:00
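The deletion feature from #5738 (`delete_adapter_layers` at the model level) surfaces on pipelines as `delete_adapters`; a brief sketch with a hypothetical adapter name, the call commented since it assumes a loaded adapter:

```python
# Deleting adapters (#5738): removes the LoRA layers registered under the
# given names, freeing them without unloading every adapter.
to_delete = ["style"]  # hypothetical adapter name
# pipe.delete_adapters(to_delete)
# pipe.get_active_adapters()   # "style" no longer listed
```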
Patrick von Platen
3d7eaf83d7
LCM Add Tests ( #5707 )
...
* lcm add tests
* uP
* Fix all
* uP
* Add
* all
* uP
* uP
* uP
* uP
* uP
* uP
* uP
2023-11-09 15:45:11 +01:00
Younes Belkada
02ba50c610
[PEFT / LoRA] Fix civitai bug when network alpha is an empty dict ( #5608 )
...
* fix civitai bug
* add test
* up
* fix test
* added slow test.
* style
* Update src/diffusers/utils/peft_utils.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* Update src/diffusers/utils/peft_utils.py
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2023-11-01 22:08:22 +01:00
Younes Belkada
bc7a4d4917
[PEFT] Fix scale unscale with LoRA adapters ( #5417 )
...
* fix scale unscale v1
* final fixes + CI
* fix slow test
* oops
* fix copies
* oops
* oops
* fix
* style
* fix copies
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
2023-10-21 22:17:18 +05:30
Younes Belkada
2bfa55f4ed
[core / PEFT / LoRA] Integrate PEFT into Unet ( #5151 )
...
* v1
* add tests and fix previous failing tests
* fix CI
* add tests + v1 `PeftLayerScaler`
* style
* add scale retrieving mechanism system
* fix CI
* up
* up
* simple approach --> not same results for some reason
* fix issues
* fix copies
* remove unneeded method
* active adapters!
* fix merge conflicts
* up
* up
* kohya - test-1
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* fix scale
* fix copies
* add comment
* multi adapters
* fix tests
* oops
* v1 faster loading - in progress
* Revert "v1 faster loading - in progress"
This reverts commit ac925f8132 .
* kohya same generation
* fix some slow tests
* peft integration features for unet lora
1. Support for Multiple ranks/alphas
2. Support for Multiple active adapters
3. Support for enabling/disabling LoRAs
* fix `get_peft_kwargs`
* Update loaders.py
* add some tests
* add unfuse tests
* fix tests
* up
* add set adapter from sourab and tests
* fix multi adapter tests
* style & quality
* style
* remove comment
* fix `adapter_name` issues
* fix unet adapter name for sdxl
* fix enabling/disabling adapters
* fix fuse / unfuse unet
* nit
* fix
* up
* fix cpu offloading
* fix another slow test
* fix another offload test
* add more tests
* all slow tests pass
* style
* fix alpha pattern for unet and text encoder
* Update src/diffusers/loaders.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* Update src/diffusers/models/attention.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* up
* up
* clarify comment
* comments
* change comment order
* change comment order
* style & quality
* Update tests/lora/test_lora_layers_peft.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* fix bugs and add tests
* Update src/diffusers/models/modeling_utils.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* Update src/diffusers/models/modeling_utils.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* refactor
* suggestion
* add break statement
* add compile tests
* move slow tests to peft tests as I modified them
* quality
* refactor a bit
* style
* change import
* style
* fix CI
* refactor slow tests one last time
* style
* oops
* oops
* oops
* final tweak tests
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* Update src/diffusers/loaders.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* comments
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* remove comments
* more comments
* try
* revert
* add `safe_merge` tests
* add comment
* style, comments and run tests in fp16
* add warnings
* fix doc test
* replace with `adapter_weights`
* add `get_active_adapters()`
* expose `get_list_adapters` method
* better error message
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
* style
* trigger slow lora tests
* fix tests
* maybe fix last test
* revert
* Update src/diffusers/loaders.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* Update src/diffusers/loaders.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* Update src/diffusers/loaders.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* Update src/diffusers/loaders.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* move `MIN_PEFT_VERSION`
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* let's not use class variable
* fix few nits
* change a bit offloading logic
* check earlier
* rm unneeded block
* break long line
* return empty list
* change logic a bit and address comments
* add typehint
* remove parenthesis
* fix
* revert to fp16 in tests
* add to gpu
* revert to old test
* style
* Update src/diffusers/loaders.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* change indent
* Apply suggestions from code review
* Apply suggestions from code review
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Sourab Mangrulkar <13534540+pacman100@users.noreply.github.com >
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com >
2023-10-13 16:47:03 +02:00
Dhruv Nair
4d2c981d55
New xformers test runner ( #5349 )
...
* move xformers to dedicated runner
* fix
* remove ptl from test runner images
2023-10-13 00:32:39 +05:30
Patrick von Platen
ed2f956072
Fix loading broken LoRAs that could give NaN ( #5316 )
...
* Fix fuse Lora
* improve a bit
* make style
* Update src/diffusers/models/lora.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* ciao C file
* ciao C file
* test & make style
---------
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2023-10-09 18:01:55 +02:00
Dhruv Nair
dd5a36291f
New Pipeline Slow Test runners ( #5131 )
...
* pipeline fetcher
* update script
* clean up
* clean up
* clean up
* new pipeline runner
* rename tests to match modules
* test actions in pr
* change runner to gpu
* clean up
* clean up
* clean up
* fix report
* fix reporting
* clean up
* show test stats in failure reports
* give names to jobs
* add lora tests
* split torch cuda tests and add compile tests
* clean up
* fix tests
* change push to run only on main
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-10-04 11:42:17 +02:00
Patrick von Platen
a584d42ce5
[LoRA, Xformers] Fix xformers lora ( #5201 )
...
* fix xformers lora
* improve
* fix
2023-09-27 21:46:32 +05:30
Dhruv Nair
9946dcf8db
Test Fixes for CUDA Tests and Fast Tests ( #5172 )
...
* fix other tests
* fix tests
* fix tests
* Update tests/pipelines/shap_e/test_shap_e_img2img.py
* Update tests/pipelines/shap_e/test_shap_e_img2img.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* fix upstream merge mistake
* fix tests:
* test fix
* Update tests/lora/test_lora_layers_old_backend.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Update tests/lora/test_lora_layers_old_backend.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
2023-09-26 19:08:02 +05:30
Younes Belkada
493f9529d7
[PEFT / LoRA] PEFT integration - text encoder ( #5058 )
...
* more fixes
* up
* up
* style
* add in setup
* oops
* more changes
* v1 rzfactor CI
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* few todos
* protect torch import
* style
* fix fuse text encoder
* Update src/diffusers/loaders.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* replace with `recurse_replace_peft_layers`
* keep old modules for BC
* adjustments on `adjust_lora_scale_text_encoder`
* nit
* move tests
* add conversion utils
* remove unneeded methods
* use class method instead
* oops
* use `base_version`
* fix examples
* fix CI
* fix weird error with python 3.8
* fix
* better fix
* style
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* add comment
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
* conv2d support for recurse remove
* added docstrings
* more docstring
* add deprecate
* revert
* try to fix merge conflicts
* v1 tests
* add new decorator
* add saving utilities test
* adapt tests a bit
* add save / from_pretrained tests
* add saving tests
* add scale tests
* fix deps tests
* fix lora CI
* fix tests
* add comment
* fix
* style
* add slow tests
* slow tests pass
* style
* Update src/diffusers/utils/import_utils.py
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* Apply suggestions from code review
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
* circumvents pattern finding issue
* left a todo
* Apply suggestions from code review
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
* update hub path
* add lora workflow
* fix
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com >
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com >
Co-authored-by: Benjamin Bossan <BenjaminBossan@users.noreply.github.com >
2023-09-22 13:03:39 +02:00
Sayak Paul
e312b2302b
[LoRA] support LyCORIS ( #5102 )
...
* better condition.
* debugging
* how about now?
* how about now?
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* debugging
* support for lycoris.
* style
* add: lycoris test
* fix from_pretrained call.
* fix assertion values.
2023-09-20 10:30:18 +01:00