Fix argument name in 8bit quantized example
Found a tiny mistake in the documentation where the text encoder model was passed to the wrong argument in the FluxPipeline.from_pretrained function.
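For reference, a minimal sketch of the corrected call shape, assuming the example quantizes the T5 encoder with bitsandbytes and that `text_encoder_2` is the intended argument; the checkpoint id and settings are illustrative:

```python
import torch
from transformers import BitsAndBytesConfig, T5EncoderModel
from diffusers import FluxPipeline

# Quantize the T5 text encoder to 8-bit (illustrative settings).
text_encoder_2 = T5EncoderModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="text_encoder_2",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    torch_dtype=torch.float16,
)

# Pass the quantized encoder to the matching argument, text_encoder_2,
# rather than to a different component slot.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    text_encoder_2=text_encoder_2,
    torch_dtype=torch.float16,
)
```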
* autoencoder_dc tiling
* add tiling and slicing support in SANA pipelines
* create variables for padding length because the line becomes too long
* add tiling and slicing support in pag SANA pipelines
* revert changes to tile size
* make style
* add vae tiling test
---------
Co-authored-by: Aryan <aryan@huggingface.co>
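A rough usage sketch of the new switches, assuming they are exposed on the pipeline's AutoencoderDC as enable_tiling/enable_slicing; the checkpoint id is a placeholder:

```python
import torch
from diffusers import SanaPipeline

# Placeholder checkpoint id; substitute a real SANA checkpoint.
pipe = SanaPipeline.from_pretrained("<sana-checkpoint>", torch_dtype=torch.bfloat16)
pipe.to("cuda")

pipe.vae.enable_tiling()   # decode latents tile by tile to cap peak VAE memory
pipe.vae.enable_slicing()  # decode batched latents one sample at a time

image = pipe(prompt="a photo of an astronaut").images[0]
```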
Correcting a typo in the table number of a referenced paper (in scheduling_ddim_inverse.py)
Changed the number of the referenced table from 1 to 2 in a comment in the set_timesteps() method of the DDIMInverseScheduler class, consistent with the description of the 'timestep_spacing' attribute in its __init__ method.
* Add no_mmap arg.
* Fix arg parsing.
* Update another method to force no mmap.
* logging
* logging2
* propagate no_mmap
* logging3
* propagate no_mmap
* logging4
* fix open call
* clean up logging
* cleanup
* fix missing arg
* update logging and comments
* Rename to disable_mmap and update other references.
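Roughly how the renamed flag would be used from the single-file loader, assuming it is surfaced as a from_single_file keyword; the path is a placeholder:

```python
from diffusers import StableDiffusionXLPipeline

# disable_mmap=True reads the checkpoint into memory up front instead of
# memory-mapping it, which can help when loading from network filesystems.
pipe = StableDiffusionXLPipeline.from_single_file(
    "/path/to/checkpoint.safetensors",  # placeholder path
    disable_mmap=True,
)
```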
* [Docs] Update ltx_video.md to remove generator from `from_pretrained()` (#10316)
Update ltx_video.md to remove generator from `from_pretrained()`
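For clarity, a small sketch of where the generator actually belongs: it is a call-time argument to the pipeline, not a `from_pretrained()` argument. The checkpoint id assumes the Lightricks/LTX-Video repo; the prompt is illustrative:

```python
import torch
from diffusers import LTXPipeline

# from_pretrained() only loads the model; no generator here.
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# The generator is passed at call time to make sampling reproducible.
video = pipe(
    prompt="a river winding through a forest",
    generator=torch.Generator(device="cuda").manual_seed(0),
).frames[0]
```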
* docs: fix a mistake in docstring (#10319)
Update pipeline_hunyuan_video.py
docs: fix a mistake
* [BUG FIX] [Stable Audio Pipeline] Resolve torch.Tensor.new_zeros() TypeError in function prepare_latents caused by audio_vae_length (#10306)
[BUG FIX] [Stable Audio Pipeline] TypeError: new_zeros(): argument 'size' failed to unpack the object at pos 3 with error "type must be tuple of ints,but got float"
torch.Tensor.new_zeros() takes a single argument, size (int...): a list, tuple, or torch.Size of integers defining the shape of the output tensor.
In the prepare_latents function:
audio_vae_length = self.transformer.config.sample_size * self.vae.hop_length
audio_shape = (batch_size // num_waveforms_per_prompt, audio_channels, audio_vae_length)
...
audio = initial_audio_waveforms.new_zeros(audio_shape)
audio_vae_length evaluates to a float because self.transformer.config.sample_size is a float.
Co-authored-by: hlky <hlky@hlky.ac>
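A self-contained repro of the failure and the int cast that resolves it; the numbers are illustrative stand-ins for transformer.config.sample_size and vae.hop_length:

```python
import torch

sample_size = 1024.0  # a float, like transformer.config.sample_size
hop_length = 2048
audio_vae_length = sample_size * hop_length  # float

x = torch.zeros(2)
try:
    x.new_zeros((1, 2, audio_vae_length))  # TypeError: size must be a tuple of ints
except TypeError as err:
    print(err)

# Casting to int before building the shape avoids the error.
x.new_zeros((1, 2, int(audio_vae_length)))
```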
* [docs] Fix quantization links (#10323)
Update overview.md
* [Sana]add 2K related model for Sana (#10322)
add 2K related model for Sana
* Update src/diffusers/loaders/single_file_model.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* Update src/diffusers/loaders/single_file.py
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* make style
---------
Co-authored-by: hlky <hlky@hlky.ac>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Leojc <liao_junchao@outlook.com>
Co-authored-by: Aditya Raj <syntaxticsugr@gmail.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: Junsong Chen <cjs1020440147@icloud.com>
Co-authored-by: Dhruv Nair <dhruv.nair@gmail.com>
* don't assume the scheduler has optional config params
* make style, make fix-copies
* calculate_shift
* fix-copies, usage in pipelines
---------
Co-authored-by: hlky <hlky@hlky.ac>
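A sketch of the defensive pattern, assuming the Flux-style calculate_shift helper and its usual defaults; getattr falls back to a default instead of assuming the scheduler config defines the shift fields:

```python
def calculate_shift(image_seq_len, base_seq_len=256, max_seq_len=4096,
                    base_shift=0.5, max_shift=1.16):
    # Linearly interpolate the timestep shift between the base and max sequence lengths.
    m = (max_shift - base_shift) / (max_seq_len - base_seq_len)
    b = base_shift - m * base_seq_len
    return image_seq_len * m + b


class SchedulerConfig:
    """Stand-in config that deliberately omits the optional shift parameters."""


config = SchedulerConfig()

# Don't assume the config has these attributes; fall back to defaults.
mu = calculate_shift(
    image_seq_len=4096,
    base_seq_len=getattr(config, "base_image_seq_len", 256),
    max_seq_len=getattr(config, "max_image_seq_len", 4096),
    base_shift=getattr(config, "base_shift", 0.5),
    max_shift=getattr(config, "max_shift", 1.16),
)
print(mu)
```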
* fix device issue in single gpu case
* Update src/diffusers/pipelines/pipeline_utils.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
RFInversionFluxPipeline.encode_image, device fix
Use self._execution_device instead of self.device when selecting
a device for the input image tensor.
This allows for compatibility with enable_model_cpu_offload &
enable_sequential_cpu_offload
Co-authored-by: Teriks <Teriks@users.noreply.github.com>
Co-authored-by: Linoy Tsaban <57615435+linoytsaban@users.noreply.github.com>
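A usage sketch of why _execution_device matters under offloading; the repo id and the CUDA assumption are illustrative:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.enable_model_cpu_offload()

# With offloading enabled, modules live on the CPU between uses, so pipe.device
# can report cpu while pipe._execution_device reports the accelerator that the
# offload hooks move modules to. Tensors prepared inside the pipeline should
# therefore target _execution_device.
print(pipe.device)
print(pipe._execution_device)
```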
* update
* fix make copies
* update
* add relevant markers to the integration test suite.
* add copied.
* fix-copies
* temporarily add print.
* directly place on CUDA as the CPU isn't that big on the CI.
* fixes to fuse_lora, Aryan was right.
* fixes
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
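For context, a usage sketch of the fuse_lora path touched here; the base checkpoint and LoRA repo id are placeholders:

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("<user>/<flux-lora>")  # placeholder LoRA repo

pipe.fuse_lora(lora_scale=1.0)  # merge the adapter into the base weights
# ... run inference with the fused weights ...
pipe.unfuse_lora()              # restore the original weights if needed
```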
* make base code changes, adapted from the train_instructpix2pix script in examples
* change code to use PEFT as discussed in issue 10062
* update README training command
* update README training command
* refactor variable names and freeze the unet
* Update examples/research_projects/instructpix2pix_lora/train_instruct_pix2pix_lora.py
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* update README installation instructions.
* cleanup code using make style and quality
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
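A minimal sketch of the PEFT-based setup described above: freeze the UNet and attach a LoRA adapter. The rank and target modules are assumptions borrowed from other diffusers LoRA training examples:

```python
from diffusers import UNet2DConditionModel
from peft import LoraConfig

unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", subfolder="unet"
)
unet.requires_grad_(False)  # freeze the base weights; only the adapter trains

lora_config = LoraConfig(
    r=4,
    lora_alpha=4,
    init_lora_weights="gaussian",
    target_modules=["to_k", "to_q", "to_v", "to_out.0"],
)
unet.add_adapter(lora_config)

# Only the LoRA parameters require gradients and go to the optimizer.
trainable_params = [p for p in unet.parameters() if p.requires_grad]
```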
* Update pipeline_controlnet.py
* Update pipeline_controlnet_img2img.py
runwayml take-down, so change all references from runwayml/stable-diffusion-v1-5 to
stable-diffusion-v1-5/stable-diffusion-v1-5 (see the sketch at the end of this list)
* Update pipeline_controlnet_inpaint.py
* runwayml take-down: change to sd-legacy
* runwayml take-down: change to sd-legacy
* runwayml take-down: change to sd-legacy
* runwayml take-down: change to sd-legacy
* Update convert_blipdiffusion_to_diffusers.py
style change
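The sketch referenced above: the only change is the checkpoint id. The ControlNet repo is just an illustrative companion model:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # formerly runwayml/stable-diffusion-v1-5
    controlnet=controlnet,
    torch_dtype=torch.float16,
)
```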