* [Onnx] add Stable Diffusion Upscale pipeline
* add a test for the OnnxStableDiffusionUpscalePipeline
* check for VAE config before adjusting scaling factor
* update test assertions, lint fixes
* run fix-copies target
* switch test checkpoint to one hosted on huggingface
* partially restore attention mask
* reshape embeddings after running text encoder
* add longer nightly test for ONNX upscale pipeline
* use package import to fix tests
* fix scheduler compatibility and class labels dtype
* use more precise type
* remove LMS from fast tests
* lookup latent and timestep types
* add docs for ONNX upscaling, rename lookup table
* replace deprecated pipeline names in ONNX docs
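A rough usage sketch for the ONNX upscale pipeline added above; the checkpoint id, execution provider, and input file are assumptions for illustration, not taken from this log:

```python
# Sketch: ONNX Stable Diffusion x4 upscaler on CPU via onnxruntime.
# Checkpoint id and input file are placeholders.
from PIL import Image

from diffusers import OnnxStableDiffusionUpscalePipeline

pipe = OnnxStableDiffusionUpscalePipeline.from_pretrained(
    "ssube/stable-diffusion-x4-upscaler-onnx",  # assumed ONNX export of the x4 upscaler
    provider="CPUExecutionProvider",
)

low_res = Image.open("low_res.png").convert("RGB").resize((128, 128))
upscaled = pipe(prompt="a white cat", image=low_res).images[0]
upscaled.save("upscaled.png")
```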
* Tiled VAE for high-res text2img and img2img
* vae tiling, fix formatting
* enable_vae_tiling API and tests
* tiled vae docs, disable tiling for images that would have only one tile
* tiled vae tests, use channels_last memory format
* tiled vae tests, use smaller test image
* tiled vae tests, remove tiling test from fast tests
* up
* up
* make style
* Apply suggestions from code review
* Apply suggestions from code review
* Apply suggestions from code review
* make style
* improve naming
* finish
* apply suggestions
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* up
---------
Co-authored-by: Ilmari Heikkinen <ilmari@fhtr.org>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
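A minimal sketch of the tiled VAE feature introduced in the commits above, assuming a standard Stable Diffusion checkpoint:

```python
# Sketch: decode large images tile-by-tile so VAE memory stays bounded.
import torch

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

pipe.enable_vae_tiling()  # tiling is skipped automatically when the image fits in a single tile

image = pipe("a detailed matte painting of a castle", width=1024, height=1024).images[0]
image.save("castle_1024.png")
```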
* add scaffold
- copied convert_controlnet_to_diffusers.py from
convert_original_stable_diffusion_to_diffusers.py
* Add support to load ControlNet (WIP)
- this raises a missing key error on ControlNetModel
* Update to convert ControlNet without error msg
- init impl for StableDiffusionControlNetPipeline
- init impl for ControlNetModel
* clean up commented-out code
* split create_controlnet_diffusers_config()
from create_unet_diffusers_config()
- add config: hint_channels
* Add input_hint_block, input_zero_conv and
middle_block_out
- this still raises a missing key error when loading the model
* add unet_2d_blocks_controlnet.py
- copied from unet_2d_blocks.py as impl CrossAttnDownBlock2D,DownBlock2D
- this still raises a missing key error when loading the model
* Add loading for input_hint_block, zero_convs
and middle_block_out
- the model now loads without error messages
* Copy from UNet2DConditionModel except __init__
* Add ultra primitive test for ControlNetModel
inference
* Support ControlNetModel inference
- without exceptions
* copy forward() from UNet2DConditionModel
* Impl ControlledUNet2DConditionModel inference
- test_controlled_unet_inference passed
* Frozen weight & biases for training
* Minimized version of ControlNet/ControlledUnet
- test_modules_controllnet.py passed
* make style
* Add support for model loading for the minimized version
* Remove all previous version files
* from_pretrained and inference test passed
* copied from pipeline_stable_diffusion.py
except `__init__()`
* Impl pipeline, pixel match test (almost) passed.
* make style
* make fix-copies
* Fix to import ControlNet blocks
for `make fix-copies`
* Remove einops dependency
* Support np.ndarray, PIL.Image for controlnet_hint
* set default config file as lllyasviel's
* Add support for grayscale (h, w) numpy arrays
* Add and update docstrings
* add control_net.mdx
* add control_net.mdx to toctree
* Update copyright year
* Fix to add PIL.Image RGB->BGR conversion
- thanks @Mystfit
* make fix-copies
* add basic fast test for controlnet
* add slow test for controlnet/unet
* Ignore down/up_block len check on ControlNet
* add a copy from test_stable_diffusion.py
* Accept controlnet_hint being None
* merge pipeline_stable_diffusion.py diff
* Update class name to SDControlNetPipeline
* make style
* Baseline fast test almost passed (with a long description)
* still needs investigation.
The following didn't pass, as described in the TODO comment:
- test_stable_diffusion_long_prompt
- test_stable_diffusion_no_safety_checker
The following didn't pass, same as stable_diffusion_pipeline:
- test_attention_slicing_forward_pass
- test_inference_batch_single_identical
- test_xformers_attention_forwardGenerator_pass
These seem to come from calculation accuracy.
* Add note comment related to vae_scale_factor
* add test_stable_diffusion_controlnet_ddim
* add assertion for vae_scale_factor != 8
* slow test of pipeline almost passed
Failed: test_stable_diffusion_pipeline_with_model_offloading
- ImportError: `enable_model_offload` requires `accelerate v0.17.0` or higher
but the latest available version is currently 0.16.0
* test_stable_diffusion_long_prompt passed
* test_stable_diffusion_no_safety_checker passed
- due to its model size, move to slow test
* remove PoC test files
* fix num_of_image and prompt length issues, add test
* add support List[PIL.Image] for controlnet_hint
* wip
* all slow test passed
* make style
* update for slow test
* RGB(PIL)->BGR(ctrlnet) conversion
* fixes
* remove manual num_images_per_prompt test
* add document
* add `image` argument docstring
* make style
* Add line to correct conversion
* add controlnet_conditioning_scale (aka control_scales
strength)
* rgb channel ordering by default
* image batching logic
* Add control image descriptions for each checkpoint
* Only save controlnet model in conversion script
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
typo
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/control_net.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add generated image example
* a depth mask -> a depth map
* rename control_net.mdx to controlnet.mdx
* fix toc title
* add ControlNet abstract and link
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py
Co-authored-by: dqueue <dbyqin@gmail.com>
* remove controlnet constructor arguments re: @patrickvonplaten
* [integration tests] test canny
* test_canny fixes
* [integration tests] test_depth
* [integration tests] test_hed
* [integration tests] test_mlsd
* add channel order config to controlnet
* [integration tests] test normal
* [integration tests] test_openpose test_scribble
* change height and width to default to conditioning image
* [integration tests] test seg
* style
* test_depth fix
* [integration tests] size fixes
* [integration tests] cpu offloading
* style
* generalize controlnet embedding
* fix conversion script
* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
* Style adapted to the documentation of pix2pix
* merge main by hand
* style
* [docs] controlling generation doc nits
* correct some things
* add: controlnetmodel to autodoc.
* finish docs
* finish
* finish 2
* correct images
* finish controlnet
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* uP
* upload model
* up
* up
---------
Co-authored-by: William Berman <WLBberman@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: dqueue <dbyqin@gmail.com>
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
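A minimal sketch of the resulting ControlNet API; the checkpoint ids follow the lllyasviel/sd-controlnet-* naming referenced above, and the input file is a placeholder:

```python
# Sketch: Stable Diffusion conditioned on a precomputed Canny edge map via ControlNet.
import torch
from PIL import Image

from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

canny_image = Image.open("canny_edges.png")  # precomputed edge map (placeholder file)

image = pipe(
    "a bird perched on a branch, best quality",
    image=canny_image,                  # conditioning image (formerly controlnet_hint)
    controlnet_conditioning_scale=1.0,  # strength of the ControlNet guidance
).images[0]
image.save("controlnet_canny.png")
```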
* add sdpa processor
* don't use it by default
* add some checks and style
* typo
* support torch sdpa in dreambooth example
* use torch attn proc by default when available
* typo
* add attn mask
* fix naming
* begin doc
* doc
* Apply suggestions from code review
* polish
* toctree
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* better name
* style
* add benchmark table
* Update docs/source/en/optimization/torch2.0.mdx
* up
* fix example
* check if processor is None
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* add fp32 benchmark
* Apply suggestions from code review
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
---------
Co-authored-by: Sayak Paul <spsayakpaul@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
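A minimal sketch of opting into the torch 2.0 SDPA attention processor described above; on torch >= 2.0 it is already selected by default when available, and the import path reflects this era of the codebase:

```python
# Sketch: explicitly set the torch 2.0 scaled_dot_product_attention processor.
import torch

from diffusers import StableDiffusionPipeline
from diffusers.models.cross_attention import AttnProcessor2_0  # later moved to models.attention_processor

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# On torch >= 2.0 this is already the default; setting it by hand just makes the choice explicit.
pipe.unet.set_attn_processor(AttnProcessor2_0())

image = pipe("an astronaut riding a horse on mars").images[0]
```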
* attend and excite pipeline
* update
update docstring example
remove visualization
remove the base class attention control
remove dependency on stable diffusion pipeline
always apply gaussian filter with default setting
remove run_standard_sd argument
hardcode attention_res and scale_range (related to step size)
Update docs/source/en/api/pipelines/stable_diffusion/attend_and_excite.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Update tests/pipelines/stable_diffusion_2/test_stable_diffusion_attend_and_excite.py
Co-authored-by: Will Berman <wlbberman@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Will Berman <wlbberman@gmail.com>
Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py
Co-authored-by: Will Berman <wlbberman@gmail.com>
revert test_float16_inference
revert change to the batch related tests
fix test_float16_inference
handle batch
remove the deprecation message
remove None check, step_size
remove debugging logging
add slow test
indices_to_alter -> indices
add check_input
* skip mps
* style
* Apply suggestions from code review
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* indices -> token_indices
---------
Co-authored-by: evin <evinpinarornek@gmail.com>
Co-authored-by: yiyixuxu <yixu310@gmail,com>
Co-authored-by: Suraj Patil <surajp815@gmail.com>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
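A minimal sketch of the Attend-and-Excite pipeline added above; the prompt and token indices are illustrative:

```python
# Sketch: boost cross-attention for selected prompt tokens ("cat", "frog") during generation.
import torch

from diffusers import StableDiffusionAttendAndExcitePipeline

pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a cat and a frog"
image = pipe(
    prompt=prompt,
    token_indices=[2, 5],  # positions of "cat" and "frog" in the tokenized prompt (illustrative)
    guidance_scale=7.5,
    num_inference_steps=50,
).images[0]
image.save("cat_and_frog.png")
```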
* add UniPC scheduler
* add the return type to the functions
* code quality check
* add tests
* finish docs
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
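A minimal sketch of switching an existing pipeline to the new UniPC scheduler:

```python
# Sketch: swap the default scheduler for UniPC, which usually needs fewer steps.
import torch

from diffusers import StableDiffusionPipeline, UniPCMultistepScheduler

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe("a watercolor painting of a lighthouse", num_inference_steps=20).images[0]
```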
* add: support for BLIP generation.
* add: support for editing synthetic images.
* remove unnecessary comments.
* add inits and run make fix-copies.
* version change of diffusers.
* fix: condition for loading the captioner.
* default conditions_input_image to False.
* guidance_amount -> cross_attention_guidance_amount
* fix inputs to check_inputs()
* fix: attribute.
* fix: prepare_attention_mask() call.
* debugging.
* better placement of references.
* remove torch.no_grad() decorations.
* put torch.no_grad() context before the first denoising loop.
* detach() latents before decoding them.
* put decoding in a torch.no_grad() context.
* add reconstructed image for debugging.
* no_grad()
* apply formatting.
* address one-off suggestions from the draft PR.
* back to torch.no_grad() and add more elaborate comments.
* refactor prepare_unet() per Patrick's suggestions.
* more elaborate description for .
* formatting.
* add docstrings to the methods specific to pix2pix zero.
* suspecting a redundant noise prediction.
* needed for gradient computation chain.
* less hacks.
* fix: attention mask handling within the processor.
* remove attention reference map computation.
* fix: cross attn args.
* fix: processor.
* store attention maps.
* fix: attention processor.
* update docs and better treatment of xa args.
* update the final noise computation call.
* change xa args call.
* remove xa args option from the pipeline.
* add: docs.
* first test.
* fix: url call.
* fix: argument call.
* remove image conditioning for now.
* 🚨 add: fast tests.
* explicit placement of the xa attn weights.
* add: slow tests 🐢
* fix: tests.
* edited direction embedding should be on the same device as prompt_embeds.
* debugging message.
* debugging.
* add pix2pix zero pipeline for a non-deterministic test.
* debugging.
* remove debugging message.
* make caption generation _
* address comments (part I).
* address PR comments (part II)
* fix: DDPM test assertion.
* refactor doc.
* address PR comments (part III).
* fix: type annotation for the scheduler.
* apply styling.
* skip_mps and add note on embeddings in the docs.
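A rough sketch of the pix2pix-zero editing flow described above; the `embed()` helper is a simplified stand-in that just averages text-encoder outputs over a few captions per concept, not the exact recipe from the paper or docs:

```python
# Sketch: edit a generated image along a "cat -> dog" direction defined by text embeddings.
import torch

from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline

pipe = StableDiffusionPix2PixZeroPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)  # deterministic sampling


@torch.no_grad()
def embed(captions):
    # Average text-encoder embeddings over a handful of captions describing one concept.
    tokens = pipe.tokenizer(
        captions,
        padding="max_length",
        truncation=True,
        max_length=pipe.tokenizer.model_max_length,
        return_tensors="pt",
    )
    return pipe.text_encoder(tokens.input_ids.to("cuda"))[0].mean(dim=0, keepdim=True)


source_embeds = embed(["a photo of a cat", "a cat sitting on a sofa"])
target_embeds = embed(["a photo of a dog", "a dog sitting on a sofa"])

image = pipe(
    "a photo of a cat sitting on a sofa",
    source_embeds=source_embeds,
    target_embeds=target_embeds,
    num_inference_steps=50,
    cross_attention_guidance_amount=0.15,  # renamed from guidance_amount in this PR
).images[0]
```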
There isn't a space between the "Scope" paragraph and "Ethical Guidelines" here: https://huggingface.co/docs/diffusers/main/en/conceptual/ethical_guidelines, yet I can't see that in the preview. In this PR, I'm simply adding some spaces in the hope that it resolves the issue.
* initial docs about KarrasDiffusionSchedulers
* typo
* grammar
* Update docs/source/en/api/schedulers/overview.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* do not list the schedulers explicitly
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
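A minimal sketch of what the scheduler overview describes: any member of the KarrasDiffusionSchedulers family can be swapped in via `from_config`:

```python
# Sketch: schedulers in the KarrasDiffusionSchedulers family are interchangeable.
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
print(pipe.scheduler.compatibles)  # schedulers that can stand in for the current one

pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
```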
* pipeline_variant
* Add docs for when clip_stats_path is specified
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_unclip_img2img.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* prepare_latents # Copied from re: @patrickvonplaten
* NoiseAugmentor->ImageNormalizer
* stable_unclip_prior default to None re: @patrickvonplaten
* prepare_prior_extra_step_kwargs
* prior denoising scale model input
* {DDIM,DDPM}Scheduler -> KarrasDiffusionSchedulers re: @patrickvonplaten
* docs
* Update docs/source/en/api/pipelines/stable_unclip.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
---------
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
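A rough sketch of the stable unCLIP image-variation path touched above; the checkpoint id and input file are assumptions:

```python
# Sketch: image variations with stable unCLIP; noise_level drives the image-embedding
# noise augmentation handled by the ImageNormalizer mentioned above.
import torch
from PIL import Image

from diffusers import StableUnCLIPImg2ImgPipeline

pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip",  # assumed checkpoint id
    torch_dtype=torch.float16,
).to("cuda")

init_image = Image.open("input.png").convert("RGB")  # placeholder file
image = pipe(init_image, prompt="a vibrant fantasy landscape", noise_level=0).images[0]
image.save("variation.png")
```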
* better accelerated saving
* up
* finish
* finish
* uP
* up
* up
* fix
* Apply suggestions from code review
* correct ema
* Remove @
* up
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/training/dreambooth.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
---------
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Modify UNet2DConditionModel
- allow skipping mid_block
- adding a norm_group_size argument so that we can set the `num_groups` for group norm using `num_channels//norm_group_size`
- allow user to set dimension for the timestep embedding (`time_embed_dim`)
- the kernel_size for `conv_in` and `conv_out` is now configurable
- add random fourier feature layer (`GaussianFourierProjection`) for `time_proj`
- allow the user to add the time and class embeddings together before passing them through the projection layer - `time_embedding(t_emb + class_label)`
- added 2 arguments `attn1_types` and `attn2_types`
* currently we have the argument `only_cross_attention`: when it's set to `True`, the
`BasicTransformerBlock` contains two cross-attention layers, otherwise we
get a self-attention followed by a cross-attention; in the k-upscaler, we need blocks that include just one cross-attention, or self-attention -> cross-attention;
so I added `attn1_types` and `attn2_types` to the unet's argument list to let the user specify the attention types for the two positions in each block; note that I still kept
the `only_cross_attention` argument for the unet for easy configuration, but it will be converted to `attn1_type` and `attn2_type` when passed down to the down blocks
- the position of downsample layer and upsample layer is now configurable
- in the k-upscaler unet there is only one skip connection per up/down block (instead of per layer as in the stable diffusion unet); added `skip_freq = "block"` to support
this use case
- if the user passes attention_mask to the unet, it will prepare the mask and pass a flag to the cross-attention processor to skip the `prepare_attention_mask` step
inside the cross attention block
add up/down blocks for k-upscaler
modify CrossAttention class
- make the `dropout` layer in `to_out` optional
- `use_conv_proj` - use conv instead of linear for all projection layers (i.e. `to_q`, `to_k`, `to_v`, `to_out`) whenever possible. note that when it's used to do cross
attention, to_k and to_v have to be linear because the `encoder_hidden_states` is not 2d
- `cross_attention_norm` - add an optional layernorm on encoder_hidden_states
- `attention_dropout`: add an optional dropout on attention score
adapt BasicTransformerBlock
- add an ada group norm layer to condition the attention input on the timestep embedding
- allow skipping the FeedForward layer in between the attentions
- replaced the only_cross_attention argument with attn1_type and attn2_type for more flexible configuration
update timestep embedding: add new act_fn gelu and an optional act_2
modified ResnetBlock2D
- refactored with AdaGroupNorm class (the timestep scale shift normalization)
- add `mid_channel` argument - allow the first conv to have a different output dimension from the second conv
- add option to use AdaGroupNorm on the input instead of group norm
- add option to add a dropout layer after each conv
- allow user to set the bias in conv_shortcut (needed for k-upscaler)
- add gelu
adding conversion script for k-upscaler unet
add pipeline
* fix attention mask
* fix a typo
* fix a bug
* make sure model can be used with GPU
* make pipeline work with fp16
* fix an error in BasicTransformerBlock
* make style
* fix typo
* some more fixes
* uP
* up
* correct more
* some clean-up
* clean time proj
* up
* uP
* more changes
* remove the upcast_attention=True from unet config
* remove attn1_types, attn2_types etc
* fix
* revert incorrect changes up/down samplers
* make style
* remove outdated files
* Apply suggestions from code review
* attention refactor
* refactor cross attention
* Apply suggestions from code review
* update
* up
* update
* Apply suggestions from code review
* finish
* Update src/diffusers/models/cross_attention.py
* more fixes
* up
* up
* up
* finish
* more corrections of conversion state
* act_2 -> act_2_fn
* remove dropout_after_conv from ResnetBlock2D
* make style
* simplify KAttentionBlock
* add fast test for latent upscaler pipeline
* add slow test
* slow test fp16
* make style
* add doc string for pipeline_stable_diffusion_latent_upscale
* add api doc page for latent upscaler pipeline
* deprecate attention mask
* clean up embeddings
* simplify resnet
* up
* clean up resnet
* up
* correct more
* up
* up
* improve a bit more
* correct more
* more clean-ups
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* add docstrings for new unet config
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* # Copied from
* encode the image if not latent
* remove force casting vae to fp32
* fix
* add comments about preconditioning parameters from k-diffusion paper
* attn1_type, attn2_type -> add_self_attention
* clean up get_down_block and get_up_block
* fix
* fixed a typo(?) in ada group norm
* update slice attention processor for cross attention
* update slice
* fix fast test
* update the checkpoint
* finish tests
* fix-copies
* fix-copy for modeling_text_unet.py
* make style
* make style
* fix f-string
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix import
* correct changes
* fix resnet
* make fix-copies
* correct euler scheduler
* add missing #copied from for preprocess
* revert
* fix
* fix copies
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/models/cross_attention.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* clean up conversion script
* KDownsample2d,KUpsample2d -> KDownsample2D,KUpsample2D
* more
* Update src/diffusers/models/unet_2d_condition.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* remove prepare_extra_step_kwargs
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
* fix a typo in timestep embedding
* remove num_image_per_prompt
* fix fast test
* make style + fix-copies
* fix
* fix xformer test
* fix style
* doc string
* make style
* fix-copies
* docstring for time_embedding_norm
* make style
* final finishes
* make fix-copies
* fix tests
---------
Co-authored-by: yiyixuxu <yixu@yis-macbook-pro.lan>
Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
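A minimal sketch of the finished latent upscaler pipeline, chaining it after a base text-to-image pass; the checkpoint ids are the commonly used ones, not taken from this log:

```python
# Sketch: generate latents with a base pipeline, then upscale them 2x with the latent upscaler.
import torch

from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut, high resolution, ultra detailed"
generator = torch.manual_seed(33)

low_res_latents = pipe(prompt, generator=generator, output_type="latent").images

image = upscaler(
    prompt=prompt,
    image=low_res_latents,  # latents are accepted directly; plain images would be encoded first
    num_inference_steps=20,
    guidance_scale=0,
    generator=generator,
).images[0]
image.save("astronaut_1024.png")
```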
* Short doc on changing the scheduler in Flax.
* Apply fix from @patil-suraj
Co-authored-by: Suraj Patil <surajp815@gmail.com>
---------
Co-authored-by: Suraj Patil <surajp815@gmail.com>
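A minimal sketch of the Flax scheduler swap this doc describes; the checkpoint id and `bf16` revision are assumptions:

```python
# Sketch: in Flax the scheduler is a stateless module plus a state PyTree, so both
# the scheduler object and its state must be passed along.
import jax.numpy as jnp

from diffusers import FlaxDPMSolverMultistepScheduler, FlaxStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"

scheduler, scheduler_state = FlaxDPMSolverMultistepScheduler.from_pretrained(
    model_id, subfolder="scheduler"
)
pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    model_id, scheduler=scheduler, revision="bf16", dtype=jnp.bfloat16
)
params["scheduler"] = scheduler_state  # keep the scheduler state alongside the model params
```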