diff --git a/docs/source/en/api/cache.md b/docs/source/en/api/cache.md
index c93dcad438..6a2d74892c 100644
--- a/docs/source/en/api/cache.md
+++ b/docs/source/en/api/cache.md
@@ -29,7 +29,7 @@ Cache methods speedup diffusion transformers by storing and reusing intermediate
 
 [[autodoc]] apply_faster_cache
 
-### FirstBlockCacheConfig
+## FirstBlockCacheConfig
 
 [[autodoc]] FirstBlockCacheConfig
 
diff --git a/docs/source/en/optimization/cache.md b/docs/source/en/optimization/cache.md
index 6397c7d4cd..3854ecd469 100644
--- a/docs/source/en/optimization/cache.md
+++ b/docs/source/en/optimization/cache.md
@@ -68,6 +68,21 @@ config = FasterCacheConfig(
 pipeline.transformer.enable_cache(config)
 ```
 
+## FirstBlockCache
+
+[FirstBlockCache](https://huggingface.co/docs/diffusers/main/en/api/cache#diffusers.FirstBlockCacheConfig) checks how much the output of the denoiser's early layers changes from one timestep to the next. If the change is small, the model skips the expensive later layers and reuses the previous output.
+
+```py
+import torch
+from diffusers import DiffusionPipeline
+from diffusers.hooks import apply_first_block_cache, FirstBlockCacheConfig
+
+pipeline = DiffusionPipeline.from_pretrained(
+    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
+)
+apply_first_block_cache(pipeline.transformer, FirstBlockCacheConfig(threshold=0.2))
+```
+
 ## TaylorSeer Cache
 
 [TaylorSeer Cache](https://huggingface.co/papers/2403.06923) accelerates diffusion inference by using Taylor series expansions to approximate and cache intermediate activations across denoising steps. The method predicts future outputs based on past computations, reusing them at specified intervals to reduce redundant calculations.
@@ -87,8 +102,7 @@ from diffusers import FluxPipeline, TaylorSeerCacheConfig
 
 pipe = FluxPipeline.from_pretrained(
     "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16,
-)
-pipe.to("cuda")
+).to("cuda")
 
 config = TaylorSeerCacheConfig(
     cache_interval=5,
@@ -97,4 +111,4 @@ config = TaylorSeerCacheConfig(
     taylor_factors_dtype=torch.bfloat16,
 )
 pipe.transformer.enable_cache(config)
-```
\ No newline at end of file
+```
diff --git a/src/diffusers/models/cache_utils.py b/src/diffusers/models/cache_utils.py
index f4ad1af278..153608bb2b 100644
--- a/src/diffusers/models/cache_utils.py
+++ b/src/diffusers/models/cache_utils.py
@@ -41,9 +41,11 @@ class CacheMixin:
         Enable caching techniques on the model.
 
         Args:
-            config (`Union[PyramidAttentionBroadcastConfig]`):
+            config (`Union[PyramidAttentionBroadcastConfig, FasterCacheConfig, FirstBlockCacheConfig]`):
                 The configuration for applying the caching technique. Currently supported caching techniques are:
                 - [`~hooks.PyramidAttentionBroadcastConfig`]
+                - [`~hooks.FasterCacheConfig`]
+                - [`~hooks.FirstBlockCacheConfig`]
 
         Example:
 
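Because the `cache_utils.py` hunk registers `FirstBlockCacheConfig` with `CacheMixin.enable_cache`, the hook should also be reachable through the model-level `enable_cache` API, mirroring the FasterCache example earlier in that doc. A minimal sketch of this equivalent path, assuming the same checkpoint and threshold as the snippet in the patch:

```py
import torch
from diffusers import DiffusionPipeline
from diffusers.hooks import FirstBlockCacheConfig

pipeline = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
)
# equivalent to apply_first_block_cache(...): CacheMixin routes the config
# to the first-block cache hook on the transformer
pipeline.transformer.enable_cache(FirstBlockCacheConfig(threshold=0.2))
```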
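As a usage note for the TaylorSeer example: after `enable_cache(config)` is called, inference runs unchanged, with cached steps reusing the Taylor-predicted activations. A short continuation sketch; the prompt, step count, and filename below are illustrative and not part of the patch:

```py
# continues the Flux example above: `pipe` already has the TaylorSeer cache enabled
image = pipe(
    "a photo of an astronaut riding a horse on the moon",
    num_inference_steps=50,
).images[0]
image.save("flux_taylorseer.png")
```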