Kandinsky 2.1
Kandinsky 2.1 was created by Arseniy Shakhmatov, Anton Razzhigaev, Aleksandr Nikolich, Vladimir Arkhipkin, Igor Pavlov, Andrey Kuznetsov, and Denis Dimitrov.
The description from its GitHub page is:
Kandinsky 2.1 inherits best practices from DALL-E 2 and latent diffusion while introducing some new ideas. It uses the CLIP model as a text and image encoder, and a diffusion image prior (mapping) between the latent spaces of CLIP modalities. This approach increases the visual performance of the model and unveils new horizons in blending images and text-guided image manipulation.
The original codebase can be found at ai-forever/Kandinsky-2.
Tip
Check out the Kandinsky Community organization on the Hub for the official model checkpoints for tasks like text-to-image, image-to-image, and inpainting.
Tip
Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
KandinskyPriorPipeline
autodoc KandinskyPriorPipeline - all - call - interpolate
KandinskyPipeline
autodoc KandinskyPipeline - all - call
KandinskyCombinedPipeline
autodoc KandinskyCombinedPipeline - all - call
KandinskyImg2ImgPipeline
autodoc KandinskyImg2ImgPipeline - all - call
KandinskyImg2ImgCombinedPipeline
autodoc KandinskyImg2ImgCombinedPipeline - all - call
KandinskyInpaintPipeline
autodoc KandinskyInpaintPipeline - all - call
KandinskyInpaintCombinedPipeline
autodoc KandinskyInpaintCombinedPipeline - all - call